Tau versus Pi - Mathematics

It has been proposed that Pi should be replaced with Tau for mathematical purposes. Tau has twice the value of Pi. The reasoning behind this change seems to be that it would simplify many formulae, since "2*Pi" is much more common in formulae than Pi on its own. This certainly seems to be the case in electronics, and it seems some eminent scholars are convinced. What do you think?
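Since Python 3.6 the math module ships both constants, so the relationship is easy to check numerically (a quick illustrative sketch, not from the original post):

```python
import math

# tau is defined as the full-turn constant: tau = 2*pi
def circumference(r):
    return math.tau * r       # C = tau * r

def circumference_2pi(r):
    return 2 * math.pi * r    # C = 2*pi*r, the familiar form

# The two formulae agree exactly; tau just absorbs the factor of 2.
same = circumference(3.0) == circumference_2pi(3.0)
```

Whether that absorbed factor of 2 is worth changing centuries of convention is exactly the debate above.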
{"url":"http://www.scienceforums.net/topic/58135-tau-versus-pi/","timestamp":"2014-04-20T21:44:57Z","content_type":null,"content_length":"127901","record_id":"<urn:uuid:0c467103-ca45-4a81-91a7-74af82015f84>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Special Notation

$d,k,m,n$: positive integers (unless otherwise indicated).
$d\mid n$: $d$ divides $n$.
$(m,n)$: greatest common divisor of $m,n$. If $(m,n)=1$, $m$ and $n$ are called relatively prime, or coprime.
$(d_{1},\dots,d_{n})$: greatest common divisor of $d_{1},\dots,d_{n}$.
$\sum_{d\mid n}$, $\prod_{d\mid n}$: sum, product taken over divisors of $n$.
$\sum_{(m,n)=1}$: sum taken over $m$, $1\leq m\leq n$ and $m$ relatively prime to $n$.
$p,p_{1},p_{2},\dots$: prime numbers (or primes): integers ($>1$) with only two positive integer divisors, $1$ and the number itself.
$\sum_{p}$, $\prod_{p}$: sum, product extended over all primes.
$x,y$: real numbers.
$\sum_{n\leq x}$: $\sum_{n=1}^{\lfloor x\rfloor}$.
$\log x$: natural logarithm of $x$, written as $\ln x$ in other chapters.
$\zeta(s)$: Riemann zeta function; see §25.2(i).
$(n|P)$: Jacobi symbol; see §27.9.
$(n|p)$: Legendre symbol; see §27.9.
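A couple of these notations map directly onto code; a small sketch (the helper names below are mine, for illustration only, not DLMF's):

```python
from math import gcd

def divisors(n):
    """All positive d with d | n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def coprimes(n):
    """All m with 1 <= m <= n and (m, n) = 1."""
    return [m for m in range(1, n + 1) if gcd(m, n) == 1]

# Sum over d | n: e.g. sigma(12) = 1 + 2 + 3 + 4 + 6 + 12 = 28
sigma_12 = sum(divisors(12))
# Count over (m, 12) = 1: Euler's totient, phi(12) = |{1, 5, 7, 11}| = 4
phi_12 = len(coprimes(12))
```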
{"url":"http://dlmf.nist.gov/27.1","timestamp":"2014-04-19T19:59:20Z","content_type":null,"content_length":"24101","record_id":"<urn:uuid:7aae57dc-3bcb-4c42-acd4-f14e97f47449>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
Two months ago, we talked about something called Win Probability, and we followed up with a couple of Win Probability Added (WPA) bullpen articles, Team Bullpens and Ranking the Relievers of 2004. WPA and bullpens are a match made in baseball statistics heaven, because WPA can tell you much more about relievers than the current bag of statistics can.

There’s a related statistic that I didn’t introduce at the time, though you may be familiar with it. It was developed by Doug Drinen in the Big Bad Baseball Annuals of the late 1990s, and it’s called “P”. Right, just one letter. P. A whole lot easier to remember than the standard three-letter acronym, don’t you think? Before we move on, feel free to insert your own juvenile P joke here (“spell ‘pig’ backwards and then say ‘funny’”). Okay. Back to the article.

P is the measure of how important a situation is, based on its potential impact on Win Probability. The higher the P, the more critical the situation. Bases loaded, bottom of the ninth, visiting team up by one run? Very high P. If you’re the visiting team, you want your best pitcher on the mound. Conceptually, P is very similar to Tangotiger’s Leveraged Index, but the math is different. To calculate P, you simply take the difference between the current Win Probability and what the Win Probability will be if the pitcher retires the side with no more runs scoring. For example, a home team’s Win Probability, with the bases loaded and none out in the bottom of the ninth of a tie game, is .936 (by my calculations). If the visiting team pitcher miraculously retires the next three batters without allowing a run, the Win Probability decreases to .500. So the P is .436 (.936 – .500). As you can imagine, that’s a very high P.

When a manager brings a new reliever into the game, you can learn a lot by calculating the P of that situation. Particularly, who does the manager turn to in high-P situations?
Does he turn to his best reliever or the best “matchup?” Does he waste good relievers in low-P situations? Does he even understand how critical the situation is? I’ve been able to calculate the P of every relief appearance from 2002 through 2004, so we can start to answer these questions. Here is a list of the relievers who were brought into a game most often when the P was 0.20 or higher, along with the WPA that resulted from those appearances and the number of saves or holds the pitcher subsequently received:

NAME              TEAM         App.  WPA     Saves  Holds  Avg. P
Marte, Damaso     CWS          18    2.531   5      4      0.291
Myers, Mike       ARI/BOS/SEA  17    -0.344  1      7      0.303
Ryan, B.J.        BAL          16    0.476   0      4      0.294
Rincon, Ricardo   CLE/OAK      15    -0.602  0      7      0.272
Romero, J.C.      MIN          14    0.908   0      7      0.283
Grimsley, Jason   BAL/KC       14    -1.055  0      1      0.271
Cormier, Rheal    PHI          14    -0.081  0      4      0.308
Bradford, Chad    OAK          13    0.849   0      6      0.247
Quantrill, Paul   LAD/NYY      12    1.613   0      7      0.248
Rhodes, Arthur    OAK/SEA      12    -1.200  0      5      0.247
Stanton, Mike     NYM/NYY      12    0.180   1      3      0.264
Groom, Buddy      BAL          12    1.898   2      2      0.345

The White Sox’s Marte was brought into more difficult situations than any other reliever during this time period, and he did his job extremely well (WPA of 2.531), particularly in 2002 and 2003. Buddy Groom had the highest average P among all pitchers on this list, and he also performed very well (WPA of 1.898). Overall, this list represents 169 appearances, 9 saves and 57 holds, which tells you something about the relative importance of saves and holds. And the list of pitchers is very revealing, because it primarily consists of middle relief and situational pitchers, including a couple of LOOGYs (Lefthanded One Out GuYs), such as Mike Myers and Buddy Groom. There are no pure closers here. Now, remember that P is based on the situation when the pitcher first enters the game. Therefore, P is going to be highest when a pitcher is brought into a game with men already on base.
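The arithmetic behind P is simple enough to sketch directly; the .936 win probability below is the author's own figure for the bases-loaded example, and the function name is mine:

```python
def p_value(wp_current, wp_if_side_retired):
    """Drinen's P: how much Win Probability is at stake in the situation."""
    return wp_current - wp_if_side_retired

# Home team, bases loaded, none out, bottom of the ninth, tie game:
# WP is .936; a scoreless half-inning would drop it to .500.
p = p_value(0.936, 0.500)   # a very high P
```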
Pure closers, such as Gagne, Smoltz and Rivera, are typically used in what Bill James called the “Robb Nen” pattern: beginning of the ninth inning, none on, none out. So, by definition, today’s relief aces (the guys with the most saves) are not likely to be brought into high-P situations. I’d like to talk about closers more specifically. First, here is a list of all pitchers with at least twenty saves last year, ranked by average P in 2004:

Name                 Team     App  WPA     Saves  P Value
Hoffman, Trevor      SDP      55   2.619   41     0.101
Cordero, Francisco   TEX      67   3.477   49     0.092
Kolb, Dan            MIL      64   1.288   39     0.091
Nathan, Joe          MIN      73   5.390   44     0.090
Herges, Matt         SFG      70   -2.262  23     0.089
Gagne, Eric          LAD      70   5.977   45     0.088
Benitez, Armando     FLO      64   4.644   47     0.086
Rivera, Mariano      NYY      74   4.916   53     0.086
Percival, Troy       ANA      52   0.743   33     0.084
Lidge, Brad          HOU/OAK  80   7.292   29     0.084
Looper, Braden       NYM      71   2.465   29     0.082
Graves, Danny        CIN      68   -0.278  41     0.081
Dotel, Octavio       HOU/OAK  77   1.077   36     0.081
Smoltz, John         ATL      73   5.240   44     0.080
Mesa, Jose           PIT      70   1.964   43     0.080
Hawkins, LaTroy      CHC      77   1.723   25     0.076
Wagner, Billy        PHI      45   2.316   21     0.076
Isringhausen, Jason  STL      74   3.738   47     0.074
Julio, Jorge         BAL      65   0.926   22     0.069
Urbina, Ugueth       DET      54   0.250   21     0.069
Chacon, Shawn        COL      66   -3.664  35     0.069
Foulke, Keith        BOS      72   3.047   32     0.067
Baez, Danys          TBD      62   1.160   30     0.060

For comparison, the average P for all reliever appearances in 2004 was 0.059, and it has remained fairly stable over the past three years. Trevor Hoffman had the highest P among all closers last year, primarily because almost half of his appearances occurred during a one- or two-run lead. Conversely, only 20% of Danys Baez’s appearances occurred with one- or two-run leads, and his overall P was about average, despite his thirty saves. If you only want to use your closer at the top of the ninth inning, the score differential (a one- vs. two-run lead, for example) is obviously key to getting the most value from him.
Let’s use P to identify the most important score differentials. Here is a graph of the P value at the beginning of the ninth inning, by score differential:

As you can see, tie games and one-run leads are by far the most important in the ninth inning (and extra innings, too). This makes a lot of sense. If you pitch a scoreless ninth with a one-run lead, you’ve just finished the game for your team. And if you do so in a tie game, you’ve given your team a chance to win it all in the bottom of the inning. So do teams deploy their closers most often in the most critical situations? Well, let’s add a line to the graph that shows the percent of the time each team’s “closer” was used in each situation (closer being defined as staff leader in saves). Ideally, the line should follow the same outline as the bars in the graph. Let’s see if it does:

It doesn’t. A closer is two-and-a-half times more likely to be brought into the ninth with a three-run lead (75% of the time) than with the score tied (30% of the time). Excuse my bold formatting, but this makes no sense at all! Three-run leads are gimme situations; fans are heading to the exits. On the other hand, tie games in the ninth are the epitome of crucial situations. Yet most managers would rather use their closer with a three-run lead. What gives? As Steve Treder documented so well last year, the save statistic has warped the way we think of critical situations. Instead of using closers in the most critical situations, managers use their closers in order to maximize saves rather than team wins. But a save is just a statistic, not a strategic metric. WPA is both.

There is a lot to question about relievers in this day and age. Do closers really only have to pitch at the beginning of the ninth inning? Should managers really pitch to matchups as often as they do? Is the overuse of relievers leading to more, rather than fewer, injuries?
But I have one simple question: if managers really want to hold their closers back until the ninth inning, why aren’t they at least consistently using them in the most important situations?

References & Resources
In a couple of days, the Hardball Times Bullpen Book will be on sale. This book will include over 80 pages of WPA statistics on all relievers between the years 2002 and 2004, including P and WPA for each reliever each year. Watch for it! My deepest thanks go to Doug Drinen, for giving me the insight and permission to use his creation. Here’s a list of teams that used their closers in tie games in the ninth at least 50% of the time:
- Braves/Smoltz: 7 of 9 opportunities
- Red Sox/Foulke: 3 of 5
- Tigers/UUU: 9 of 15
- Mets/Looper: 8 of 14
- Yankees/Rivera: 2 of 4
By the way, I do believe that Tangotiger’s “Leveraged Index” is a better metric than P. However, Tango has not yet released Leveraged Indices for the past three years, and so I present P as a public service (and a way to address a pet issue of mine). Tango also pointed out to me that this subject was well covered by Baseball Prospectus nearly five years ago. I’m not sure why the baseball world hasn’t made more progress since then, but maybe my graphs can help a little bit.
{"url":"http://www.hardballtimes.com/closer/","timestamp":"2014-04-18T11:00:27Z","content_type":null,"content_length":"51406","record_id":"<urn:uuid:fb19d247-baad-4223-899e-be02d8dc3db0>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
Dear Reddit, if you HAD to choose a gif to be trapped in for the rest of time, which gif would it be? (self.AskReddit) submitted by drinkingfor2

Like the title says, if you had to choose a gif to be stuck in for the rest of time (or maybe just a really long period of time) which gif would you be in? Alright, guys and gals, post your gifs! You could be a person in the gif or just replace them or you could just be there watching it. Feel free to post more than one, there are no rules here.

Edit: These are some amazing gifs but I'm really sad I don't get any of this karma :(
{"url":"http://www.reddit.com/r/AskReddit/comments/z8mqf/dear_reddit_if_you_had_to_choose_a_gif_to_be/","timestamp":"2014-04-21T11:24:35Z","content_type":null,"content_length":"693129","record_id":"<urn:uuid:d1c48a56-be1a-4a35-a4e3-e8d1406d8922>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Input: is the tracking result of the prior frame, is the set of candidate samples of credible center-point positions in the next frame, is the over-complete dictionary, is the number of frames.
(1) for
(2) SRGC: calculate =
(3) calculate identity; where ,
(4) obtain the center point from
(5) obtain the by and affine transformation of
(6) SRLC to ; calculate ;
(7) calculate identity, where
{"url":"http://www.hindawi.com/journals/aaa/2013/323072/alg1/","timestamp":"2014-04-21T08:50:04Z","content_type":null,"content_length":"93700","record_id":"<urn:uuid:cb5c0406-6d96-45ae-942f-b753ed25e777>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
[Lapack] BUG IN DBDSQR - LAPACK Archives

From: Kinji Kimura
Date: Wed, 02 May 2012 15:44:06 +0900

Dear Rodney James,

Thank you for your kindness. I added "-fdefault-integer-8" to the gfortran compiler options, and I could solve the problem. Anyway, I was surprised by 6n^2, because DLASQ1 has the following line:

NBIG = 100*( N0-I0+1 )

Compared with DLASQ1, DBDSQR is allowed a large iteration count.

Thank you,
Kinji Kimura

(2012/05/02 0:05), James, Rodney wrote:

Dear Kinji Kimura,

The problem you are seeing is due to the iteration counter exceeding the capacity of the default 32-bit Fortran integer type. The maximum iteration count is set internally to 6n^2, so when 6n^2 > 2^31 (which happens for n > 18918) the max iteration count is negative, and the routine exits with info > 0 due to failed convergence. One way to address this problem is to recompile the LAPACK and BLAS libraries to use a 64-bit default integer type, which can usually be done with a compiler option. Another possibility is to change line 415 in dbdsqr.f,

MAXIT = MAXITR*N*N

so that MAXIT is set to an integer value less than 2^31.

Best regards,
Rodney James
University of Colorado Denver

On Apr 30, 2012, at 6:53 PM, Kinji Kimura wrote:

Dear LAPACK team,

I made the program which is attached to this e-mail. The result is the following:

./test 10000
./test 30000
./test 50000
./test 100000
./test 300000
./test 500000
./test 1000000
./test 3000000

Therefore, DBDSQR has a bug.

Kinji Kimura
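The wraparound Rodney James describes can be reproduced outside Fortran; a sketch modeling the default 32-bit INTEGER (the n > 18918 threshold comes from the email, and the helper is mine):

```python
def as_int32(x):
    """Wrap a Python integer to a signed 32-bit value, two's-complement style,
    as Fortran's default INTEGER would."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

# MAXIT = MAXITR*N*N, with MAXITR = 6 per the 6n^2 figure in the email:
maxit_ok  = as_int32(6 * 18918 * 18918)   # still a positive iteration limit
maxit_bad = as_int32(6 * 18919 * 18919)   # wraps negative: instant "failed convergence"
```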
{"url":"http://icl.cs.utk.edu/lapack-forum/archives/lapack/msg01294.html","timestamp":"2014-04-16T16:38:42Z","content_type":null,"content_length":"7497","record_id":"<urn:uuid:61283f17-da07-43b1-8622-2b58fd812e01>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
Comments on "On Bicycles, and.... what else is there?: Calculating Net Climbing and Descending"

I'm not sure that sum of delta z approximation is any more challenging or different than sum of delta x and/or y. The whole thing is indeed muddled since the path our body travels through space is not the same as the bike's, the path is not the straight line we idealize because bike motion is more like a series of jiggly little balance corrections, and the overall function from x1,y1,z1 to xn,yn,zn is not continuous but usually punctuated with a series of stops and/or dismounts. When you add those to the notion that is amplified by your post, that people conceive of our distance as travel across an x-y plane on this bumpy planet, saying how far you went with what sum of delta z is almost never exactly what we're thinking anyway.

John Romeo Alpha
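For what it's worth, the "sum of delta z" under discussion is just the positive elevation increments between samples; a minimal sketch with an invented profile (the numbers are mine, not from the post):

```python
def total_climbing(z):
    """Sum of positive elevation changes between consecutive samples."""
    return sum(max(b - a, 0) for a, b in zip(z, z[1:]))

def total_descending(z):
    """Sum of elevation lost between consecutive samples."""
    return sum(max(a - b, 0) for a, b in zip(z, z[1:]))

profile = [100, 120, 110, 150, 140]   # meters; a made-up ride profile
```

Note that climbing minus descending always equals the net elevation change, whatever the sampling noise does to the two sums individually.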
{"url":"http://djconnel.blogspot.com/feeds/8686903226731170389/comments/default","timestamp":"2014-04-16T07:14:10Z","content_type":null,"content_length":"4397","record_id":"<urn:uuid:5769075d-c459-47f3-9124-778e44b376e6>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
Plotting convergent series
June 29th 2010, 07:22 AM #1
Jun 2010

Plotting convergent series

I am working with convergent series, and I need to plot each calculation of a loop against the number of the loop. Rather than post my code in here, I thought I'd post a more basic example, and then I can rearrange any solution to fit my code later on. Imagine that the loop was calculating numbers in the Fibonacci sequence, and then dividing each number by its previous one to converge on Phi. The following function would display each outcome as it got closer and closer to Phi (1.618):

clear all;
loops = input('Loops: ');
k = 0;
while (k < loops)
    % ... compute the ratio r for this iteration ...
    k = k + 1;
end

How would I plot this, so that if the user entered 15 loops, the x-axis would have intervals of 1 and the y-values would be the outcome of each loop for 'r'? Sorry, I should have put it in the title: I am using MATLAB.
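The thread cuts off before an answer appears. In MATLAB the usual move is to store each ratio in a vector r and call plot(1:loops, r). The same idea in Python (the matplotlib call is commented out so the sketch runs headless; all names are mine):

```python
def fib_ratios(loops):
    """Ratio of each Fibonacci number to its predecessor; converges on Phi."""
    a, b = 1, 1
    ratios = []
    for _ in range(loops):
        a, b = b, a + b
        ratios.append(b / a)
    return ratios

r = fib_ratios(15)
# import matplotlib.pyplot as plt
# plt.plot(range(1, len(r) + 1), r, 'o-')   # x-axis at intervals of 1
# plt.show()
```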
{"url":"http://mathhelpforum.com/math-software/149690-plotting-convergent-series.html","timestamp":"2014-04-17T05:03:11Z","content_type":null,"content_length":"31895","record_id":"<urn:uuid:b3bbabb3-f64b-4f02-8fb8-b8232e0572e4>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Select the system of equations that corresponds to the given graph.

A. -2x - y = 4
   2x + y = -2
B. -8x + 4y = -16
   -4x + 2y = -8
C. x - 2y = 6
   -2x + y = 4
D. x + 3y = 9
   3x + 2y = 4

Reply: Hey, sorry I forgot about your other post. If you can type up the formulae in y=mx+c, I'll watch this now.
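Without seeing the graph the intended answer can't be recovered here, but each option can at least be classified mechanically; a sketch (the helper names are mine, not from the thread):

```python
def classify(eq1, eq2):
    """Classify the system a*x + b*y = c given two (a, b, c) triples."""
    (a1, b1, c1), (a2, b2, c2) = eq1, eq2
    if a1 * b2 - a2 * b1 != 0:
        return "one solution"      # distinct slopes: lines cross once
    if a1 * c2 == a2 * c1 and b1 * c2 == b2 * c1:
        return "same line"         # one equation is a multiple of the other
    return "parallel lines"        # same slope, different intercepts

systems = {
    "A": ((-2, -1, 4), (2, 1, -2)),
    "B": ((-8, 4, -16), (-4, 2, -8)),
    "C": ((1, -2, 6), (-2, 1, 4)),
    "D": ((1, 3, 9), (3, 2, 4)),
}
results = {k: classify(*v) for k, v in systems.items()}
```

This already narrows the graph-matching: option A draws two parallel lines, option B collapses to a single line, and C and D each draw two lines crossing at one point.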
{"url":"http://openstudy.com/updates/501ef713e4b0be43870f31e2","timestamp":"2014-04-21T12:26:48Z","content_type":null,"content_length":"33525","record_id":"<urn:uuid:37e6beef-88ad-4e67-bbac-2ef718d607b4>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: January 2005

Re: FullSimplify on Hyperbolic Functions

- To: mathgroup at smc.vnet.net
- Subject: [mg53389] Re: [mg53383] FullSimplify on Hyperbolic Functions
- From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
- Date: Sun, 9 Jan 2005 23:03:40 -0500 (EST)
- References: <200501090402.XAA12451@smc.vnet.net>
- Sender: owner-wri-mathgroup at wolfram.com

The reasons why some of these do not work have already been explained before: basically, FullSimplify, when using certain transformation functions, cannot use assumptions during the intermediate steps, and certain transformation functions only "proceed in one direction". In particular this means that you often need to use ComplexExpand or TrigToExp explicitly with FullSimplify. This is one of such cases:

r1 = ArcCosh[1 + b^2/2];
r2 = 2*ArcSinh[b/2];
FullSimplify[TrigToExp[r1 - r2], b > 0]

However, this is more tricky:

FullSimplify[TrigToExp[r1^2 - r2^2], b > 0]

-4*Log[(1/2)*(b + Sqrt[4 + b^2])]^2 + Log[(1/2)*(2 + b*(b + Sqrt[4 + b^2]))]^2

The problem seems to be again that FullSimplify does not try certain transformations that go in the "opposite direction" to what it normally does, but without which it can't see the cancellations. You can make it see them by forcing expansion of terms. This is actually an interesting case because it shows a use of FunctionExpand with assumptions in cases where FullSimplify with assumptions is alone not sufficient. Also note that ComplexExpand performs better than TrigToExp here. Of course if you make the expression even more complicated it will get harder and harder for Mathematica to reduce it, and at some point it will become completely impossible. This isn't really surprising, is it?

Andrzej Kozlowski

On 9 Jan 2005, at 05:02, carlos at colorado.edu wrote:

> Obviously ArcCosh[1+b^2/2]=2 ArcSinh[b/2] if b>=0.
> Here are 4 variations on trying to show it by FullSimplify:
>
> ClearAll[b]; r1=ArcCosh[1+b^2/2]; r2=2*ArcSinh[b/2];
> Print[FullSimplify[r1-r2,b>=0]//InputForm];
> ArcCosh[1 + b^2/2] - 2*ArcSinh[b/2]
> Print[FullSimplify[TrigToExp[r1-r2],b>=0]//InputForm];
> 0
> Print[FullSimplify[r1^2-r2^2,b>=0]//InputForm];
> ArcCosh[1 + b^2/2]^2 - 4*ArcSinh[b/2]^2
> Print[FullSimplify[TrigToExp[r1^2-r2^2],b>=0]//InputForm];
> -4*Log[(b + Sqrt[4 + b^2])/2]^2 + Log[(2 + b*(b + Sqrt[4 + b^2]))/2]^2
>
> Why does only the second one work?
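The identity the thread wrestles with is easy to confirm numerically outside Mathematica; it follows from cosh(2t) = 1 + 2 sinh^2(t) with t = asinh(b/2) (a quick check, not part of the original exchange):

```python
import math

def lhs(b):
    return math.acosh(1 + b * b / 2)

def rhs(b):
    return 2 * math.asinh(b / 2)

# With sinh(t) = b/2: cosh(2t) = 1 + 2*sinh(t)**2 = 1 + b**2/2,
# so acosh(1 + b**2/2) = 2*asinh(b/2) for b >= 0.
checks = [abs(lhs(b) - rhs(b)) for b in (0.5, 1.0, 2.0, 10.0)]
```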
{"url":"http://forums.wolfram.com/mathgroup/archive/2005/Jan/msg00174.html","timestamp":"2014-04-19T09:41:24Z","content_type":null,"content_length":"36757","record_id":"<urn:uuid:419d6b12-eecf-4328-a802-a3ee94bdd3ec>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
A profile of the Computational Science Program Take a look at what past grads from Witt have done with their majors – internships, grad school programs, and first jobs after graduation. It’s not always as cut and dried as you think; a Liberal Arts degree has a lot of flexibility! What Past Witt Computational Science Students Have Done Why Computational Science? Computational science is the field of study that integrates natural science, computer science, and applied mathematics. The problems addressed by computational science typically come from one of the natural sciences. The models developed to describe these problems and the methods used to solve them are often (although not always) mathematical in nature. The implementation of the algorithmic methods requires computer science knowledge for accurate, efficient, and reliable results. The spectrum of scientific problems ranges from models that can be solved on a calculator, through models that require symbolic, numeric, and visualization software, to simulation and optimization models that are so complex that a supercomputer is necessary to solve them. Computational science has now taken up a position along with the traditional use of scientific theory and experimentation as a third paradigm of scientific methodology. This is largely due to the successful employment of the computer in research and development to model complex physical events, to process large quantities of data, and to provide important insights. Because of the increasing importance of computational science, some of these skills must be provided to the undergraduate science student as well as to the graduate student. Computational science courses clearly strengthen a science program. In addition, within a broad-based liberal arts setting, it is also important to provide a balance that includes communication skills and an appropriate combination of ethics and philosophy of science. 
The methods of Computational Science (COSC) have been applied to problems such as aeronautical design, environmental improvement, neuroscience, pharmaceutical design, and weather forecasting. More recently, high performance computation, traditionally used in physics and chemistry, has been applied to biology, geology, environmental studies, and some of the social sciences. The development of COSC as an interdisciplinary field has had a profound effect on the way that basic and applied research in science, engineering, and industry are conducted. Less than ten years ago, these methods required very expensive supercomputers and specialized parallel programming techniques to be effective. Today, a large percentage of these applications can be done on personal computers, workstations, and parallel computing clusters. Using such equipment, the Computational Science program facilitates an in-depth study of computational techniques and modeling approaches as they are applied to the sciences. The program is beneficial to students from any discipline that involves empirical approaches to gain an understanding of the world. This is especially true for students pursuing undergraduate research, including those students intending to pursue graduate studies in such disciplines. 
Degrees Offered
Minor: Computational Science

Goals of Computational Science Within a Liberal Arts Environment:
• Ability to understand and do science
• Skill to develop and use valid scientific models
• Knowledge of the strengths and limitations of the methodologies used
• Proficiency to express ideas orally and in writing
• Desire to use ethical practices for the benefit of society

Requirements for Minor in Computational Science
Nineteen to 26 semester hours are required for the Computational Science minor, in accordance with the following: Computer Science 150 (Introduction to Programming) or equivalent; either Mathematics 201 (Calculus I) or Mathematics 131 (Essentials of Calculus); Mathematics/Computer Science 260 (Computational Models and Methods); at least 8 semester hours in elective coursework from courses (listed below) containing a significant integrated COSC component; and a capstone project from a separate activity (0-4 semester hours), which substantially involves computational modeling and analysis and results in a formal product such as a written report and/or professional presentation. In addition, COSC minors are required to have a laboratory experience in two courses that meet the Natural World goal (the General Education program requires only one).

Required Courses (14-18 semester hours)
1. Computer Science 150Q. Introduction to Programming. 5 semester hours. Prerequisites: Level 22 placement on the Mathematics Placement Exam.
2. One of the following courses:
Mathematics 201Q. Calculus I. 4 semester hours. Prerequisite: MATH 120 or level 25 placement on the Mathematics Placement Exam.
Mathematics 131Q. Essentials of Calculus I. 4 semester hours. Prerequisite: MATH 120 or level 25 placement on the Mathematics Placement Exam.
3. Mathematics/Computer Science 260. Computational Models and Methods. 5 semester hours. Prerequisites: Either Mathematics 201 or 131, and either Computer Science 150 or permission of the instructor.
4.
Capstone Experience (0-4 semester hours)
In the Capstone Experience, students must demonstrate that they can apply the knowledge from the required and elective coursework in a substantial project within a given discipline. This must involve a significant and integrated computational focus throughout the project. The project must be equivalent to a credit-bearing activity of at least 4 semester hours, typically in the student’s major, though it may not simply be a project completed for the required or elective coursework for the major. For students in any major field, the capstone project could take the form of a required senior thesis, a departmental honors project, a project related to one of Wittenberg’s summer programs, a project from an internship, an independent study in the major, a directed student research project, etc. Regardless of the form, the project must result in a formal product such as a professional presentation or report. Before beginning the capstone project, the student must submit a project proposal for approval to both the Director of the Computational Science Minor and the Chairperson of the participating department. This proposal will specify the name of a faculty member to supervise the project, will detail how computational models and computational methods will be used in the project, and will describe the plans for the formal presentation of the work. A formal presentation, either written, oral, or both, will be evaluated by the Director, Chairperson, and supervising faculty member.

Elective Courses (8 semester hours, with at least two of the following)
Biology
• 316. Molecular Genetics and Bioinformatics. 5 semester hours. Prerequisites: Biology 170 and 180 and Chemistry 121 and 162.
• 341. Limnology. 5 semester hours. Prerequisites: Chemistry 121 and 162.
• 342. Stream Ecology. 5 semester hours. Prerequisites: Chemistry 121, 162, and Biology 341.
• 346. Ecology. 5 semester hours.
Prerequisites: One group 2, 3 or 4 Biology course and Math Placement 22.
• 347. Evolution. 4 semester hours. Prerequisites: Two Biology courses in addition to 170 and 180.
Chemistry
• 311. Physical Chemistry I. 5 semester hours. Prerequisites: Chemistry 281, Mathematics 202 and Physics 218.
• 321. Inorganic Chemistry. 5 semester hours. Prerequisites: Chemistry 281, Mathematics 202, and Physics 218.
• 352. Physical Chemistry II. 5 semester hours. Prerequisites: Chemistry 311.
• 372. Biochemistry II. 5 semester hours. Prerequisites: Chemistry 271, Mathematics 201 and Physics 200.
Computer Science
• Computer Science/Mathematics 320. Numerical Analysis. 4 semester hours. Prerequisites: Mathematics 202, Mathematics 205, Computer Science 150.
• Computer Science 350. Artificial Intelligence. 4 semester hours. Prerequisites: Mathematics 171 and 205, Computer Science 250.
• Computer Science 370. Computer Graphics. 4 semester hours. Prerequisites: Computer Science 275.
• Computer Science/Mathematics 380. Optimization. Prerequisites: Computer Science 150, Mathematics 201, Mathematics 205.
Economics
• 300. Econometrics. 4 semester hours. Prerequisites: Economics 190, Management 210 or its equivalent.
• 370. Mathematics for Economists. 4 semester hours. Prerequisites: Economics 310, Mathematics 201 or Mathematics 131.
Geology
• 220. Environmental Geology. 5 semester hours. Prerequisites: Geology 150 or 110 and a score of 22 on the Math Placement Exam.
• 240. Process Geomorphology. 5 semester hours. Prerequisites: Geology 150, Geology 210 or permission of instructor.
• 400. Sedimentology. 5 semester hours. Prerequisites: Geology 210, 300.
Mathematics
• Mathematics 205. Applied Matrix Algebra. 4 semester hours. Prerequisite: Mathematics 201.
• Mathematics 215. Differential Equations. 4 semester hours. Prerequisite: Mathematics 202.
• Mathematics 227. Data Analysis. 4 semester hours. Prerequisite: a score of 25 on the Math Placement Exam.
Physics
• 311. Classical Mechanics. 4 semester hours. Prerequisite: Physics 220.
• 320. Computational Physics. 2 semester hours. Prerequisites: Physics 220, Mathematics 202, Computer Science 150.
• 321. Signal Processing. 2 semester hours. Prerequisites: Physics 218, Mathematics 202.
• 332. Electromagnetism. 4 semester hours. Prerequisites: Physics 311, Mathematics 212.
• 410. Mathematical Physics. 4 semester hours. Prerequisites: Physics 311, Mathematics 212, Mathematics 215.
• 411. Quantum Mechanics. 4 semester hours. Prerequisite: Physics 311.
{"url":"http://www5.wittenberg.edu/print/administration/careers/majors/depts/computationalsci.html","timestamp":"2014-04-21T04:46:12Z","content_type":null,"content_length":"15426","record_id":"<urn:uuid:53c967f7-c14e-4211-adfc-1eb876a000d2>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
Using Mathematics to Understand the Brain and Describing the Brain to Understand Mathematics Mathematics is a powerful tool for solving problems in the world around us. Using very abstract models we are able to describe and predict the sometimes very complex behaviors of people, markets, diseases and physical objects. It is often difficult for students of mathematics to grasp some of these abstract concepts without concrete examples. In particular, it can be a challenge to motivate students without showing some relevance to their own lives. I would like to capitalize on the students' natural curiosity about their own brains to motivate them to learn mathematics. As a teacher of mathematics at Metropolitan Business Academy (MBA) in New Haven I have used real examples from the seminar. They include comparing the reaction time of a giraffe and a mouse. What is the relation between the number of neurons and brain diameter? How much louder is a jet taking off than a vacuum cleaner? Why do some musical notes sound pleasant while others do not? Relevant mathematical models, and their representations, will be used in answering these questions. In addition to business careers, many students at MBA will pursue careers in the health professions. Additionally, everyone is curious how the brain works, especially as it pertains to them. The students will use examples taken from the seminar to create mathematical models and then represent the data and their models. These relevant examples will help motivate my students to understand this very abstract subject matter. The New Haven curriculum for Algebra II and Precalculus includes units on the family of functions. Consequently the curriculum unit developed here could be adopted for either course, although this unit will be devoted to the Precalculus curriculum in order to include logarithm and periodic functions. 
Students coming into Algebra II, in particular, as well as Precalculus often do not have a grasp of the properties of various functions and the type of change modeled by each. Furthermore, given a mathematical model, or equation, students frequently have difficulty displaying the data and the model for precise, effective, quick analysis. This unit will develop the students' ability to select an appropriate function in order to best represent the data. The unit will reinforce the notion of data, mathematical models, and graphs as representations of change. The uniquely mathematical perspective of change is the rate of change. In mathematical terms rate of change may be depicted as slope in a graph. By working with data and models, and then making visual representations of that change, it is intended that students will have a better understanding of this concept. Additional skills in rates, ratios and proportions will also be bolstered in depicting and analyzing the data. Each member in the family of functions will be developed using data from the seminar. The functions included will be linear functions, power functions, quadratic functions, polynomial functions, exponential functions, logarithm functions and periodic functions. The linear function in slope-intercept form is y = mx + b, where m represents slope and b is the y-intercept. This function will be used to compare the time it takes for nerve conduction to travel from the foot to the brain of the giraffe and the mouse. The power function, in function notation, takes the form f(x) = x^a, where a is a constant real number. This function will be used to show the relationship between total number of neurons and brain diameter. The quadratic function, a specific type of polynomial function, might also model the same sort of change as above, but takes the form f(x) = ax^2 + bx + c, where a, b, c are constant real numbers.
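As a numerical companion to the first two models, here is a brief Python sketch. It uses the idealized figures that appear later in this unit (a conduction velocity of 100 m/s and an average neuron diameter of 50 microns); the function names are mine, not part of the curriculum materials:

```python
import math

# Linear model: round-trip nerve-conduction time (foot -> brain -> foot)
# at the unit's idealized velocity of 100 meters per second.
def reaction_time(distance_m, velocity=100.0):
    return 2 * distance_m / velocity

print(reaction_time(5.0))   # giraffe, ~5 m from toes to brain
print(reaction_time(0.1))   # mouse, ~0.1 m

# Power model: treating each neuron as a 50-micron sphere, total brain
# volume grows linearly in neuron count, so brain diameter grows as the
# cube root of the count -- a power function with exponent 1/3.
def brain_diameter_m(neurons, neuron_diameter_m=50e-6):
    neuron_volume = (4 / 3) * math.pi * (neuron_diameter_m / 2) ** 3
    total_volume = neurons * neuron_volume
    return 2 * (3 * total_volume / (4 * math.pi)) ** (1 / 3)

print(brain_diameter_m(100_000_000_000))  # ~100 billion neurons
</parameter>```

Note how the diameter for one hundred billion neurons comes out on the order of tens of centimeters: the idealization overstates a real brain, since real neurons are far from uniformly 50-micron spheres.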
The polynomial function utilizes a combination of operations on variables and constants with non-negative whole number exponents. Of these, the third degree polynomial will probably prove most useful. The logarithm function generally may be stated in any base. The natural base of e is used in many growth functions while this unit will utilize base 10. The conventional function notation is f(x) = ln(x) for natural base and f(x) = Log(x) for base 10. This function, in base 10, will be utilized in measuring sound levels in decibels. Finally, periodic functions will be demonstrated. These are the trigonometric functions sine, cosine, and tangent. For modeling purposes the sine function will mostly be utilized in the form f(x) = a*sin(b*x + c)+d , where a is amplitude, b is the frequency, which determines period, c determines phase shift, and d determines vertical shift. This function will be used to determine the frequency of sound and the activation of the auditory system. Using concrete and interesting data the intent is to have students better understand rate of change, how best to model and represent that change by choosing appropriate mathematical models and graphs, and apply skills involving rate, ratio, and proportion. Students will also be introduced to music theory. Linear functions The typical nerve cell consists of a body and two kinds of branches. Dendrites receive input from other nerve cells and are usually short. Axons by contrast are often very long and may conduct impulses very rapidly. The axon terminals are positioned near an adjacent nerve or a muscle. Nerve impulses pass from the axons of one nerve to the next nerve or muscle. Surrounding the axons is an insulating coat of material called myelin. A nerve may be a single axon or bundle of axons. Nerve impulses may move electrically in both directions, but from skin or to muscle it is normally in one direction. 
(Stewart) In order to determine conduction velocity stimulating electrodes are placed beneath the skin close to a bundle of nerves. Small recording electrodes are placed on the skin over the muscles being tested. Compound action potential is the sum of the firing of many axons within the nerve. For this unit idealized data will be used to study the reaction times of various animals, as well as humans. Reaction time is nerve conduction from the foot to the brain and back to the foot. The idealized universal conduction velocity will be 100 meters per second. The students will be given data such as the distance from the extremities to the cortex. For example, from the giraffe's toes to his brain is approximately 5 meters. From a mouse's toe to his brain is approximately 0.1 meters. More detailed procedures and sample data are provided in the lesson plan that follows this section. Students will make tables and from the tables make graphs. Students will then write an equation from the data. In addition to reviewing basic algebra concepts, students will also be reviewing the use of decimals. This section will also reinforce use of the metric system. Examples of linear equations and their graphs are presented in Figure 1. Figure 1. source: Wikimedia Power function The power function, in function notation, takes the form f(x) = x^a, where a is a constant real number. This function will be used to show the relationship between total number of neurons and brain diameter. The diameter of a neuron ranges in size from 4 microns to 100 microns. (http://faculty.washington.edu/chudler/facts.html#neuron) Using the equation for the volume of a sphere, V = (4/3)*Π*r^3, and an average neuron size of 50 microns (1 micron = one millionth of a meter), students will calculate brain size based upon 100,000 neurons up to 100,000,000,000 neurons. Graphs of power functions are presented in Figure 2. Figure 2.
source: Wikimedia Logarithm function The logarithmic function, in base 10, f(x) = Log(x), will be utilized in measuring sound levels in decibels. We humans are equipped with sensitive ears capable of detecting sound waves of quite low intensity. The difference in magnitude between what we call the auditory threshold, the softest noise audible to a normal ear, and the loudest audible noise is huge. For example, the magnitude of difference between the auditory threshold and the threshold of pain is some 10,000,000,000,000 (i.e., ten to the thirteenth power). Logarithms, like scientific notation, are useful when trying to represent or communicate such large magnitudes. Since the decibel (dB) scale is one of proportion between the baseline, auditory threshold, and an observed or recorded phenomenon, the actual unit of measure is not important as the ratio will remain the same. The ratio can be expressed in the following equation: dB = 10*log[10](P[1] / P[0]), where P[0] is the baseline, or auditory threshold, and P[1] is the observed or recorded sound level. While the magnitude of difference between, say, a whisper and normal conversation may appear to be a magnitude of two or three, it is actually much larger. An engaging exercise for students might be having them guess the magnitude of difference of some common sounds and then compare with the actual dB level. Below are OSHA permissible noise levels. For example, a normal conversation or TV would be 60 dB, a jack hammer about 100 dB, a major highway also about 100 dB, and a jet taking off 150 dB. Another application of logarithms is calculating gestation periods. For example, if we know that a cell will divide approximately each week creating two cells, then we can calculate how long it will take for an embryo to fully develop. As an example we can use the development of the human brain. The newborn has about one hundred billion neurons. The formula to represent this growth pattern is N = 2^x.
If N equals one hundred billion we can then solve, using the logarithm base 2, as follows: x = log[2] 100,000,000,000. Using a calculator we find that x is approximately 36.54 weeks. Other examples may also be used. The graph of log[10](x) is shown in figure 3. Figure 3. source: Wikimedia Sine function Harmony is a function of the relationship between the pitches of different tones either in sequence or sounded at the same time. Within these tonal contexts the pitches set up expectations as to what will come next. The skillful composer arranges these progressions to either meet or violate the listener's expectations for artistic purposes. (Levitin, 2006, p.17). Jonah Lehrer (2007) suggests that Stravinsky understood, ahead of the science confirming it, that these expectations that we have are learned. If the listener's expectations are not met the piece may be considered disagreeable or discordant. In The Rite of Spring Stravinsky violates many of the then current expectations and the work was initially considered discordant. Eventually the piece became accepted and well liked. Daniel Levitin explains in his PBS show, The Music Instinct, that every object has the capacity to vibrate and make sound. Music is organized sound. When we hear music we are literally being touched and moved. The sound waves hit our eardrums and move the fluid in the cochlea against the hair cells, which are laid out from low to high frequency. This is transmitted through the brain stem to the auditory cortex, which is laid out in pitch order. It was previously thought that there was a music center in the brain. It is now believed that there are many parts of the brain involved, much like a neural orchestra. Different parts of the brain are affected by the different musical elements of pitch, timbre, tempo, harmony, and melody. One apparent universal element of music worldwide is the octave. There also appears to be a certain predilection for consonance as opposed to dissonance.
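The two logarithm calculations above, the decibel formula and the weekly cell-doubling estimate, can be checked with a few lines of Python (a sketch; the 10^13 pain-threshold ratio and the hundred-billion-neuron figure come from the text):

```python
import math

# dB = 10 * log10(P1 / P0): sound level relative to the auditory threshold.
def decibels(power_ratio):
    return 10 * math.log10(power_ratio)

print(decibels(10**13))  # threshold of pain, 130 dB

# Weekly doubling from a single cell: solve 2^x = 100 billion for x.
weeks = math.log2(100_000_000_000)
print(round(weeks, 2))   # 36.54
```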
The elements of lullabies seem to be universal in the type of consonance that they have. Dissonance and consonance may be graphed using sine waves. The sine wave related to a musical pitch has the following form, where A is the amplitude of the sound (or the volume, measured in decibels) and B is the frequency of the note (measured in Hz): f(x) = A sin(Bx). See figure 4, where the frequency B determines T, the period of the function. Period is calculated as (2Π / frequency). Students will be directed to graph a variety of chord patterns and determine, by listening and looking at their graphs, which combinations are dissonant or consonant. Figure 4. source: Wikimedia Within the key of C the only recognized chords are built off of the notes of the respective major scale. This causes some chords to be major and some minor, because of the unequal spacing of tones in the scale. To build the standard three-note chord we start with any of the tones of the C major scale, skip one, then use the next one, then skip one again and use the next one after that. The first chord of C major is C-E-G. Because the first interval formed, between C-E, is a major third, we call this a major chord, C major. The next chord built in similar fashion is D-F-A. Because the interval between D and F is a minor third, this is a minor chord, D minor (Levitin, 2006). If we graph the sine waves of the three notes, C-E-G, we will notice that all three intersect at a specific point. The intersection point for C will be exactly 2 periods. The intersection point for E will be exactly 2.5 periods. The intersection point for G will be exactly 3 periods. This will hold true for all major chords. Major chords and minor chords have a very different sound. Even though most non-musicians could not name a chord upon hearing it, or label it as major or minor, if they hear a major chord and a minor chord played one right after the other they would be able to distinguish the two.
"And their brains can certainly tell the difference - a number of studies have shown that nonmusicians produce different physiological responses to major versus minor chords, and major versus minor keys" (Levitin, 2006, p. 267). The sine waves for randomly selected notes will also intersect randomly. It is interesting that only when sounds fit a certain pattern do we find them pleasing, or consonant. Another, although more complex, application of sine waves is in modeling brain activity through brain waves. One way to record brain activity is with the electroencephalogram (EEG) along with measures of eye movement and skeletal muscle movement. In figure 5 the EEG is highlighted in the dark box. The heavy dark line highlights rapid eye movement (REM) sleep. There are generally two stages to sleep, REM and non-REM. Most memorable dreaming occurs during REM sleep. The EEG is characterized by rapid, low voltage. This is represented in the graph of the EEG as having a high frequency, or short period, and low amplitude. During this stage of sleep mammals lose muscle tone and are in a near state of paralysis, likely thought to prevent self-injury while sleeping. The deepest sleep is characterized by a graph with a relatively low frequency and resulting longer period. Classroom Activities Lesson 1 For this lesson idealized data will be used to study the reaction times of various animals, as well as humans. Reaction time is nerve conduction from the foot to the brain and back to the foot. The idealized universal conduction velocity will be 100 meters per second. The students will be given data such as the distance from the extremities to the cortex. For example, from the giraffe's toes to his brain is approximately 5 meters. From a mouse's toe to his brain is approximately 0.1 meters. Begin the lesson with the question of which will have the quickest reaction time - a giraffe or a mouse? A human or a blue whale?
Then have students complete the table, remembering that the impulse must travel from the foot or tail to the cortex and back. Once the table is complete have students write an equation. Finally, have students graph their results. Lesson 2 This lesson will use bags of marbles to represent neurons in the brain. Remind students of the formula for the volume of a sphere: V = (4/3)*Π*r^3. Have about six different size bags - clear plastic bags work well - in order to show the differing amounts of marbles each bag will hold. Demonstrate the capacity of various bags. Alternatively you may have several students demonstrate this. Using an average size of 50 micron diameter for each neuron, have students calculate brain volume and required diameter in the following table: Lesson 3 For this activity I use an electronic keyboard and TI-83 or TI-84 graphing calculator. This especially motivates those students with a musical background. Many students have never realized the deep connection between music and mathematics. First, in order to determine the frequencies of each note in the tempered scale, we start with middle C, which has a frequency of 261.63. Each note up is calculated by the formula f[n] = f[0] * (2^(1/12))^n, which will produce the following table of frequencies: Have a student, preferably one with a musical background, play several different chords, both consonant and dissonant. Also, have the student play the C major chord. Using 2 as the amplitude, have students insert the frequencies for C, E, and G into the y= editor, one frequency for each function. Set the window on the calculator to y min -2.5, y max 2.5, x min 0, x max Π/265. Make sure students have set the calculators in radian mode. Have students observe where the three graphs intersect. For C major, the graphs will intersect where C is 2 periods, E is 2.5 periods and G is 3 periods. Have students compare other chords. Lehrer, J., (2007). Proust was a Neuroscientist. New York: Houghton Mifflin. Levitin, D. J., (2006).
This is Your Brain on Music: The Science of a Human Obsession. New York: Dutton. Levitin, D.J., (2008). The World in Six Songs: How the Musical Brain Created Human Nature. New York: Dutton. Larson, R., et al. (2004). Algebra 2. Evanston: McDougall Littell. Sacks, O. (1985). The Man who Mistook His Wife for a Hat. New York: Summit Books. Senk, S., et al. (1998). UCSMP Functions, Statistics and Trigonometry. Glenview: Addison Wesley. Tufte, E. R., (1983). The Visual Display of Quantitative Information. Cheshire: Graphics Press.
Probabilities independent of ZFC?

MathOverflow is a question and answer site for professional mathematicians.

Hi guys, is it possible to change the probability of an event via forcing? More precisely, is there an innocent looking question on the probability of "something" whose answer is independent of ZFC?

All the best, Sebastian

set-theory forcing pr.probability measure-theory

> is it possible to change the probability of an event via forcing? More precisely, is there an innocent looking question on the probability of "something" whose answer is independent of ZFC?

There are several issues. On the one hand, any set can be made countable by forcing, and this process will certainly affect the measure of the set, if it did not have measure zero in the ground model. But in the context of the Lebesgue measure on the reals, say, it is natural to consider not the set itself, but the Borel description of the set, interpreted first in the ground model and then reinterpreted in the forcing extension. (For example, the "unit interval" of $V$ is not necessarily the same as the unit interval of a forcing extension $V[G]$, but we have a Borel code that correctly picks out the unit interval when interpreted in any model of ZFC.) In this case, one gets a positive solution for preservation of measure. The reason is that the assertion that the measure of the set with Borel code $b$ is $x$ has complexity at most $\Sigma^1_2(b,x)$ and hence is absolute to all forcing extensions by the Shoenfield absoluteness theorem. In this sense, the measure of a measurable set cannot be affected by forcing. Meanwhile, the use of other non-absolute descriptions can lead again to a negative answer, where the measure can be affected by forcing.
For example, consider the set $X$ of all binary sequences $x$ whose sequence of digits is realized somewhere in the GCH pattern of cardinals, in the sense that there is an ordinal $\beta$ such that $x(n)=1$ iff $2^{\aleph_{\beta+n}}=\aleph_{\beta+n+1}$. If the Generalized Continuum Hypothesis holds, then $X$ has measure zero, since only one pattern is realized. But one can force the GCH pattern to realize all patterns, and so there are forcing extensions in which $X$ has full measure. Here is another comparatively concrete example. Consider the set of reals that are constructible, in the sense of Gödel's constructible universe. This set has complexity $\Sigma^1_2$ in the descriptive set-theoretic hierarchy, which is just a step up from Borel. The set has full measure in the constructible universe, of course, but it is easily made to have measure zero in a forcing extension. Thus, the probability that a randomly chosen real number is constructible has an answer that is independent of ZFC, because in some models of set theory this probability is 1 and in others it is 0.
Differentiate the following

February 2nd 2009, 03:26 PM #1

I haven't done maths in a while, and when I got back into it I'd totally forgotten how to do these, so if anyone could help me out it would be great. Differentiate the following and simplify where appropriate:

y = x^5/(3x+4)
y = ln(x)/x
r(t) = ln(2t square root (t + 1))

February 2nd 2009, 03:39 PM #2

For the first one you will need the quotient rule: $\left(\frac{f}{g}\right)' = \frac{f'g - fg'}{g^2}$. In this case $f(x)=x^5$ and $g(x)=3x+4$. Then we have: $y' = \frac{5x^4(3x+4) - 3x^5}{(3x+4)^2}$. The rest is just simplifying. The second one is similar to the first one. You can do this problem using the quotient rule. (Note also: $\frac{d}{dx}\ln x=\frac{1}{x}$.) The third one is a combination of the chain rule and the product rule, or by changing it a little bit you can avoid the chain rule. (I assume that you know these two rules?)
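Following the reply's hints, the remaining two derivatives work out as below (a worked sketch added for completeness, not part of the original thread):

```latex
% Second: y = \frac{\ln x}{x}, by the quotient rule with f = \ln x and g = x:
y' = \frac{\tfrac{1}{x}\cdot x - \ln x \cdot 1}{x^2} = \frac{1 - \ln x}{x^2}

% Third: r(t) = \ln\!\left(2t\sqrt{t+1}\right); expanding with log rules
% avoids the chain rule entirely:
r(t) = \ln 2 + \ln t + \tfrac{1}{2}\ln(t+1)
\quad\Longrightarrow\quad
r'(t) = \frac{1}{t} + \frac{1}{2(t+1)}
```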
Cuboid Perimeters To Volume For any given cuboid it is possible to measure up to three different perimeters. For example, one perimeter could be measured this way. Given that cuboid A has perimeters 12, 16, and 20, and cuboid B has perimeters 12, 16, and 24, which cuboid has the greatest volume? Problem ID: 357 (26 Aug 2009) Difficulty: 3 Star
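One way to organize the computation is the sketch below (Python; it assumes the three perimeters are the perimeters of the three distinct faces of a cuboid with edge lengths a, b, c, i.e. 2(a+b), 2(b+c) and 2(a+c)):

```python
def cuboid_volume(perimeters):
    # Halve each face perimeter to get the three pairwise edge sums.
    s1, s2, s3 = (p / 2 for p in perimeters)   # a+b, b+c, a+c
    total = (s1 + s2 + s3) / 2                 # a+b+c
    a, b, c = total - s2, total - s3, total - s1
    return a * b * c

print(cuboid_volume([12, 16, 20]))  # cuboid A -> 48.0
print(cuboid_volume([12, 16, 24]))  # cuboid B -> 35.0
```

Perhaps surprisingly, the cuboid with the smaller perimeters has the greater volume.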
Glen Echo SAT Math Tutor Find a Glen Echo SAT Math Tutor ...Trigonometry is all about triangles, that's why I love it! The difficulty lies in the fact that there is so much rote memorization early on with the different trig functions, their graphs, working with degrees or radians as well as all the different angle values. It is formula heavy and requires more memorization than the previous year of Algebra 2. 24 Subjects: including SAT math, reading, geometry, algebra 1 ...I worked in my university's writing center for two years, assisting undergraduate students with their writing assignments and teaching outlining, writing, and editing techniques. Many of the students who came to the writing center struggled with confidence as well as writing proficiency. Althou... 22 Subjects: including SAT math, English, reading, writing ...As a teacher, I use many hands-on manipulatives to help build a conceptual understanding of the concepts my students are learning. I also enjoy incorporating technology and math games into my lessons. It is very important as a teacher to make the connection between the math and real world applications. 4 Subjects: including SAT math, algebra 1, elementary math, prealgebra ...I realize that every student learns differently, so I am skilled in explaining new concepts at his or her own pace and in different ways. Through patience, enthusiasm, and excellence, my goal as an educator is not only to help raise my students' grades but also to instill in them an equaled pass... 17 Subjects: including SAT math, reading, algebra 1, geometry ...I have 5 years of MATLAB experience. I often used it during college and graduate school. I have experience using it for simpler math problems, as well as using it to run more complicated 27 Subjects: including SAT math, calculus, physics, geometry
Rest Mass Versus Relativistic Mass I can't think of any other concept in physics in which (i) it is as 'well-known' to the general public as it is to physicists and (ii) it causes as much confusion to the general public as it does to physicists. I'm talking about the concept of "mass" and Einstein's infamous equation "E = mc^2". One of the issues surrounding the concept of mass and the Einstein equation is the question of what "mass" really means in relativity, and the validity of the concept of "relativistic mass". There have been many articles written to address this issue, but it is obvious that, even today, many media, including textbooks and popular writings, continue to use the term "relativistic mass" to mean an increase in the measured mass when an entity is moving at relativistic speeds. Whether the faulty understanding of such a concept can create a stumbling block in understanding relativity or not is an entirely different issue. But can there be a simpler approach to such a concept without invoking the name "relativistic mass"? Lev Okun seems to think so. In a highly compact 2-page paper in the Am. Journal of Physics[1], he wrote a very concise explanation of what "mass" is, and why there is really only ONE concept of mass, as defined in terms of momentum and energy by what he called the most fundamental equation of relativity theory: m^2 = (E/c^2)^2 - (p/c)^2, where E is the energy, p is the momentum. There is nothing new here that someone who has gone through an intro class in relativity/Modern Physics would not have seen. But it is put in such a compact and clear form that it summarizes Special Relativity in almost 1 1/2 pages! What is as interesting is his commentary on how this issue has been treated in the media and in textbooks. Unfortunately, sometimes and especially in his popular writings Einstein was careless about the subscript 0 and spoke about the equivalence of mass and energy and omitted the attribute "rest" for the energy.
As a result Einstein's equation E0=mc^2 became known in its famous but misleading form E=mc^2. One of the most unfortunate consequences is the concept that the mass of a relativistic body increases with its velocity. This velocity dependent mass is known as "relativistic mass." Another consequence is the term "rest mass" and the corresponding symbol m0. These confusing concepts and notations prevail in such classic texts as the ones by Born and Feynman. Moreover, in these texts the dependence of mass on velocity is presented as an experimental fact predicted by relativity theory and proving its correctness. To substantiate the formula m=E/c^2 some authors use the connection between momentum and velocity in Newtonian mechanics, p=mv, forgetting that this relation is valid only when v ≪ c and that it contradicts the basic equation m^2=(E/c^2)^2−(p/c)^2. Einstein's tolerance of E=mc^2 is related to the fact that he never used in his writings the basic equation of relativity theory. However, in 1948 he forcefully warned against the concept of mass increasing with velocity. Unfortunately this warning was ignored. The formula E=mc^2, the concept relativistic mass, and the term rest mass are widely used even in the recent popular science literature, and thus create serious stumbling blocks for beginners in relativity. [1] L.B. Okun Am. J. Phys. v.77, p.430 (2009). 6 comments: L.B. Okun: asking your e-mail address for private correspondence. According to Einstein's Theory of Relativity, E=mc^2. According to this relationship of Energy and Mass, 1 kg mass of any matter is equivalent to 9 x 10^16 J of energy. Does it mean that Mass of any matter is a Condensed Form of Energy and Energy is a Diffused Form of Mass of any matter? A question may also arise: what existed before the creation of the Universe, Energy or Mass or both?
Based on E=mc^2, can it be said that mass is the 'potential state' of matter and energy is the 'kinetic state' of matter, and that you just multiply mass by c^2 to get a huge amount of energy and divide energy by c^2 to get a very small amount of mass, OR are some other factors/mechanisms essential for these conversions? E=mc^2 is called 'Einstein's energy-mass relation'. According to this relation, 1 kg mass of any matter is equivalent to 9x10^16 J of energy. This is a huge amount of energy, equal to 2.5x10^10 kWh. It is evident that the amount of energy is the same irrespective of the matter taken, whether it is carbon, iron, copper or any other, including radioactive elements. The amount of energy thus released does not depend on the atomic number, atomic weight, electronic configuration etc. It is the mass of the matter only based on which the amount of energy is calculated. It means that 'mass' is the connecting link between energy and matter. It is written in the Text-Books of Physics that if we give ∆E energy to some matter, then according to E=mc^2, its mass will increase by ∆m, where ∆m = ∆E/c^2. Since the value of c is very high, the increase in mass ∆m is very small. For example, if we heat a substance, then the heat-energy given to this substance will increase its mass. But this increase in mass is so small that we cannot measure it even by the most sensitive balance. Similarly, if we compress a spring, its mass will increase, but we cannot confirm this mass-increase by any experiment. Now the question is whether the change in mass as quoted in these two examples is reversible, i.e. when the same substance of example one is cooled down, energy is produced equal to ∆m x c^2 (∆E = ∆m x c^2), and in the second example when we release the spring, energy is produced equal to ∆m x c^2 and the initial mass is retained in both cases? Or are the above changes irreversible?
Of course this equation E = mc^2 assumes you can get each and EVERY atom in said mass to release its energy, which requires perfect efficiency. That never happens in the real world. In a fission reaction you'd probably be lucky to get 10% of the atoms to release their energy (that's just a guess; maybe a physicist could answer that more accurately). But if you COULD release the energy of matter perfectly, then 1 lb of anything equates to roughly 10 million tons of TNT.
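A quick back-of-the-envelope check of the figures quoted in these comments. The constants are standard; the variable names are just for illustration, and the "perfect conversion" assumption is the same idealization the commenter makes:

```python
# Sanity-check the E = m c^2 figures quoted above.
C = 2.998e8              # speed of light, m/s
KG_PER_LB = 0.4536       # kilograms per pound
J_PER_KWH = 3.6e6        # joules per kilowatt-hour
J_PER_TON_TNT = 4.184e9  # joules per ton of TNT equivalent

def rest_energy(mass_kg):
    """Rest energy released if the mass were fully converted (perfect efficiency)."""
    return mass_kg * C ** 2

e_1kg = rest_energy(1.0)                          # ~9.0e16 J, as stated above
kwh_1kg = e_1kg / J_PER_KWH                       # ~2.5e10 kWh, as stated above
tnt_1lb = rest_energy(KG_PER_LB) / J_PER_TON_TNT  # ~1e7 tons of TNT, as stated above
```

Running the numbers confirms both comments: 1 kg is about 9 x 10^16 J (2.5 x 10^10 kWh), and 1 lb at perfect conversion is on the order of 10 million tons of TNT.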
Re: check (constraint) on point data type?

On Jul 24, 2007, at 14:59, Jill wrote:

> The field is of type 'point', and I'd like it to reject any values
> less than 0 or bigger than 1 (i.e., accept only points with values
> like (0.4, 0.26)).
> Let's say I try to define the upper boundary by doing:
> ALTER TABLE "public"."locations" ADD CONSTRAINT "up_boundary_chk"
> CHECK (location < (1,1));

One issue is that point literals are quoted: '(1,1)', not (1,1). However, I don't think your constraint would do quite what you think it would. Here's what I would do:

-- Define a helper function to determine if a float is within a
-- particular open interval:
CREATE FUNCTION strict_during(double precision, double precision, double precision)
RETURNS boolean
LANGUAGE SQL AS $_$
SELECT $1 > $2 AND $1 < $3
$_$;

-- Note that the check constraint tests both the x and y values of the
-- point using the strict_during helper:
CREATE TABLE points (
    a_point point NOT NULL
        CHECK (strict_during(a_point[0], 0, 1)
           AND strict_during(a_point[1], 0, 1))
);

test=# INSERT INTO points (a_point) VALUES ('(-1,-1)'); -- should fail
ERROR: new row for relation "points" violates check constraint
test=# INSERT INTO points (a_point) VALUES ('(-1,0.5)'); -- should fail
ERROR: new row for relation "points" violates check constraint
test=# INSERT INTO points (a_point) VALUES ('(0.5,-1)'); -- should fail
ERROR: new row for relation "points" violates check constraint
test=# INSERT INTO points (a_point) VALUES ('(0,0)'); -- should fail
ERROR: new row for relation "points" violates check constraint
test=# INSERT INTO points (a_point) VALUES ('(0.5, 0.5)'); -- should be ok
INSERT 0 1
test=# INSERT INTO points (a_point) VALUES ('(1,0.5)'); -- should fail
ERROR: new row for relation "points" violates check constraint
test=# INSERT INTO points (a_point) VALUES ('(0.5, 1)'); -- should fail
ERROR: new row for relation "points" violates check constraint
test=# INSERT INTO points (a_point) VALUES ('(10,0.5)'); -- should fail
ERROR: new row for relation "points" violates check constraint
test=# INSERT INTO points (a_point) VALUES ('(0.5, 10)'); -- should fail
ERROR: new row for relation "points" violates check constraint
test=# select * from points;
(1 row)

I haven't looked at the geometric functions closely enough to see if you could use some of those rather than defining your own helper, but this should work.

Hope that helps.

Michael Glaesemann
grzm seespotcode net
Math Forum: Teacher2Teacher - Q&A #3025

From: Jacob Keilman <jacob_keilman@hotmail.com>
To: Teacher2Teacher Public Discussion
Date: 2004-02-10 22:14:39
Subject: Re: Core Plus Math

This is an essay I sent to my local school board regarding Core Plus. I hope that no other district makes the mistake that mine did, and that no students have to go through what I have to in order to get a proper math education.

Many people are asking the question "If it's not broken, why fix it?" about the Northshore School District's new math program. Recently, the Northshore School District adopted "Core Plus, Contemporary Mathematics in Context". This switch has upset a large number of the higher-level, college-bound junior high students who were expecting to be taking algebra or geometry this year. Before the switch to Core Plus, Northshore had a really great math program, one of the very best in Washington.

Other districts across the country have tried Core Plus, and students who then went on to college had major problems understanding basic math concepts that were supposed to be taught in algebra. An example of the problems with Core Plus is Michigan's Bloomfield Hills School District, where Core Plus was used. Students who did very well in junior high and high school in that district failed college placement tests and struggled in college-level remedial math courses. More recently, after seeing their students fail, Bloomfield Hills' school board has voted to give their students a choice between Core Plus and the traditional math curriculum. One reason the students in Core Plus did not do well later in their academic careers was that, although Core Plus introduced many key concepts and ideas, hardly any practice was given, so the concepts did not sink in.
A chairman of a department of mathematics, while discussing Core Plus, was quoted as saying, "It may be fashionable to teach from such a book, but it is not effective." As well as having shown disappointing results in other districts, Core Plus is also frustrating students right here in our own district. In class, students are bored of writing essay answers for each math problem about how they devised an answer, or why their solution is the correct one. This method covers less material in more time, so not only will students not remember the concepts very well, but they also will not have been taught some necessary ideas.

Opponents of the program are not suggesting that Core Plus be removed completely, but they do think that something different needs to be offered for those students who want a more traditional method. This is not as strange as one might think, for some districts, such as Bloomfield Hills, have already done this very thing. Core Plus might help those students who do not normally excel at math, but for the higher-level students, it does not do an acceptable job of teaching important algebraic concepts. When algebra is finally taught in the third book, it is too little, too late. By this time, most students already need the algebra in chemistry or physics, and, due to the lack of practice, the concepts don't sink in. Many people argue that Core Plus prepares students for the WASL, but again, the higher-level students already do well enough on the WASL because they understand the concepts and know the material.

Another significant problem with the Core Plus series is the numerous errors in the "real world" problems. There is one graph in Core Plus 1A, on page 104, that shows the height of a Ferris wheel versus time into the ride. The graph depicts a Ferris wheel that makes nine five-second stops.
However, just before and after each of the stops, there are near-vertical or vertical lines showing that the Ferris wheel crossed several meters of space in no time, suggesting infinite velocity. One of the main purposes of this book is to show students how they can use math in real-world applications, but the book is filled with so many errors that students become even more confused, in math as well as in acquiring false notions about various scientific principles.

Lastly, in light of the current budget crisis, if the school district needs to purchase any more materials for this new math program, it might be wise to reconsider that decision. Any purchase of more materials will inevitably require a severe funding cutback somewhere else, and other valuable programs will be lost. The district's budget is already quite lean, so why make things worse?

In conclusion, Core Plus might be an acceptable solution for students who struggle with math, but a different method needs to be offered for the students who already excel at it. Even though it might help some students with the WASL, an alternative needs to be offered for students who already succeed on the WASL. Many opponents of the program pose the question, "Should we be using a math book that has produced disappointing results in other districts, or rather should we be learning from their mistakes?"
Constructive Proof Homework

March 1st 2009, 08:00 PM  #1

I need help proving the following: If p and q are rational numbers with p < q, then there exists a rational number x such that p < x < q.

Thanks in advance. It is a constructive proof. I love you.
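The replies to this thread did not survive, so here is one standard constructive answer (my addition, not taken from the original replies): exhibit the midpoint as an explicit witness.

```latex
% Constructive proof sketch: the midpoint witnesses the claim.
Let $x = \frac{p+q}{2}$. Writing $p = a/b$ and $q = c/d$ with integers
$a, c$ and nonzero integers $b, d$, we get
$x = \frac{ad + bc}{2bd}$, a ratio of integers with nonzero denominator,
so $x \in \mathbb{Q}$. Since $p < q$,
$$
p = \frac{p+p}{2} < \frac{p+q}{2} = x
\quad\text{and}\quad
x = \frac{p+q}{2} < \frac{q+q}{2} = q,
$$
hence $p < x < q$. \qed
```

The proof is constructive because it names the number x explicitly rather than arguing by contradiction.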
The official "rate my gear" thread

thought i'd throw my setup in here. hume male, only 1 polearm merit so far, i just dinged 75 a couple weeks ago. i'm usually /sam in parties.

weap - thalassocrat
sub - pole strap
ammo - tiphia
head - wyvern helm
neck - chiv chain
ears - fowling/assault
body - AJ
hands - tarasque +1/pallas
rings - 2x woodsman/2x ruby
back - amemet +1
waist - life/potent/warwolf/swordbelt +1
legs - barone
feet - amir/marine

i dont have much of a ws set yet. ive basically only been swapping in warwolf, pallas and now marine (since i got my amir) for wheeling thrust. for penta i have been wearing this setup. ive mostly been wearing dual ruby rings or 1 ruby, 1 woodsman depending on if i need more acc. the potent belt is on almost full time, but i have those other belts as options. i havent had much experience with merit parties yet, so i'll see how i do there pretty soon. i've been thinking of saving for a brutal earring from buying anc. beastcoins, going for barbarossa's pants, more assault gear (pahl. head, body, etc)...

Very solid setup for a fresh 75. Good to see you keep on some ACC for penta since you only have 1 polearm merit. Once the merits get higher you can drop more acc for atk/str. Only upgrades I see are getting a haste setup (at least 10% from gear) going once merits are done and the typical Rare/Ex stuff (p.gear, homam, ares's, and heca). If you have WoTG then do the quest for the Smart Grenade. Hidden effect is 4atk. Be careful though, it can be thrown and you can't do the quest again from what I know. Threw mine the same day I got it.
Edited, May 13th 2008 3:57pm by Maxom

its in my sig...
Edited, May 13th 2008 5:30pm by itege

itege, nice pick on the lance. I would change the grip to a pole strap though. For rings get some pure ACC ones and keep them on at all times, even for penta, until you get into merits. Woodsman's tend to be cheaper than Sniper's so you might want to go that route.
Minus getting a +1 earring and Spiked Finger Gauntlets, everything else looks good for your level.

I know it still needs a little work, but I think I have a good start:

Main: Thalassocrat
Sub: Pole Strap
Range: Tiphia Sting
Head: Walmart/Homam/Pahluwan/Wyvern Helm (askar soon, floor 100 farming this week yay!!)
Neck: Chivalrous Chain
Ear: Brutal/Fowling/Hollow (if I need ACC lol.. Assault I could save for but..)
Body: Askar/Heca/Pahluwan
Hands: Homam/Heca (also have Wyrmal and will have Askar when they drop)
Ring: Raja/Ulthalam's/Ruby/Sniper (saving for Flame... slowly)
Back: Amemet +1
Waist: Potent/Warwolf (swift eventually)
Legs: Askar/Pahluwan/Barone (have items for Drachen +1 for solo eventually)
Feet: Homam/Heca/Askar/Drachen

As you can see, I am very much into the ra/ex gear lol. This is mainly because I don't have a ton of time to play anymore. I log in to do sky/limbus/nyzul and that is about it now. So far I have been lucky enough to get items from Nyzul that help fund consumables and pay for Limbus (Thank you serket ring), but I only farm on rare occasions now if I have a little free time to get in game. Thoughts, tips, etc?

Legs are my real issue on DRG, I like the Pahluwan and actually sold Barone for a while as I was only using them for my jump macro anyway... I will have Homam soon-ish there as well, so that is TP.. more looking for a good WS piece that doesn't involve camping kings :(

the only reason i dont have sniper's/woodsman's is because i was told dex stacks well. i macro in str rings for WSes and still hit 4/5-5/5 most of the time
I guess rather than starting a thread, this is the place for it. Not so much "rate my gear, here it is" as rate my numbers. At 75 I have acc+50 and atk+56 for my TP gear, not including food -- using Bison Steak (merit) or Couerl Sub (Dynamis and Assault), and not including STR and DEX from gear (not a significant contribution to acc). Those activities are what I mostly am doing these days. Is this too much accuracy? I'm fairly sure it's not too little. I'm a mithra with currently 5 polearm merits. When I cap out polearm merits for another 5.4 +acc, that should perhaps allow me to drop some acc out of gear for more atk during TP? I know the answer is to parse it, but I don't have one running yet(*). I've parsed very well in overall damage in a Dynamis parse, but accuracy wasn't included in the numbers. I do want to keep tuning my gear as merits increase, but I wondered if there's a good rule of thumb. Edit: This is not a haste build. I may yet put together a haste build, but I don't want to go into it half-assed before it's fully ready. (*)related question: does DirectParse run without either windower? Edited, May 16th 2008 1:24pm by Laverda Grandlethal, if you got the money for it get Dusk Pants. They are the best WS pants we can get for penta that you can buy, no camping or farming for drops needed. Barone is fine too, nothing wrong with it in a WS build. itege, DEX does give us ACC but those rings don't give as much as actual ACC rings do. If you're going for a crit build then those rings are fine but most people don't. Woods/Snipers(whatever is cheaper) is the way to go. Laverda actually, you are right at a perfect amount of +acc. Mithra drg, with 8/8 polearm merits needs +49 acc to hit 95% acc. you are at +50, so you are just barely at the cap. Keeping it there would be very good for your build. Edit: when you get those last 3 merits. Edited, May 16th 2008 2:21pm by Meldi Thanks, Meldi! You made me smile on a dull Friday. 
:D Meldi wrote: Laverda actually, you are right at a perfect amount of +acc. Mithra drg, with 8/8 polearm merits needs +49 acc to hit 95% acc. you are at +50, so you are just barely at the cap. Keeping it there would be very good for your build. Edit: when you get those last 3 merits. Edited, May 16th 2008 2:21pm by Meldi Keep in mind this only applies to Greater Colibri. It will vary from camp to camp and from event to event. Laverda wrote: I know the answer is to parse it, but I don't have one running yet(*)... (*)related question: does DirectParse run without either windower? Kparser written by Kinematics is a very good parser I use, and it can do parses via memory, or via reading the logs. This means you can have it parse from a completely different computer if you map the log files to a shared drive, which is how I do it. My main computer slows down too much with the memory parse, every 6 seconds or so, the game hangs for about .5 seconds. This is bad for my NIN, as I miss shadows going down, utsu timing, vokes, etc. Edited, May 16th 2008 4:28pm by Souji Rate my gear please, I'm at 61, items I swap for WS are in brackets. ^^ Weapon: Dark Mezraq Grip: Pole Strap Range: Tiphia Sting Head: Walkure Mask Neck: PCC Ear1: Assault Earring Ear2: Spike Earring Body: Scorpion Harness (Akinji Peti) Hands: Spiked Finger Gauntlets Finger1: Woodsman Ring (Puissance Ring) Finger2: Woodsman Ring (Puissance Ring) Back: Amemet +1 Belt: Potent Belt Leg: Republic Subligar Feet: Tabin Boots +1 (Leaping Boots for Penta Modifier) Any recommendations? I'm thinking of buying Victory Rings... also... Spiked Finger Gauntlets or Pallas's Braclets for WS? I'm planning to get the Hume Male RSE Boots at 63 as well. All help is much appreciated. Thank You archzai wrote: Rate my gear please, I'm at 61, items I swap for WS are in brackets. 
^^ Weapon: Dark Mezraq Grip: Pole Strap Range: Tiphia Sting Head: Walkure Mask Neck: PCC Ear1: Assault Earring Ear2: Spike Earring Body: Scorpion Harness (Akinji Peti) Hands: Spiked Finger Gauntlets Finger1: Woodsman Ring (Puissance Ring) Finger2: Woodsman Ring (Puissance Ring) Back: Amemet +1 Belt: Potent Belt Leg: Republic Subligar Feet: Tabin Boots +1 (Leaping Boots for Penta Modifier) Any recommendations? I'm thinking of buying Victory Rings... also... Spiked Finger Gauntlets or Pallas's Braclets for WS? I'm planning to get the Hume Male RSE Boots at 63 as well. All help is much appreciated. Thank You If using Penta, you should probably not do leaping, they're pretty close, but the acc will help more. At a mod of 20%, it takes 7(at 75, probably 6 at 61) DEX to add 1 base damage, so chances are high the DEX is only contributing ACC, and Tabin has more. For similar reasons, use Spike Finger Gauntlets for Penta, since Pally's will probably do less damage AND have less acc. Souji wrote: Meldi wrote: Laverda actually, you are right at a perfect amount of +acc. Mithra drg, with 8/8 polearm merits needs +49 acc to hit 95% acc. you are at +50, so you are just barely at the cap. Keeping it there would be very good for your build. Edit: when you get those last 3 merits. Edited, May 16th 2008 2:21pm by Meldi Keep in mind this only applies to Greater Colibri. It will vary from camp to camp and from event to event. Thanks Souji... I always forget that one important piece of information that is critical to the argument. It's been a long time since I leveled DRG. I even put it in semi-retirement and sold some stuff to fund PUP. Now I'm playing both, and I need to re-gear. Any advice on what I can get / do without? 
LV 65

Weapon: GKL
Grip: Mythril +1
Range: X
Ammo: Bibiki Shell
Head: Walkure Mask
Neck: Spike
Ear 1: Spike
Ear 2: Spike
Body: Jaridah
Hands: AF
Ring 1: Woodsman
Ring 2: Venerer
Back: Amemet NQ
Waist: Life Belt / Swordbelt +1
Pants: AF
Feet: Marine F

Total Bonuses (w/ Life Belt): ACC: 32, ATK: 25, STR: 9

Parts I'm aware of:
Neck: Spike --> Chivalrous Chain
Pants: AF --> Feral Trousers

Anything fun, new, and exciting come out for gear or has it been the same old stuff from about a year ago? Any help is appreciated, thanks!

For my gear. I have plans for Mythril Grip +1 soon and Tabin Boots +1 at 59 and hopefully Bushido Cape at 60. I'm 55 drg/sam as it says, going mostly for acc atm but also accepting other ideas on how I can do it. Also at night time I use a vampire mask instead of the dandy specs. With gear + trait + hasso I come to +40 acc. add acc from dex ([61/4]x3=45.75) 85.75. Edit: No merits yet, this is my highest job and I'm only unlocked to 60 right now.
Edited, May 25th 2008 10:15pm by SINisterWyvern

SINisterWyvern wrote: For my gear. I have plans for Mythril Grip +1 soon and Tabin Boots +1 at 59 and hopefully Bushido Cape at 60. I'm 55 drg/sam as it says, going mostly for acc atm but also accepting other ideas on how I can do it. Also at night time I use a vampire mask instead of the dandy specs. With gear + trait + hasso I come to +40 acc. add acc from dex ([61/4]x3=45.75) 85.75. Edit: No merits yet, this is my highest job and I'm only unlocked to 60 right now. Edited, May 25th 2008 10:15pm by SINisterWyvern

don't get bushido cape. wait 1 level for amemet+1. bushido cape is sh*t. "what about the zanshin???" zanshin is sh*t. "i need that 1 store tp!" no you don't. pole strap at 60 is best. woodsman/sniper rings instead of worthless DEX rings.
if you want to level into assault gear, make it chivalrous chain (you can assault for it if you'd rather do fun activities than save gil; in fact, assault is good for making gil, if you can find people do level capped assaults with you). upgrading your earrings would be nice. first get assault earring asap, and may as well keep second fang until you get to coral earrings. everything else is good. if your ACC is good, you may want to consider complementing your crow hose with crow head and feet. pole strap, smart grenade (it's a quest; it's free), and the money upgrades you should already know you should get (2nd woodsman, HQ mantle, etc). as for pole strap, if you've been away awhile, it was probably expensive when you left. check the AH now, the price has gone way down (at least on phoenix). Edited, May 26th 2008 1:27am by milich I was looking through some gear on FFXIclopedia when I found the Ecphoria Ring. Would this be a decent substitute for the 2nd Woodsniper? I don't think 1 less ACC is going to destroy my setup. Also, should I get the Scorpion Harness, or stick with Jaridah and go for an Assault Jerkin from Ose in a few levels? Ecphoria is a fine ring. If you can afford it, get SH until you get assault jerkin. Get an assault jerkin. Even after you have AJ, if you can afford it, keep SH for times where you're fighting evasive mobs. You can replace SH later with Hydra mail, Pahluwan body or homam corazza. All 4 make a fine acc body piece. milich wrote: don't get bushido cape. milich wrote: don't get bushido cape. milich wrote: don't get bushido cape. milich wrote: don't get bushido cape. TY. Just I have to work on where to farm at. I've been in MMOs for 7 years so not fully lost but this is my highest job, just started in january. I know the best stuff and where I can upgrade if I just had that kinda gil flow, lol. I'll look up those others ya listed and see how I can work 'em in. Thank you. 
Edited, May 26th 2008 6:10pm by SINisterWyvern I'll jump on the thread and ask my question about gears. For a couple of times now, I've been trying to get Conte Cosciales for TP and WS. I'm farming the materials and I'm having a friend trying to synth them (0/12 so far). Now, it was brought in a post earlier that Dusk pants is a good alternative to macro in for Penta Thrust. That made me think: Are Dusk trousers a better choice than the Conte Cosciales? I thought about it and I do not know which would be better TP wise and WS wise. One has more attack than the other, but one has a STR bonus and a TP jump bonus. ElvaanEiji wrote: I'll jump on the thread and ask my question about gears. For a couple of times now, I've been trying to get Conte Cosciales for TP and WS. I'm farming the materials and I'm having a friend trying to synth them (0/12 so far). Now, it was brought in a post earlier that Dusk pants is a good alternative to macro in for Penta Thrust. That made me think: Are Dusk trousers a better choice than the Conte Cosciales? I thought about it and I do not know which would be better TP wise and WS wise. One has more attack than the other, but one has a STR bonus and a TP jump bonus. conte better for jumps (of course) and probably better for penta (definitely better for wheeling thrust). dusk slightly better for TPing and possibly for penta. if you're interested as to why, the comparison is: 2 enmity so, 5ATT vs 3STR. 3STR has a 75% chance of raising your fSTR by 1. with thal, that's about a 1% increase. 5ATT may beat that, may not. if you start with, say, 500ATT, and fight something 6 levels above you with 322 DEF (like greater colibri), 5ATT is around a 1.2% increase. if your ATT starts lower, 5ATT does more. if it starts higher, the ATT does less. it does more vs higher DEF, higher level mobs. if your fSTR gets raised by conte's 3 STR, it will be a larger % increase if you use love halberd instead of one of the higher base dmg polearms. 
i've heard the movement speed sh*t on dusk is pretty annoying (i don't know, i use homam), and the difference is close enough that you may want to just TP in conte once you get it made (until you get homam or barbarossa). if you're interested in where the numbers above come from, read the section called "background" in this post on the SAM forums. Edited, May 26th 2008 9:17pm by milich That's pretty interesting. Best situation would probably be having the two and swap them to suit best the situation. But seeing that, I'll probably end up with the Conte Cosciales for TP and WS until I can get my hand on the Homam Cosciales. I'll keep the Conte for WS I guess since they're good for Wheeling AND Penta. Thanks for that. I would like to add to what Milich said and state that if your Conte Cosciales dont increase your fSTR (or WSC on Penta/Wheeling), Dusk trousers will always be better for anything you are doing. The confusion is when it does increase fSTR and or WSC. Okay I was trying to figure something out. As Drg/Sam is the pole strap > mythril grip +1? I just don't see the adding 2% chance to double attack to myself would add more than +2 str/vit/acc. Not sure though so that's why I'm going to ask here and see what the opinion is. Pole Strap is pretty much the best grip available at the present time - the exception being if you're using a multi-hit weapon (e.g. Soboro). SINisterWyvern wrote: Okay I was trying to figure something out. As Drg/Sam is the pole strap > mythril grip +1? I just don't see the adding 2% chance to double attack to myself would add more than +2 str/vit/acc. Not sure though so that's why I'm going to ask here and see what the opinion is. homework: what exactly does mythril grip+1 do, and what exactly does pole strap do? answer in terms of % DoT increase. if you can't answer, peruse the DRG and SAM forums until you can. once you know the answer, post it, and you'll have answered your own question. edit: this is not a question of opinion. 
if "what's better in terms of overall DoT increase?" is the question, there is exactly 1 right answer. don't think of it in terms of "opinions". we know enough about this game to answer this question with complete certainty (p.s. the answer is "pole strap is better", so if you don't come to that, try again). Edited, May 30th 2008 11:46pm by milich Thanks Milich instead of helping me you found a gentleman's way of being a douchebag. Edit: Ty Solrain for actually answering me. Edited, Jun 1st 2008 4:55pm by SINisterWyvern i love trying to help people and being insulted for it. how about this: fuck you. expect about the same if you come asking for people to do more work for you, dick. edit: alla put a quote by darkani instead of SINisterWyvern when i originally posted this. i'm not sure how, considering there aren't any posts by darkani on this page according to the quick ctrl+f i just did (so i can't see how it could have been my mistake as i first assumed), but meh. this edit cuts some of the punchiness of my initial reply, so let me reiterate that SINisterWyvern can fuck as an aside, i suspect this jackass thinks simply saying "pole strap is better" is "really answering" and "more helpful" than trying to get him/her to understand what STR, ACC, VIT, and double attack do. in case anyone else agrees, i want to point out that you are wrong. if you would like me to prove this point, watch this: brass grip is better. Edited, Jun 2nd 2008 1:39am by milich It wasn't the answer it was the way you answered it. Sorry I don't pay allah's to search the forums and came up with no answers on my own. In logical fashion I saw Str = WS's and dmg, Acc = everything, and Vit = jumps vs. an extra swing every 50 swings statistically. I do appreciate the answer but I don't appreciate the you made about me looking for it. Sorry for being a newer player with a real question. On the note of doing any work did you do all the number crunching to come to your conclusion? 
On the note of doing any work did you do all the number crunching to come to your conclusion?

as a matter of fact, yes i did.

I've got a couple questions about my gear as well. I'm pretty much an AH dragoon, as I grind far too often, and am still rank 5, and on Promy 2-5. Right now my gear is as follows:

weapon - Thalassocrat
sub - Pole Strap
ammo - Smart Grenade
head - Walahra Turban (TP) and Wyvern Helm (if i remember to swap for WS)
neck - Chivalrous Chain
ears - Assault/Coral
body - Assault Jerkin
hands - Tarasque Mitts
rings - 2x Woodsman
back - Amemet Mantle +1
waist - Potent Belt
legs - Barone Coscialles
feet - Amir Boots

Right now I have 5 Polearm Merits and 1 Merit in Angon. So far I'm pretty happy with my setup, as I just exited a decent Merit PT at Mamool Ja Staging point contributing 40% of the damage from me, and 7% more from Kagero. However, my accuracy (only 4 polearm merits at the time) was only 82.9% while eating meat. I realize that Mamool are very evasive, but I thought the higher damage output from Coeurl Subs made up for the 4-5% accuracy loss from not using Sole Sushi. However, as I'm slightly elitist, I'm looking for upgrades to my setup. I've been thinking about swapping out the Tarasque Mitts for Dusk Gloves, but I'm not sure if the 5%>8% difference is worth 400k. Other than that, I'm not really sure what I should be looking for, and I think i'll stick with the amount of accuracy I have until i have capped Polearm skill, and then go from there. Any tips, comments would be greatly appreciated.
Thanks in advance, well heres mine i guess, don't know if it matters for my lvl though, feel free to tip me in on what would be better but not for rich people >.> yeah your prolly gonna tell me that i could use more acc gear but i dont miss too much so i sacrificed my tilt belt for the sword belt and my DEX rings for STR Korokedo wrote: yeah your prolly gonna tell me that i could use more acc gear but i dont miss too much so i sacrificed my tilt belt for the sword belt and my DEX rings for STR Ya, I'd say a bit more acc gear, although it depends what you're eating. I'm more of a fan of accuracy gear (in as many slots as I can get it) and meat as opposed to mixed attack/accuracy gear and I just went through your lvls with this stuff (Brigandine>Peti, Brass+1>Mythril+1, Battle Gloves>SFG, Beelte+1>Spike, Swordbelt+1/Tilt Belt>Life Belt/Swift Belt[I don't use Swift just yet but it's there]) and only parsed 80%-85% accuracy with meat and usually one Madrigal which to me is frustrating. I pretty much keep the same stuff in for Penta - maybe throw the Swordbelt+1 in when I feel lucky (read: feel like missing 2-3 hits XD). So take it from me who just went through your level range. Accuracy, accuracy, accuracy and meat! Gear critiques welcome, btw. I thought this thread was all about helping people with gear questions...not tearing them a new one for asking a gear question. The guy had a legitimate question and can't use the search function. I could understand if he was one of those "You dont need to tell me how to better gear my job" guys, but from what I could see, he just wanted some clarification on something. I got a Love Halberd last night^^ Haven't had the chance to try it out on colibri yet. Solrain wrote: Korokedo wrote: yeah your prolly gonna tell me that i could use more acc gear but i dont miss too much so i sacrificed my tilt belt for the sword belt and my DEX rings for STR Ya, I'd say a bit more acc gear, although it depends what you're eating. 
I'm more of a fan of accuracy gear (in as many slots as I can get it) and meat as opposed to mixed attack/accuracy gear and I just went through your lvls with this stuff (Brigandine>Peti, Brass+1>Mythril+1, Battle Gloves>SFG, Beetle+1>Spike, Swordbelt+1/Tilt Belt>Life Belt/Swift Belt [I don't use Swift just yet but it's there]) and only parsed 80%-85% accuracy with meat and usually one Madrigal which to me is frustrating. I pretty much keep the same stuff in for Penta - maybe throw the Swordbelt+1 in when I feel lucky (read: feel like missing 2-3 hits XD). So take it from me who just went through your level range. Accuracy, accuracy, accuracy and meat! Gear critiques welcome, btw.

lol also i just remembered at lv49 i get penta thrust so imma need more acc to land my hits anyways so more acc it is! (ding hit lv47 a few mins ago)

i'm satisfied with my party gear (see above), but i would like some input on a solo setup. i dont have much solo experience, and just recently i have been getting into soloing sea puks by the mamool staging point with blu sub. here's what i've been working with, and my ideas for solo upgrades in parenthesis.

weap: iron ram lance
sub: pole strap
range: mamoolbane
head: drachen helm
neck: chiv chain
earring: fowling/storm loop (insomnia earring?)
body: SH (hydra mail?)
hands: carapace gauntlets (hydra finger gauntlets?)
rings: woodsmans or rubys (unyielding ring?)
back: amemet +1
waist: warwolf or life belt
legs: drachen brais
feet: amir

i also have a chanoix's gorget, so i will look at macroing that in soon. i used this setup last night and solo'd 18k lp. hydra mail looks nice, and that would bring my dmg reduction up to -13% with some nice dex, agi and def over the SH. hydra hands look nice too with the added vit, acc and eva (look nice for party use too). i'm thinking of unyielding for the str and agi. i really want to lessen the number of crits i take, they're a major annoyance.
and then insomnia for +hp/mp...the storm loop is only there to replace my assault earring with its -def and -eva. is there any elusive rare/ex gear that is great for solo?

Zisaa, Mercenary Major wrote: I thought this thread was all about helping people with gear questions...not tearing them a new one for asking a gear question. The guy had a legitimate question and can't use the search function. I could understand if he was one of those "You dont need to tell me how to better gear my job" guys, but from what I could see, he just wanted some clarification on something.

since i don't have my free premium anymore, i use a barbaric search function called a scroll bar and my eyes. honestly, i didn't feel like explaining base damage and accuracy for the thousandth time. i'd be shocked if i haven't provided math answering this person's question in at least 3 threads on the first page. and really, looking around for various explanations in different threads IS HELPFUL. i directed him/her to a good place to look (DRG and SAM forums, probably could have mentioned the "calculating melee damage" article by VZX on the wiki). i didn't think that was acting like a douchebag, and i think anyone who can't see why i was offended by the response is pretty thick.

if it makes you happy, here's the mythril grip+1 vs pole strap math:

pole strap increases total melee damage by 2%, for reasons which should be pretty obvious.

for mythril grip+1, 2STR = 50% chance of raising base dmg. with an 81 dmg lance + your fSTR, base dmg will be somewhere between 85 and 90. 1/90 = 1.11% increase in dmg from the 2STR, 50% of the time (the other 50% of the time it's 0% increase in dmg, so average of .555% bonus). 2ACC raises ACC% by 1. assuming 85% ACC, that's a 1.17% increase in total melee damage.

1.0117 * 1.0111 = 2.3% bonus best case, 1.17% bonus worst case, average bonus: 1.7%

if ACC is capped, this bonus drops to 1.1% half of the time, 0% the other half.
so, for melee damage:
mythril grip+1: average bonus = 1.7%
pole strap: average bonus = 2%.

(incidentally, add in /WAR: going from 15% DA to 17% DA represents a 1.7% damage increase. to illustrate double attack's mildly decreasing returns:
100 rounds w/ 0% DA = 100 swings, 0% increase
100 rounds w/ 2% DA = 102 swings, 2% increase over 0% DA
100 rounds w/ 15% DA = 115 swings, 15% increase over 0% DA
100 rounds w/ 17% DA = 117 swings, 1.73% increase over 15% DA

notably, mythril grip+1's bonus goes down as you level up, because a) your ACC should be higher, and b) your base damage definitely will start higher, meaning you'll get about a .95% increase half of the time from the 2STR, and of course a 0% increase the other half of the time, for an average dmg/hit increase of .47%)

the 2VIT will raise jump by about .9%, or by 0%, depending on your current VIT. add in the 2ACC and you get about a 2.6% jump damage bonus when fSTR doesn't go up (assuming the VIT multiplier impacts your dmg, the chances of which happening are pretty good). the other 50% of the time (when fSTR is raised), it's about 3.6% damage increase.

so, for jump damage: mythril grip+1 will almost always beat pole strap's 2% increase.

shame swapping pole strap to mythril grip+1 erases TP (though on the topic of TP, you'll get overall a slight bit more of it from pole strap DAs than you will from mythril grip+1's ACC; the chances of this perceptibly influencing anything are rather low though). if ACC is capped, of course, pole strap pulls ahead (at least 50% of the time).

the 2STR has the same 50% chance of raising fSTR, and about a 30% chance of raising WSC on penta thrust.
base dmg will start out around 120. so:
35% of the time: 0% bonus (fSTR up 0, WSC up 0)
15% of the time: .83% bonus (fSTR up 0, WSC up 1)
35% of the time: .83% bonus (fSTR up 1, WSC up 0)
15% of the time: 1.66% bonus (fSTR up 1, WSC up 1)
average bonus: .66%

since the fTP of all hits of penta thrust is 1.0, 2% DA will provide a flat 2% total WS dmg increase, and thus wins for penta thrust.

for wheeling thrust, again 50% of the time fSTR goes up, 50% of the time it doesn't. WSC will go up around 90% of the time, so for ease let's just say half the time base dmg goes up by 1, half the time it goes up by 2. your base dmg will start higher because of the 50% STR mod, say around 130 (90 STR). so:
50% of the time: .77% bonus
50% of the time: 1.5% bonus
average bonus: 1.13%

wheeling thrust's fTP is 1.75, so the first hit is 75% stronger than the double attack. so, pole strap will only provide around a 1.14% damage increase for wheeling thrust. (1/1.75 * 2 = 1.14~)

so for WS dmg: pole strap wins soundly for penta thrust, and is about even for wheeling thrust (if the ACC matters for wheeling, mythril grip+1 would win slightly, but for reasons i don't want to go into, it probably won't matter).

conclusion: if most of your damage came from jump, mythril grip+1 would win. however, most of your damage comes from melee and WS, where pole strap does more.

side comment about this discussion: the number crunching wasn't entirely necessary, if you look up and get a feel for how damage works. pole strap obviously increases damage by 2% or slightly less. if you know your base dmg is around 100, and that 2STR raises fSTR 50% of the time, you know that you're getting around an average dmg bonus of .5% from the STR, and 1% from the ACC. 1.01 * 1.005 < 1.02, and we're done. but yes, i can and often do crunch the numbers, and i don't appreciate being insulted for trying to urge someone on to learning about it themselves.
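The expected-value arithmetic in the post can be sketched in a few lines of Python (my addition, using the post's assumed numbers — an ~85-90 base damage and 85% hit rate; rounding differs slightly from the post, which uses 1/90):

```python
# melee bonus comparison: mythril grip+1 vs pole strap, per the post's assumptions
base_dmg = 87      # assumed: ~85-90 with an 81-dmg lance plus fSTR
hit_rate = 0.85    # assumed hit rate before the grip's +2 ACC

# mythril grip+1: +2 STR raises base dmg by 1 half the time; +2 ACC adds 1 point of hit rate
str_bonus = 0.5 * (1 / base_dmg)
acc_bonus = 0.01 / hit_rate
grip_bonus = (1 + str_bonus) * (1 + acc_bonus) - 1

# pole strap: 2% double attack is roughly a flat 2% more swings
strap_bonus = 0.02

print(f"mythril grip+1: {grip_bonus:.1%}, pole strap: {strap_bonus:.1%}")
```

With these inputs the grip's expected melee bonus lands a bit under 2%, matching the post's conclusion that pole strap edges it out for regular melee.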
i post things like this because i had to learn all the math myself, since back in the day the people who knew it (with the exception of VZX) never made any effort to share or explain their knowledge. i've tried to change this, and have been (along with many other people doing the same thing on KI, here, and BG) largely successful. calling it "acting like a douchebag" makes me think twice about helping people. what do i care if you blindly buy sh*t off the AH and take any asshole on a forum's word for gear decisions (and yes, i have verified for myself in game all the equations i use)? am i an ass for requesting people take 10 minutes to find and read 1 of the other 200 posts i spent over an hour each writing on the topic they're looking for? you're not the first jerk to ask about pole strap you know. so again, fuck you.

korokodo wrote: well heres mine i guess, don't know if it matters for my lvl though, feel free to tip me in on what would be better but not for rich people >.> yeah your prolly gonna tell me that i could use more acc gear but i dont miss too much so i sacrificed my tilt belt for the sword belt and my DEX rings for STR

you miss more than you think you do at that level; your ACC isn't capped unless you're fighting like even match mobs exclusively. your 2 STR rings raise your dmg by about 2%, while 2 woodsmans would raise it by about 7%, which is a significant improvement. and, as you seem to have already noticed, you'll need to get the ACC rings eventually to avoid gimpness (i would never re-invite someone if they showed up to a party TPing in STR rings by the way; it's a big nono). so, may as well get them now.

the tilt belt, and even moreso life belt at 48, is better than swordbelt, but keep the swordbelt. if you find yourself in a high ACC situation (maybe getting madrigal, or soloing something DC/EM, or in a party fighting T/VT) you may want to switch to swordbelt. it's a nice situational piece of equipment.
though, if you only wear 1 belt for the next several levels, making it life belt would be the right choice.

while dusk gloves are indeed an upgrade, i wouldn't bother with them until you get swift belt and maybe some homam/barbarossa (though you may as well wait on homam hands in that case, like i did). if you regularly use hasso, dusk gloves will be a bigger increase, though dusk pants would be nice as well or instead, at least before you build up a haste setup. also, in the 80-85% ACC situation you described, wearing ohat over turban would probably be a good idea, especially if you're not using hasso or getting march and/or haste. tiphia sting is slightly better than smart grenade if you have some gil lying around. everything else i wouldn't change until you start getting haste gear.

i'm not the world's greatest soloer, so i'd get some other opinions here, but i'm not convinced that the huge DoT decrease you absorb by using iron ram lance is justified by its -10% dmg taken or HP/MP. i know the dps is comparable to thalassocrat, but you still do jumps and WSs. i'm thinking you'd do maybe 15%-20% more DoT with thalassocrat, ie the mob would die about that much quicker. maybe it's situational, dependent on how hard the mob hits and how much you end up simply outlasting things?

i agree with the hydra upgrades. as for elusive r/ex gear, wyrm armet of course, and homam for haste and HP/MP. i think swift belt would ultimately help as well, as would turban or ohat for when you're not doing healing breath (i'm lazy and wear AF+1 head while TPing solo). if you do limbus but for some reason don't get homam, AF+1 legs are very nice. i can't think of anything else, but like i said before, i'm a pretty bad soloer.
milich wrote: korokodo wrote: well heres mine i guess, don't know if it matters for my lvl though, feel free to tip me in on what would be better but not for rich people >.> yeah your prolly gonna tell me that i could use more acc gear but i dont miss too much so i sacrificed my tilt belt for the sword belt and my DEX rings for STR

Well thanks for another great math post. Will they ever learn?

Thanks very much Milich ^^

Definitely go for thalassocrat as milich mentioned. And since you already soloed in your current setup, you don't really need to go out of your way for specific solo gear. I solo in my PT gear setup except for head gear, where I use homam instead of turban for the DEF. Hydra mail is still a good upgrade over SH, and you can still use it in PTs, though it costs a lot. The carapace gauntlets are nice since the latent triggers fast in solo. Fowling / storm is also fine, but what you should look into for solo is to macro in +HP gear so you can trigger HB earlier. Do 1 macro for HB / wyvern HP gear and then hit your HB trigger. Here is some useful gear:

ammo: happy egg (if you want to use tiphia sting or smart grenade for TP)
neck: chanoix's gorget
earring: insomnia earring (if you already have it) / physical earring
rings: bomb queen ring
back: gigant mantle
waist: ocean sash (if you are hume)
feet: marine m boots (if you are hume)

Most of the high +HP gear costs a lot though :-/ I personally haven't gotten gigant mantle or ocean sash yet and sold marine m boots again. When you hit your HP gear macro you don't have to wait till your HP bar changes. You can just hit your HB trigger macro right after. Saurian helm also helps a lot on HB if you don't have wyrm armet and should be your 1st priority.

After some levels and merits I feel like I'm not getting enough bang for my buck with my gear. I'm an Elvaan M, 4 merits into Polearm (6 now), 1 into Crit. Can somebody please critique my setup? This is my TP/Penta set up:

Pole Strap
O. Hat (Walahra, Wyvern Helm avail.)
Smart Grenade (Is the Tiphia Stinger any better?)
Chiv Chain
Assault Earring
Fowling Earring
Barone Corazza
Woodsman x2
Amemet +1
Life/Warwolf/Sword +1
Conte Cosciales
Baron Gambieras

Aside from some haste items I'm farming for, and CoP/Sky being out of the question, is this the best I can do? Are there things that I just don't need in my set up? I'm hoping for both opinions and maybe some math. I'll appreciate any help I get, though.

This is a pretty decent TP set until you can get some haste gear. You should parse at about 85% accuracy more or less on greater colibri, higher on imps/flies/jnun, lower on mamools, which are the 3 common targets at 75. If you can get some decent haste equipment (some of it ra/ex) you might be able to switch some of this equipment out, although it will lower your accuracy a bit. For tp, some pieces I would recommend you get to counter the accuracy you are going to lose with common haste pieces would be Assault for Amir boots and Pahluwan Seraweels (SE hates us, that is why they put both of these items in the same assault). That will allow you to safely take out 10 accuracy elsewhere in your tp set without hurting your accuracy percentage.

Yes, Tiphia Sting is going to be better for you than smart grenade. -2 atk for +2 accuracy will make a big difference in the long run. Getting those last 2 polearm merits will give you another 3.6 acc and 4 atk too, so don't neglect those. If you haven't started already, start camping the Assault Jerkin; +3 acc +18 atk will be good for your build for 2 reasons. One, it is better than Barone Corazza (except for maybe soloing, 2 hp Regen, and +2 tp per Jump). Two, it will allow you to sell that Barone Corazza so you can afford some of that haste gear you are talking about. Sadly, without Sea/Sky your options for improvement are pretty limited.
Maybe you can do salvage/nyzul isle assault and go for some ares/askar gear, but you are pretty tapped out as far as AH, easy NM gear you can get. FYI, with your current gear, I would tp and WS in the same gear, full accuracy.

That's basically what I do at the moment. Never swap out anything. It saddens me that I really am lacking in the dmg potency dept, but you're helping me steer myself in the right direction.

DaimenKain wrote: milich wrote: korokodo wrote: well heres mine i guess, don't know if it matters for my lvl though, feel free to tip me in on what would be better but not for rich people >.> yeah your prolly gonna tell me that i could use more acc gear but i dont miss too much so i sacrificed my tilt belt for the sword belt and my DEX rings for STR

Well thanks for another great math post. Will they ever learn?

i quoted this one because its smaller but like i said i remembered i get (got) penta thrust so i bought me a life belt for 20k and i switched out my str rings for dex rings because like i said, im broke and 90k-200k rings are too much for me at the time because im saving for my scorp harness in 7 lvls, but i will get some of something sooner or later

thanks for the comments on my solo gear. i need to work on a few things and see what works out best for me. i am a hume male so i can get the hp+ gear. i had marine boots...but theyre soo expensive i decided to sell them! i plan to replace them with rutters for wheeling.

Really just about to get 75, but gear on my profile. Drachen Greaves FTW! Gonna get better, but I dont have 10 hours to play every day. Quit playing for almost a year, gave Everything away, got back end of February this year @ 60 and started over gear wise, only a full set of AF, including polearm for a few days LOL >.<. Im not doin too bad considering I gave away about 4 mil in gear/gil last time. Up to about 2.5 mil just on drg...
Not bad ha ha ha, better equipped than most drg I see. Have a few different pieces I use for /whm or /sam; next up Blue Mage, sub job #4.

Actually, I would like a second opinion for a lvl 60 polearm. Seems like an easy choice for me, but for advice not taken... For a Galka:

Schwarz Lance (8.93 dps)
Grand Knight's Lance (9.76 dps)

Do the stats on the Schwarz make up in jumps etc. for the dps, and with a galka w/ close to no acc gear, how much more of a gap in dps should be expected from extra misses etc.?

(For a lot more gil...) Dark Mezraq +1 (10.29 dps - 9.88 dps reg.)

Have to figure out how to link items to page still >.>.

Edited, Jun 5th 2008 1:17pm by JackDanielsDrg
Math Help

Suppose that f and g are functions from a set S to R and that f is continuous at a given number a at which the function g fails to be continuous.
(1) What can be said about the continuity of the function f+g at the number a?
(2) What can be said about the continuity of the function fg at the number a?
(3) What can be said about the continuity of the function fg at the number a if f(a)=0 and g is a bounded function?
(4) What can we say about the continuity of the function fg at the number a if f(a) does not equal 0?

What do you think? For (2), how about $f(x)=x$ and $g(x)=\begin{cases}\frac{1}{x} & \mbox{if}\quad x \neq 0 \\ 1 & \mbox{if}\quad x=0\end{cases}$
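For part (1), a short standard argument (my addition, not from the thread) shows that $f+g$ is forced to be discontinuous at $a$:

```latex
\text{Suppose, for contradiction, that } f+g \text{ is continuous at } a.
\text{ Since } f \text{ is continuous at } a, \text{ the difference}
\quad g = (f+g) - f
\text{of two functions continuous at } a \text{ is itself continuous at } a,
\text{contradicting the hypothesis on } g.
\quad \therefore\ f+g \text{ is discontinuous at } a.
```

Note this trick does not carry over to the product $fg$, which is why parts (2)-(4) need the extra hypotheses on $f(a)$ and the boundedness of $g$.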
Quantized energy - Photon

San K: is there no minimum value for the quantum?
San K: is frequency quantized?

The frequencies of a system are very often constrained to discrete values. This is true even in classical mechanics. For instance, the resonant modes of a violin string have discrete frequencies. The boundary conditions of the string cause the wavelengths of the string to change by discrete values. This results in the frequencies changing by discrete values. This discrete division between notes of a stringed instrument has been known since Aristotle. Probably even before Aristotle. Newton understood frequencies of vibrations on strings. However, the separate frequencies on a string were not called quanta.

The resonant frequencies of a hydrogen atom are caused by the periodic boundary conditions of the electron-wave. In that sense, they are like the waves on a violin string. However, the discrete values of frequency are not the direct result of an ad hoc hypothesis. What is really quantized in a hydrogen atom is the radius of the electron's orbit. The radius of the electron's orbit is a type of amplitude. You can think of the radius as the limit of the back and forth motion of the electron. It is this radius, which is a type of amplitude, which is quantized. The discrete values of frequency are an indirect consequence of the fact that the radius is "quantized". The frequencies aren't quantized, but the radii are quantized.

You have to be careful about the word quantized. The word is not quite synonymous with discrete. Quantization is a type of discreteness. Maybe the word "digitized" would be better. No, I take that back. There are certain qualifications to a digital system. The "quantization" first hypothesized by Planck referred to the amplitude (energy) of the oscillators, not to their frequency.

San K: does quantization only make sense when in a bounded state at a specific orbital/energy level/shell in an atom?

Discrete values for frequency usually make sense for bounded states.
The real reason frequencies change in discrete values is because of the boundary conditions on the wave. The violin string is a good analog. The reason that the notes of a violin string are discrete is because the violin string is fixed on both ends. Thus, the violin string has to be "bounded" in order to produce notes. Notes are bounded states! A violin string that isn't tied down does not produce separate notes.

I propose that frequencies should never be called quantized. Frequencies are merely discrete.
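To make the violin-string analogy concrete (my addition — this is the standard standing-wave result, not something stated in the thread): a string of length $L$ fixed at both ends only supports wavelengths that fit the boundary conditions, so the allowed frequencies form a discrete set:

```latex
\lambda_n = \frac{2L}{n}, \qquad
f_n = \frac{v}{\lambda_n} = \frac{n v}{2L}, \qquad n = 1, 2, 3, \dots
```

where $v$ is the wave speed on the string. The spacing between allowed frequencies comes entirely from the boundary conditions, which is exactly the point being made about "discrete" versus "quantized".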
Crypto Functions

This module provides a set of cryptographic functions.

• md5: The MD5 Message Digest Algorithm (RFC 1321)
• sha: Secure Hash Standard (FIPS 180-2)
• hmac: Keyed-Hashing for Message Authentication (RFC 2104)
• des: Data Encryption Standard (FIPS 46-3)
• aes: Advanced Encryption Standard (AES) (FIPS 197)
• ecb, cbc, cfb, ofb: Recommendation for Block Cipher Modes of Operation (NIST SP 800-38A)
• rsa: RSA public-key signature verification
• dss: Digital Signature Standard (FIPS 186-2)

The above publications can be found at NIST publications and at IETF.

byte() = 0 ... 255
ioelem() = byte() | binary() | iolist()
iolist() = [ioelem()]

start()
Starts the crypto server.

info()
Provides the available crypto functions in terms of a list of atoms.

md5(Data) -> Digest
Data = iolist() | binary()
Digest = binary()
Computes an MD5 message digest from Data, where the length of the digest is 128 bits (16 bytes).

md5_init() -> Context
Creates an MD5 context, to be used in subsequent calls to md5_update/2.

md5_update(Context, Data) -> NewContext
Data = iolist() | binary()
Context = NewContext = binary()
Updates an MD5 Context with Data, and returns a NewContext.

md5_final(Context) -> Digest
Context = Digest = binary()
Finishes the update of an MD5 Context and returns the computed MD5 message digest.

sha(Data) -> Digest
Data = iolist() | binary()
Digest = binary()
Computes an SHA message digest from Data, where the length of the digest is 160 bits (20 bytes).

sha_init() -> Context
Creates an SHA context, to be used in subsequent calls to sha_update/2.

sha_update(Context, Data) -> NewContext
Data = iolist() | binary()
Context = NewContext = binary()
Updates an SHA Context with Data, and returns a NewContext.

sha_final(Context) -> Digest
Context = Digest = binary()
Finishes the update of an SHA Context and returns the computed SHA message digest.

md5_mac(Key, Data) -> Mac
Key = Data = iolist() | binary()
Mac = binary()
Computes an MD5 MAC message authentication code from Key and Data, where the length of the Mac is 128 bits (16 bytes).
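As a cross-check for readers coming from Python (my addition, not part of the Erlang docs): the stdlib `hashlib`/`hmac` modules produce the same digests and mirror the init/update/final pattern. Note that `sha` here means SHA-1, and I am assuming the `*_mac` functions are HMAC constructions (the RFC 2104 reference above suggests so), with the `_96` variants truncated to the first 96 bits:

```python
import hashlib
import hmac

# one-shot digests: lengths match the Erlang API above
assert len(hashlib.md5(b"abc").digest()) == 16    # md5/1: 128 bits
assert len(hashlib.sha1(b"abc").digest()) == 20   # sha/1: 160 bits

# incremental hashing mirrors md5_init / md5_update / md5_final
ctx = hashlib.md5()
ctx.update(b"ab")
ctx.update(b"c")
assert ctx.digest() == hashlib.md5(b"abc").digest()

# assuming md5_mac_96 is HMAC-MD5 truncated to 12 bytes
mac = hmac.new(b"key", b"data", hashlib.md5).digest()
mac_96 = mac[:12]
print(len(mac), len(mac_96))  # 16 12
```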
md5_mac_96(Key, Data) -> Mac
Key = Data = iolist() | binary()
Mac = binary()
Computes an MD5 MAC message authentication code from Key and Data, where the length of the Mac is 96 bits (12 bytes).

sha_mac(Key, Data) -> Mac
Key = Data = iolist() | binary()
Mac = binary()
Computes an SHA MAC message authentication code from Key and Data, where the length of the Mac is 160 bits (20 bytes).

sha_mac_96(Key, Data) -> Mac
Key = Data = iolist() | binary()
Mac = binary()
Computes an SHA MAC message authentication code from Key and Data, where the length of the Mac is 96 bits (12 bytes).

des_cbc_encrypt(Key, IVec, Text) -> Cipher
Key = Text = iolist() | binary()
IVec = Cipher = binary()
Encrypts Text according to DES in CBC mode. Text must be a multiple of 64 bits (8 bytes). Key is the DES key, and IVec is an arbitrary initializing vector. The lengths of Key and IVec must be 64 bits (8 bytes).

des_cbc_decrypt(Key, IVec, Cipher) -> Text
Key = Cipher = iolist() | binary()
IVec = Text = binary()
Decrypts Cipher according to DES in CBC mode. Key is the DES key, and IVec is an arbitrary initializing vector. Key and IVec must have the same values as those used when encrypting. Cipher must be a multiple of 64 bits (8 bytes). The lengths of Key and IVec must be 64 bits (8 bytes).

des3_cbc_encrypt(Key1, Key2, Key3, IVec, Text) -> Cipher
Key1 = Key2 = Key3 = Text = iolist() | binary()
IVec = Cipher = binary()
Encrypts Text according to DES3 in CBC mode. Text must be a multiple of 64 bits (8 bytes). Key1, Key2, Key3 are the DES keys, and IVec is an arbitrary initializing vector. The lengths of each of Key1, Key2, Key3 and IVec must be 64 bits (8 bytes).

des3_cbc_decrypt(Key1, Key2, Key3, IVec, Cipher) -> Text
Key1 = Key2 = Key3 = Cipher = iolist() | binary()
IVec = Text = binary()
Decrypts Cipher according to DES3 in CBC mode. Key1, Key2, Key3 are the DES keys, and IVec is an arbitrary initializing vector. Key1, Key2, Key3 and IVec must have the same values as those used when encrypting. Cipher must be a multiple of 64 bits (8 bytes).
The lengths of Key1, Key2, Key3, and IVec must be 64 bits (8 bytes).

aes_cfb_128_encrypt(Key, IVec, Text) -> Cipher
aes_cbc_128_encrypt(Key, IVec, Text) -> Cipher
Key = Text = iolist() | binary()
IVec = Cipher = binary()
Encrypts Text according to AES in Cipher Feedback mode (CFB) or Cipher Block Chaining mode (CBC). Text must be a multiple of 128 bits (16 bytes). Key is the AES key, and IVec is an arbitrary initializing vector. The lengths of Key and IVec must be 128 bits (16 bytes).

aes_cfb_128_decrypt(Key, IVec, Cipher) -> Text
aes_cbc_128_decrypt(Key, IVec, Cipher) -> Text
Key = Cipher = iolist() | binary()
IVec = Text = binary()
Decrypts Cipher according to Cipher Feedback Mode (CFB) or Cipher Block Chaining mode (CBC). Key is the AES key, and IVec is an arbitrary initializing vector. Key and IVec must have the same values as those used when encrypting. Cipher must be a multiple of 128 bits (16 bytes). The lengths of Key and IVec must be 128 bits (16 bytes).

erlint(Mpint) -> N
mpint(N) -> Mpint
Mpint = binary()
N = integer()
Convert a binary multi-precision integer Mpint to and from an Erlang big integer. A multi-precision integer is a binary with the following form:

<<ByteLen:32/integer, Bytes:ByteLen/binary>>

where both ByteLen and Bytes are big-endian. Mpints are used in some of the functions in crypto and are not translated in the API for performance reasons.

rand_bytes(N) -> binary()
Generates N bytes randomly uniform 0..255, and returns the result in a binary. Uses the crypto library pseudo-random number generator.

rand_uniform(Lo, Hi) -> N
Lo, Hi, N = Mpint | integer()
Mpint = binary()
Generate a random number N, Lo =< N < Hi. Uses the crypto library pseudo-random number generator. The arguments (and result) can be either Erlang integers or binary multi-precision integers.

mod_exp(N, P, M) -> Result
N, P, M, Result = Mpint
Mpint = binary()
This function performs the exponentiation N ^ P mod M, using the crypto library.
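The mpint layout described above is simple enough to sketch in Python (my addition; note that OpenSSL-style MP formats sometimes prepend a zero byte when the high bit of the leading byte is set — this sketch ignores that sign convention and handles non-negative integers only):

```python
import struct

def mpint(n: int) -> bytes:
    """Encode a non-negative int as <<ByteLen:32/integer, Bytes:ByteLen/binary>> (big-endian)."""
    nbytes = max(1, (n.bit_length() + 7) // 8)
    body = n.to_bytes(nbytes, "big")
    return struct.pack(">I", len(body)) + body

def erlint(mp: bytes) -> int:
    """Decode the mpint format back to an integer."""
    (nbytes,) = struct.unpack(">I", mp[:4])
    return int.from_bytes(mp[4:4 + nbytes], "big")

print(mpint(255).hex())  # 00000001ff
```

The 4-byte big-endian length prefix is what `struct.pack(">I", ...)` produces, matching the `ByteLen:32/integer` field in the Erlang bit syntax.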
rsa_verify(Digest, Signature, Key) -> Verified
Verified = boolean()
Digest, Signature = MPint
Key = [E, N]
E, N = MPint
MPint = binary()
Verifies the digest and signature using the public key Key, using the crypto library function for RSA signature verification.

dss_verify(Digest, Signature, Key) -> Verified
Verified = boolean()
Digest, Signature = MPint
Key = [P, Q, G, Y]
P, Q, G, Y = MPint
MPint = binary()
Verifies the digest and signature using the public key Key, using the crypto library function for DSS signature verification.

rc4_encrypt(Key, Data) -> Result
Key, Data = iolist() | binary()
Result = binary()
Encrypts the data with RC4 symmetric stream encryption. Since it is symmetric, the same function is used for decryption.

exor(Data1, Data2) -> Result
Data1, Data2 = iolist() | binary()
Result = binary()
Performs bit-wise XOR (exclusive or) on the data supplied.

DES in CBC mode

The Data Encryption Standard (DES) defines an algorithm for encrypting and decrypting an 8 byte quantity using an 8 byte key (actually only 56 bits of the key is used). When it comes to encrypting and decrypting blocks that are multiples of 8 bytes, various modes are defined (NIST SP 800-38A). One of those modes is the Cipher Block Chaining (CBC) mode, where the encryption of an 8 byte segment depends not only on the contents of the segment itself, but also on the result of encrypting the previous segment: the encryption of the previous segment becomes the initializing vector of the encryption of the current segment. Thus the encryption of every segment depends on the encryption key (which is secret) and the encryption of the previous segment, except the first segment, which has to be provided with a first initializing vector. That vector could be chosen at random, or be a counter of some kind. It does not have to be secret.

The following example is drawn from the old FIPS 81 standard (replaced by NIST SP 800-38A), where both the plain text and the resulting cipher text are given.
We use the Erlang bit syntax to define binary literals. The following Erlang code fragment returns `true':

Key = <<16#01,16#23,16#45,16#67,16#89,16#ab,16#cd,16#ef>>,
IVec = <<16#12,16#34,16#56,16#78,16#90,16#ab,16#cd,16#ef>>,
P = "Now is the time for all ",
C = crypto:des_cbc_encrypt(Key, IVec, P),
C == <<16#e5,16#c7,16#cd,16#de,16#87,16#2b,16#f2,16#7c,
       16#43,16#e9,16#34,16#00,16#8c,16#38,16#9c,16#0f,
       16#68,16#37,16#88,16#49,16#9a,16#7c,16#05,16#f6>>,
<<"Now is the time for all ">> ==
    crypto:des_cbc_decrypt(Key, IVec, C).

The following is true for the DES CBC mode. For all decompositions P1 ++ P2 = P of a plain text message P (where the length of all quantities are multiples of 8 bytes), the encryption C of P is equal to C1 ++ C2, where C1 is obtained by encrypting P1 with Key and the initializing vector IVec, and where C2 is obtained by encrypting P2 with Key and the initializing vector l(C1), where l(B) denotes the last 8 bytes of the binary B.

Similarly, for all decompositions C1 ++ C2 = C of a cipher text message C (where the length of all quantities are multiples of 8 bytes), the decryption P of C is equal to P1 ++ P2, where P1 is obtained by decrypting C1 with Key and the initializing vector IVec, and where P2 is obtained by decrypting C2 with Key and the initializing vector l(C1), where l(.) is as above.

For DES3 (which uses three 64 bit keys) the situation is the same.

crypto 1.5.1.1
Copyright © 1991-2008 Ericsson AB
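The decomposition property stated for CBC mode is purely a consequence of the chaining structure, so it can be checked with a toy sketch (my addition — a stand-in XOR "block cipher" replaces real DES, since DES itself is not in the Python standard library; the chaining logic is the same):

```python
BLOCK = 8  # DES block size in bytes

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # stand-in for the DES block function; any invertible 8-byte mapping works here
    return xor_bytes(key, block)

def cbc_encrypt(key: bytes, ivec: bytes, text: bytes) -> bytes:
    out, prev = b"", ivec
    for i in range(0, len(text), BLOCK):
        # each segment is XORed with the previous cipher segment before encryption
        prev = toy_block_encrypt(key, xor_bytes(prev, text[i:i + BLOCK]))
        out += prev
    return out

key, ivec = b"K" * 8, b"I" * 8
p1, p2 = b"A" * 16, b"B" * 8

c = cbc_encrypt(key, ivec, p1 + p2)      # encrypt P = P1 ++ P2 in one go
c1 = cbc_encrypt(key, ivec, p1)          # encrypt P1 with IVec
c2 = cbc_encrypt(key, c1[-BLOCK:], p2)   # encrypt P2 with l(C1) as the IV
print(c == c1 + c2)  # True
```

This is exactly the l(C1) rule from the text: the last 8 bytes of the first ciphertext serve as the initializing vector for the continuation.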
TSQl: Select Current Date - 6 Days and set time to 12:01 AM up vote 2 down vote favorite I am trying to get current date - 6 days. That is easy. Now I am trying to get current date - 6 days + 12:01 AM. So if today is 3-2-2012 11:14 AM. I want to get 2-25-2012 12:01 AM These 2 selects will give me current date - 6, but will not reset the time to 12:01 AM • select getdate()-6 • SELECT DATEADD(day, -6, CURRENT_TIMESTAMP); The solution you have chosen is the worst of the surgestions – t-clausen.dk Mar 5 '12 at 8:50 add comment 4 Answers active oldest votes Using the following will give you the result in a datetime format: SELECT CAST(Convert(varchar(10), DateAdd(d, -6, getdate()), 101) + ' 12:01 AM' as datetime) Result: 2012-02-25 00:01:00.000 Once you have the datetime that you want, you can convert it to many different formats. up vote 3 down vote accepted Or you can do the following which is in a varchar format: select Convert(varchar(10), DateAdd(d, -6, getdate()), 110) + ' 12:01 AM' which results in 02-25-2012 12:01 AM add comment One less conversion that @Phil Helmer's solution: SELECT DATEADD(DAY, DATEDIFF(DAY, '19000101', GETDATE()), '1899-12-26T12:01:00') Since some people are apparently unaware that everything "to the right" of the element specified in that DATEADD/DATEDIFF pair is effectively taken from the right-hand constant. Everything "to the left" (and including the actual element) can be used to achieve offsetting effects. up vote 3 (The above left/right are assuming that the entire datetime value is being interpreted with year to the left and milliseconds to the right, with all intermediate values in "size" order) down vote Edited - I've also updated my answer to subsume the -6 into the right-hand value. Its possible to create all kinds of offsetting by picking suitable values for the two constants. The relationship between the two datetime constants specified in the expression ought to be expressed, at least in a comment alongside the usage. 
In the above, I'm using 1/1/1900 as a base point, and computing the number of midnight transitions between then and now (as DATEDIFF always works). I'm then adding that number of days onto the point in time 6 days earlier (e.g. 26/12/1899) at exactly 00:01 in the morning.

That's exactly why I like the DADD solution (OP: sqlservercentral.com/articles/Date+Manipulation/69694), it's so flexible. It could be that after a certain point of "cleverness", a calendar table has some advantages, but if the offsets, rules, etc. are prone to change, the DADD comes to the fore again. BTW, it should also be noted that the date literal formats he uses here are always interpreted the same, regardless of collation/localization (see: social.msdn.microsoft.com/Forums/en/transactsql/thread/…) – Phil Helmer Mar 3 '12 at 3:36

[Another answer]

SELECT DATEADD(minute, 1, DATEADD(DAY, DATEDIFF(DAY, 0, GETDATE()) - 6, 0))

is equivalent to

SELECT DATEADD(minute, 1, DATEADD(DAY, DATEDIFF(DAY, '19000101', GETDATE()) - 6, '19000101'))

I think you will find this option faster and more flexible than the varchar implementations. By keeping the data types as they are, you don't have to worry about the vagaries of the cast/convert results. See Louis Davidson for one of the full explanations: http://sqlblog.com/blogs/louis_davidson/archive/2011/02/09/some-date-math-fun.aspx

+1 much cleaner than concatenating ugly strings. I'd probably avoid GETDATE() - 6 though just to avoid perpetuating that date shorthand, since this methodology doesn't work with the new date/time types introduced in SQL Server 2008. – Aaron Bertrand Mar 2 '12 at 17:07

The -6 is applied to the offset, which is already an int. That actually highlights the one major downside to this technique: too many parentheses.
– Phil Helmer Mar 2 '12 at 18:49

Yes, I understand that swapping in a DATE variable in place of GETDATE() won't break this specific code, I just think it's better practice to use explicit DATEADD for date math in general. It would probably be more clear what was going on if you used a static date instead of 0; I use the 0 as well but I'm trying to break myself of that habit. – Aaron Bertrand Mar 2 '12 at 19:05

I understand now. Yes, I debated whether to add a 3rd dateadd to the code and I'm in the same boat re: 0 vs. 19000101. – Phil Helmer Mar 2 '12 at 19:28

@PhilHelmer - I've added my cheeky take on your answer. You can exploit the differences between the two constants and achieve a lot with a single DATEADD/DATEDIFF pair. – Damien_The_Unbeliever Mar 2 '12 at 19:40

[Another answer]

SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE())) + '12:01'
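As an aside, the same "six days back, time pinned to 12:01 AM" computation is easy to sanity-check outside SQL Server. A minimal Python sketch (illustrative only; the function name is mine, not from any answer above):

```python
from datetime import datetime, timedelta, time

def six_days_ago_at_1201(now: datetime) -> datetime:
    """Drop the time-of-day from (now - 6 days) and pin it to 00:01 (12:01 AM)."""
    return datetime.combine((now - timedelta(days=6)).date(), time(hour=0, minute=1))

# The example from the question: 3-2-2012 11:14 AM -> 2-25-2012 12:01 AM
print(six_days_ago_at_1201(datetime(2012, 3, 2, 11, 14)))  # 2012-02-25 00:01:00
```

This mirrors the DATEADD/DATEDIFF idiom above: truncate to a day boundary first, then add the fixed time-of-day offset.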
San Gabriel Algebra Tutor Find a San Gabriel Algebra Tutor ...I have a good grasp of commonly tested topics, and I have an organized systems based approach of teaching that integrates concepts in both fields. I have previously tutored students in strategizing for college applications. Based on their GPA and SAT score, desired location, and financial budge... 43 Subjects: including algebra 2, algebra 1, English, reading ...After a few weeks of classes, she managed to get As for her tests in both maths and sciences. My teaching philosophy is centered around patient guidance and encouragement for my students. I always work with my students to find the best way to teach. 25 Subjects: including algebra 2, algebra 1, calculus, Chinese ...I was born in Israel, and I have been in Jewish Day School for all of my life. My father is a rabbi, and I speak fluent Hebrew. I often read Torah at local synagogues. 8 Subjects: including algebra 1, reading, English, prealgebra ...I have been working in the field of Video Production for many years, mainly focusing on filming and video editing. I have filmed and edited multiple Corporate Projects, Electronic Press Kits, Video Commercials and Online Video Projects. I am very versed in Final Cut Pro, Pro Tool and Adobe Creative Suite and other tools used in video editing. 3 Subjects: including algebra 1, Adobe Photoshop, video production ...That translation is a basic skill that doesn’t get much attention in math class. It’s crucial for word problems - and the SAT has lots of word problems. Without math-translation, the SAT may as well be written in Greek. 24 Subjects: including algebra 1, calculus, algebra 2, physics
Complex Number Representation

Hi All

I am having a go at a small program to draw a fractal, though I am not sure how to represent the imaginary part of the complex number. Most examples I have seen use a pair of doubles, though I can't see how they are representing i. The one I am looking at now is coded in C++ as

// Function that multiplies two complex numbers;
// the result is a modified object of the class (this)
void Multiply(Complex_Number comp_num)
{
    double temp_a;
    double temp_b;
    temp_a = this->a * comp_num.a;
    temp_a += (this->b * comp_num.b) * (-1);
    temp_b = this->a * comp_num.b;
    temp_b += this->b * comp_num.a;
    this->a = temp_a;
    this->b = temp_b;
}

then multiplying the b value by -1. I thought only i^2 was equal to -1. It's probably just me being thick, though if someone could explain how this is working that would be cool.

Can feel it coming together.. Slowly but Surely
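For what it's worth, the pair (a, b) stands in for a + b*i, and (a + b*i)(c + d*i) = (ac - bd) + (ad + bc)*i; the -1 in that code is exactly i*i = -1 surfacing when the two imaginary parts multiply. A quick Python sketch of the same update, cross-checked against the built-in complex type:

```python
def multiply(a, b, c, d):
    """(a + b*i) * (c + d*i) with the pair-of-doubles representation.

    Real part: a*c - b*d  (the -1 on b*d is i*i = -1).
    Imag part: a*d + b*c.
    """
    return (a * c - b * d, a * d + b * c)

# i * i = -1: the pair (0, 1) squared gives (-1, 0)
print(multiply(0, 1, 0, 1))  # (-1, 0)

# cross-check an arbitrary product against Python's native complex numbers
a, b, c, d = 1.5, 2.0, -0.5, 3.0
ra, rb = multiply(a, b, c, d)
assert complex(a, b) * complex(c, d) == complex(ra, rb)
```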
[Numpy-discussion] Scalar-ndarray arguments passed to not_equal
Friedrich Romstedt friedrichromstedt@gmail....
Thu Feb 11 14:21:23 CST 2010

Hi Keith,

2010/2/11 Keith Goodman <kwgoodman@gmail.com>:
> Is there some way to tell numpy to use my __eq__ instead of its own?
> That would solve my problem. I had a similar problem with __radd__
> which was solved by setting __array_priority__ = 10. But that doesn't
> work in this case.

It's quite simple, but hidden in the forest of documentation (though it mentions it, and quite in detail).

numpy.set_numeric_ops(equal = my_equal_callable_object)

Note that you should _not_ simply use a function:

def equal(a, b):

2010/2/11 Keith Goodman <kwgoodman@gmail.com>:
> I wish I knew enough to reply to your post. Then I could return the
> favor. You'll have to settle for a thank you.

Thank you.
Applications of the Ergodic Theorem for Markov Chains

November 22nd 2012, 06:04 AM #1 Apr 2007

I need help to solve these 2 exercises:

1) A professor has N umbrellas. He walks to the office in the morning and walks home in the evening. If it is raining he likes to carry an umbrella and if it is fine he does not. Suppose that it rains on each journey with probability p, independently of past weather. What is the long-run proportion of journeys on which the professor gets wet?

2) An opera singer is due to perform a long series of concerts. Having a fine artistic temperament, she is liable to pull out each night with probability 1/2. Once this has happened she will not sing again until the promoter convinces her of his high regard. This he does by sending flowers every day until she returns. Flowers costing x thousand pounds, 0 <= x <= 1, bring about a reconciliation with probability sqrt(x). The promoter stands to make £750 from each successful concert. How much should he spend on flowers?

These are taken from Markov Chains by J.R. Norris and should be solved using the ergodic theorem and the strong law of large numbers. I can't solve them. Thanks in advance!!!
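For exercise 1, a numerical sanity check is straightforward: take the chain whose state is the number of umbrellas at the professor's current location, compute its stationary distribution, and read off the long-run wet proportion p·π₀. The sketch below is in Python; the closed form p(1-p)/(N+1-p) it compares against is the standard answer to this exercise, stated here as an assumption to be verified rather than derived:

```python
def wet_proportion(N, p, iters=20000):
    """Long-run proportion of journeys on which the professor gets wet.

    State i = number of umbrellas at his current location (0..N).
    From state 0 he walks unprotected and arrives where all N umbrellas are.
    From state i >= 1: with prob p it rains and he carries one (arriving in
    state N-i+1); with prob 1-p he leaves it behind (arriving in state N-i).
    """
    # row-stochastic transition matrix
    P = [[0.0] * (N + 1) for _ in range(N + 1)]
    P[0][N] = 1.0
    for i in range(1, N + 1):
        P[i][N - i + 1] += p
        P[i][N - i] += 1.0 - p
    # stationary distribution via power iteration (chain is irreducible, aperiodic)
    pi = [1.0 / (N + 1)] * (N + 1)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(N + 1)) for j in range(N + 1)]
    # he gets wet only when no umbrella is at hand AND it rains
    return p * pi[0]

print(wet_proportion(3, 0.4))  # about 0.0667, matching p*(1-p)/(N+1-p)
```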
• The read function reads input from a string, which must be completely consumed by the input process.
• The readFile function reads a file and returns the contents of the file as a string. The file is read lazily, on demand, as with getContents.
• The readIO function is similar to read except that it signals parse failure to the IO monad instead of terminating the program.
• readParen True p parses what p parses, but surrounded with parentheses.
• readParen False p parses what p parses, but optionally surrounded with parentheses.
• Equivalent to readsPrec with a precedence of 0.
• Read the next value from the Chan.
• Lookup a constructor via a string.
• Read an unsigned number in decimal notation.
• Reads an unsigned RealFrac value, expressed in decimal scientific notation.
• Read an unsigned number in hexadecimal notation. Both upper or lower case letters are allowed.
• Reads an unsigned Integral value in an arbitrary base.
• Read the value of an IORef.
• A possible replacement definition for the readList method (GHC only). This is only needed for GHC, and even then only for Read instances readListPrecDefault.
Hillside, IL Algebra 2 Tutor Find a Hillside, IL Algebra 2 Tutor ...I can also help students who are preparing for the math portion of the SAT or ACT. When teaching lessons, I put the material into a context that the student can understand. My goal is to help all of my students obtain a solid conceptual understanding of the subject they are studying, which provides a foundation to build upon. 12 Subjects: including algebra 2, calculus, geometry, algebra 1 ...I would like to help teach GED to someone in need of it. My mother herself had to go through the entire process of taking the GED courses and finally taking the GED examination, and along the way, I helped tutor her in it. In the end, she was able to pass the GED exam with a 93% and obtain her GED diploma. 16 Subjects: including algebra 2, chemistry, English, geometry ...Rhetorical questions can be tricky, and even subjective. I've developed an approach to these types of questions that make them considerably easier for students to tackle. I truly enjoy helping students to master the Reading portion of the ACT test. 20 Subjects: including algebra 2, reading, English, writing ...I have volunteered for 2 years at an elementary school where I provide one-on-one tutoring for the kids that were falling behind. I have 100% success rate with all of the kids that I have helped. They all either caught up with the rest of their class, and some were even able to get ahead. 11 Subjects: including algebra 2, geometry, biology, precalculus ...I have my master's degree in mechanical engineering. Physics, chemistry, anatomy, and biology were all part of my curriculum to get where I am today. Overall, I received a 30 on my ACTs, with a 27 in science. 
20 Subjects: including algebra 2, physics, statistics, calculus
Something to munch on while I take a long, succulent post out of the procrastination oven I’m convinced that the following diagram means something precise: My question is, what does it mean? Intuitively, it means that if your software package can solve SDP’s, then you can easily use it to solve LP’s; if it can solve LP’s, you can easily use it to invert matrices, and so on, but not vice versa. But it can’t mean (for example) that SDP’s are harder than LP’s in the usual complexity theory sense, since both problems are P-complete! Maybe it means that, if your axiom system is strong enough to prove SDP is in P, then it’s also strong enough to prove LP is in P, and so on — but not necessarily vice versa. But how would we show such a separation? (Sorry, no money this time. We’ll see if it makes any difference — I’m guessing that it doesn’t.) Anonymous Says: Comment #1 July 11th, 2006 at 6:12 pm did you make this diagram up yourself? just wondering.. Scott Says: Comment #2 July 11th, 2006 at 6:16 pm Eldar Says: Comment #3 July 11th, 2006 at 8:48 pm Well, perhaps diagrams like that can motivate the study of NC0 reductions, i.e. where every bit in the converted input depends on a bounded number of bits in the original (I’m not claiming that they exist here). Especially that NC0 is making quite an appearance (reappearance?) in relatively recent works (e.g. NC0 cryptography). Anonymous Says: Comment #4 July 11th, 2006 at 10:56 pm If you indeed think about all of these things (including SVD and matrix inversion) as convex programs, then one obvious meaning of the diagram is in terms of allowable syntax–i.e. the type of constraints you are allowed. (Even though, in principle, ML and x86 assembly language are both universal, I think people prefer high-level languages for most coding tasks.) Practically, we can solve LPs much, much faster than SDPs, and we can do matrix inversion and SVD faster than solving a generic LP. 
We are talking two orders of magnitude in the size of a program that can be feasibly solved (LP vs. SDP). But for your aesthetic, it's probably best not to allow arbitrary P-time or log-space reductions; if you restrict your effort to syntactic translations, then you will get an actual hierarchy. Greg Kuperberg Says: Comment #5 July 12th, 2006 at 12:17 am NC0 is a little drastic. It seems possible that this diagram of problems becomes an inclusion diagram for complexity classes with respect to L reduction, say. Maybe even NC or SC. Anonymous Says: Comment #6 July 12th, 2006 at 12:03 pm Making the edges directed would have made it much clearer….. John Sidles Says: Comment #7 July 12th, 2006 at 12:45 pm Just to bring both a note of practicality and an extra intellectual dimension into the discussion (and speaking as someone who presently devotes about 3 GFlops to SVDs, all day, every day), the Mathematica implementation of SVD is pretty seriously broken, in the sense that Mathematica's SingularValueDecomposition[] sporadically fails, returning a grossly incorrect result without generating an error message. Needless to say, from a purely algorithmic point of view, this should never happen. So why does it happen? The link with complexity theory has to do with social complexity … Wolfram Research has known and acknowledged their SVD problem for almost a year, but are disinclined to fix it (it being unprofitable to do so). This raises the practical question: as a citizen of a Jeffersonian democracy, should I meekly submit to market forces, or should I seek to obtain the four freedoms of Free-as-in-Freedom software, and therefore, abandon Mathematica for less-convenient free software packages? Up until recently, I would have considered this as a dilemma in economics and/or moral philosophy. And pragmatically, I am in the "meekly submitting" category … although I grit my teeth daily at the ugliness of the resulting code circumlocutions!
But nowadays, and increasingly, it seems that “sociocomplexity” (which is broadly analogous to sociobiology) is emerging the most creative intellectual venue for analyzing these problems. The economist Amartya Sen is a leader in considering these issues. A recent essay of Sen’s ends with these words: As Buddha said, “there are some questions that can be asked of which there are no answer” and while I’ve given several answers, they are not final answers, and I very much hope to have more discussion on these topics. Am I the only one who sees, narrowly in dealing with bugs in SingularValueDecomposition[], broadly in Sen’s article, an emerging challenge in complexity theory? Anonymous Says: Comment #8 July 12th, 2006 at 7:00 pm John Sidles said: The link with complexity theory has to do with social complexity snip Are you practicing to do a Feynman? or maybe the other way — publish a social paper in a scientific journal. It certainly seems that you are practicing in their gobbling writing style. Being neither here nor there, being untrue to both fields. Anonymous Says: Comment #9 July 12th, 2006 at 9:12 pm John Sidles, I mean this in the least offensive way possible, but maybe you should consider getting your own blog? You appear to have much to say, and quite frequently it’s only vaguely, or not at all, related to the posting by Scott that you’re commenting on. Just an idea. John Sidles Says: Comment #10 July 13th, 2006 at 8:57 am I’ll hold it down to one comment per week … with links instead of prose! E.g., your post called to mind the mathematician John Lighton Synge’s preface to his (very hard to find) 1957 mathematical fantasy Kandelman’s Krim. Paul Beame Says: Comment #11 July 14th, 2006 at 9:40 am I agree with other posters that a sharper notion of reduction is likely useful to separate SDP from LP. However, isn’t there another distinction between the problems? 
In the case of LP with integral (or rational) inputs, isn’t it the case that the polytime algorithms can be modified to produce an exact rational expression for the optimum? In the case of SDP with the same inputs all we have is an approximation scheme with a running time that is polynomial in log (1/epsilon). Now if one wants a binary/decimal expression of the optimum then the two produce the same quality of answer but I don’t know of an efficient SDP algorithm that produces a closed form expression. (Obviously it is not rational in the SDP case but any closed-form expression, say radicals, would do.)
Superposition Solution for Isc

Superposition analysis to find the current through the short means we solve several simpler circuits (one for each source). Since there are three sources, we have three simpler circuits to solve. In each of these circuits, all sources but one have been deactivated (voltage sources are shorted, current sources are opened). Our total answer is the sum of the answers we get from each of the simpler circuits:

Isc = Isc15 + Isc12 + Isc20

First, let's combine the 8k, 3k, and 1k in series: R = 8k + 3k + 1k = 12k, and also drop the 5k, since no current can flow through it. The 12k is shorted out by the horizontal wire, so it can be removed. Now we can see that the current through the 6k ohm resistor will be the current Isc. Thus,

Isc15 = 15V/6k = 2.5mA

That's not the final answer though! We have to find the other two parts of Isc. The 5k ohm is again left hanging, so we can remove it. The 8k, 3k, and 1k can be combined in series (12k). The 4k ohm is shorted out, so it can be removed.

Isc12 = -12V/6k = -2mA

We can remove the 4k and 6k since they are both shorted out. We can also combine the 8k and 3k in series (11k).

Isc20 = 20mA

Now we add the three sub-answers together to get our final answer:

Isc = Isc15 + Isc12 + Isc20 = 2.5mA - 2mA + 20mA = 20.5 mA

© Calvin College, 2000
This page was written and is maintained by Steve VanderLeest. It was last modified on 1 Mar 2000.
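The final tally above is simple enough to verify mechanically; a throwaway check, with the values copied from this page:

```python
# Short-circuit current contribution of each source acting alone.
Isc15 = 15 / 6e3      # 15 V source alone: 15 V across the 6k ohm -> 2.5 mA
Isc12 = -12 / 6e3     # 12 V source alone: -2 mA through the short
Isc20 = 20e-3         # 20 mA source alone: all of it flows through the short
Isc_mA = (Isc15 + Isc12 + Isc20) * 1e3

print(round(Isc_mA, 3))  # 20.5
```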
No non-trivial homomorphism to a group

Here is a question I posted some months ago in Math.SE, and t.b. mentioned the following question by Florent MARTIN which is somehow related to my question:

Let $G$ be a compact Hausdorff topological group, and let $H$ be a torsion-free group satisfying the ascending condition, i.e. there are no infinite strictly ascending chains $H_1 < H_2 < ...$ of subgroups of $H.$ Prove that there is no non-trivial homomorphism of $G$ into $H.$

Note that no topology is considered on $H$ and "homomorphism" simply means "group homomorphism."

(tags: gr.group-theory, topological-groups)

If $G$ is finite, it's false. So I guess you should take $G$ infinite. Do you want to show that there is no injective homomorphism $G \to H$? Because then you just need to show that $G$ fails to have the ascending condition. Hint: $G$ is uncountable. If that isn't what you mean, then take $G$ profinite and let $H$ be a finite quotient. Or am I missing something? – Richard Kent Dec 20 '11 at 1:22

Doesn't the answer to Florent's question answer yours? The acc implies finitely generated and by the answer to that question all fg images are finite. – Benjamin Steinberg Dec 20 '11 at 1:23

Richard, he said H is torsion-free so in particular not finite. – Benjamin Steinberg Dec 20 '11 at 1:24

Oh, thanks. Missed it. – Richard Kent Dec 20 '11 at 3:16

[Accepted answer]

There are no non-trivial homomorphisms from a compact group to a torsion-free finitely generated group by the theorem of Nikolov and Segal quoted in the answer by Andreas to Florent's question (mentioned by the OP above). Since the ascending chain condition on subgroups implies finite generation, this answers the question.

Nb. Please upvote Andreas's answer if you like this one.

Hmmm. BTW is there a more down to earth approach? – Ehsan M. Kermani Dec 20 '11 at 3:03

Most likely not.
– Benjamin Steinberg Dec 20 '11 at 3:30
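To spell out the reduction in the answer above in three steps (only the middle step is the imported theorem):

Let $\varphi : G \to H$ be a homomorphism and put $K := \varphi(G) \le H$. The ascending chain condition on subgroups of $H$ forces every subgroup of $H$, in particular $K$, to be finitely generated. By the Nikolov–Segal theorem as quoted in Andreas's answer, any abstract homomorphism from a compact Hausdorff group to a finitely generated group has finite image, so $K$ is finite. Finally, $H$ is torsion-free, so its only finite subgroup is trivial; hence $K = \{1\}$ and $\varphi$ is trivial.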
This data set contains the investigator-selected Key Parameters from the Magnetic Field instrument (MGF) on Geotail, as made available via NSSDC's interactive WWW interface CDAWeb. Selected references: Kokubun, S., et al., Magnetic field measurement (MGF), In Geotail Prelaunch Report, SES-TD-92-007SY, Institute of Space and Astronautical Science, SES Data Center, pp. 58-70, Apr. 1992. Nishida, A., et al., Geotail mission to explore earth's magnetotail, EOS, 73, No. 40, Oct. 1992. Kokubun, S., et al., The GEOTAIL Magnetic Field Experiment, J. Geomag. Geoelectr., 46, No. 1, 7-21,
Devon Precalculus Tutor Find a Devon Precalculus Tutor ...I am currently solidifying my Attic Greek. With this knowledge, and with a good knowledge of many of the texts that come out of Ancient Greece (Homer, the Tragedians, Plato, Aristotle, and philosophy and math generally), I could proficiently tutor introductory students in Ancient Greek. I have ... 26 Subjects: including precalculus, English, writing, reading ...Hello Students! If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don’t despair! 14 Subjects: including precalculus, physics, calculus, geometry ...However, I will help them with every step, offering tips and critiques based on the success of hundreds of students with whom I have worked. My main subjects have been upper levels of high school math and science, including calculus, pre-calculus (college algebra), trigonometry, chemistry and b... 32 Subjects: including precalculus, chemistry, English, biology ...I have experience with the uses of linear algebra and matrices. I have experience dealing with row reduction, multiplication of matrices. I have a bachelor's degree in mathematics and took a symbolic logic course in college passing with an A. 13 Subjects: including precalculus, calculus, geometry, GRE ...I have studied Bible in depth, with Hebrew commentaries, and study Talmud and Zohar regularly. I have worked for 30+ years in schools and colleges as a teacher, tutor and school principal. I have worked directly with ADD/ADHD students as a teacher, and supervised and supported other teachers in working with such students. 35 Subjects: including precalculus, statistics, geometry, GED
Composite function by definition

April 27th 2010, 10:33 AM #1 Sep 2009

I found this expression on Wikipedia: If $f: X \rightarrow Y$, then $f \circ id_X = id_Y \circ f$, where $id_X$ and $id_Y$ are identity functions. I wanted to see whether it's true. So I rewrote the expression using the definition of a composite function:

$(g \circ f)(x)=g(f(x))=g(y)=z$

for $f:X \rightarrow Y$, $g: Y \rightarrow Z$, $x \in X, y \in Y$, and $z \in Z$.

I replaced $f$ with $g$ in $f \circ id_X$ and replaced $id_X$ with $f(x)=x$, where $x \in X$. So I wrote the expression for the LHS. Let $f:X\rightarrow X$ and $g:X\rightarrow Y$, where $x\in X$ and $y \in Y$:

$g\circ id = (g\circ f)(x)=g(f(x))=g(x)=y=g,$

but now my difficulty is that I don't know how to express the RHS:

$id_Y \circ g = ?$

I can't find any information anywhere on the web proving $f \circ id_X = id_Y \circ f$. Could someone please show me how to write the expression for the RHS?

April 27th 2010, 11:03 AM #2

Use $i_X~\&~i_Y$ for the identity functions. Then by definition, if $(p,q)\in f\circ i_X$ then $\left( {\exists r} \right)\left[ {(p,r) \in i_X \wedge (r,q) \in f} \right].$ But $(p,r) \in i_X$ implies $p=r$. So $(p,q) \in f$. If $(p,q)\in i_Y\circ f$ then $\left( {\exists s} \right)\left[ {(p,s) \in f \wedge (s,q) \in i_Y} \right]$. $(s,q) \in i_Y$ implies that $s=q$, so $(p,q)\in f$. Does that help?

April 27th 2010, 11:14 AM #3 Sep 2009

That is so clever.
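Plato's argument treats functions as sets of pairs; the same identity can also be checked pointwise, which may be closer to how the original poster was trying to compute it:

For every $x \in X$, $(f \circ id_X)(x) = f(id_X(x)) = f(x)$ and $(id_Y \circ f)(x) = id_Y(f(x)) = f(x)$, so both composites agree with $f$ at every point, and hence $f \circ id_X = f = id_Y \circ f$ as functions $X \rightarrow Y$.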
Math Help

What have you tried? Can you use L'Hopital's rule?

-Dan

If you're allowed to use L'Hospital's Rule, use it. If not, rewrite it as

\displaystyle \begin{align*} \frac{\tan{(x)} - \sin{(x)}}{x^3} &= \frac{\tan{(x)}}{x^3} - \frac{\sin{(x)}}{x^3} \\ &= \frac{\sin{(x)}}{x^3\cos{(x)}} - \frac{\sin{(x)}}{x^3} \\ &= \frac{\sin{(x)}}{x} \cdot \frac{1}{\cos{(x)}} \cdot \frac{1}{x^2} - \frac{\sin{(x)}}{x} \cdot \frac{1}{x^2} \end{align*}

Go from there, using the well-known limit \displaystyle \begin{align*} \lim_{x \to 0} \frac{\sin{(x)}}{x} = 1 \end{align*}.

Last edited by votan; October 1st 2013 at 05:14 AM.
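For the record, one standard way to finish (not taken from the thread itself) is to combine the two terms before taking the limit, using the companion limit $\lim_{x \to 0} \frac{1 - \cos{(x)}}{x^2} = \frac{1}{2}$:

\displaystyle \begin{align*} \lim_{x \to 0} \frac{\tan{(x)} - \sin{(x)}}{x^3} &= \lim_{x \to 0} \frac{\sin{(x)}\left(1 - \cos{(x)}\right)}{x^3 \cos{(x)}} \\ &= \lim_{x \to 0} \frac{\sin{(x)}}{x} \cdot \frac{1 - \cos{(x)}}{x^2} \cdot \frac{1}{\cos{(x)}} = 1 \cdot \frac{1}{2} \cdot 1 = \frac{1}{2} \end{align*}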
{"url":"http://mathhelpforum.com/calculus/222445-limit.html","timestamp":"2014-04-16T07:51:48Z","content_type":null,"content_length":"61216","record_id":"<urn:uuid:15c09d3c-814f-423e-b044-d2b0e8a1945b>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Marana Algebra 2 Tutor Find a Marana Algebra 2 Tutor ...I hope to hear from you soon!I have tutored students in this subject for the last seven years in a classroom setting. I feel comfortable tutoring this subject. I am qualified to tutor in study skills because I was a tutor in a school for seven years where one of my main jobs was to keep the students organized and on task. 23 Subjects: including algebra 2, chemistry, geometry, biology ...I have a Masters Degree in Math and Math Education. My specialties are Algebra and Geometry. I am also proficient in Calculus. 13 Subjects: including algebra 2, calculus, geometry, statistics ...My experience ranges from delivering lectures and helping students understand important scientific concepts to working with students on a one-on-one basis. Many of the students I have worked with are non-science majors and finding a way to explain difficult scientific concepts to individual stud... 35 Subjects: including algebra 2, chemistry, physics, calculus ...I endeavor to help my students approach mathematical concepts in an intuitive way by relating new lessons to previously acquired skills. I also teach problem solving strategies and support general methods with illustrative examples. I keep students engaged by asking open ended questions and frequently giving them an opportunity to ask questions as well. 10 Subjects: including algebra 2, reading, writing, geometry ...I then earned a MEd with a major in Secondary Education and minor in mathematics. I finally earned a Masters Degree in Computer Science.Algebra is a first class in algebra that usually follows a pre-algebra class. It is important to understand the basic concepts of algebra before continuing to Algebra II. 7 Subjects: including algebra 2, geometry, algebra 1, precalculus
{"url":"http://www.purplemath.com/Marana_Algebra_2_tutors.php","timestamp":"2014-04-19T07:02:09Z","content_type":null,"content_length":"23684","record_id":"<urn:uuid:6fad959f-3ce6-4afa-84aa-0e148989f819>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 811.11014
Autor: Erdös, Paul; Sárközy, A.; Sós, T.
Title: On sum sets of Sidon sets. I. (In English)
Source: J. Number Theory 47, No.3, 329-347 (1994).
Review: For a finite or infinite set $A\subseteq \mathbb{N} = \{1,2,\dots\}$ let $A(n) = |A\cap [1,n]|$ and $2A = \{a+a' \mid a,a' \in A\}$. $A$ is called a Sidon set if all sums $a+a' \in 2A$, $a \leq a'$, are distinct. Sum sets $2A$ of Sidon sets $A$ cannot consist of "few" generalized arithmetic progressions of the same difference. To be more precise, let $B_d = \{a \in 2A \mid a-d \notin 2A\}$ for $d \in \mathbb{N}$. There are absolute constants $c_1, c_2 > 0$ such that for all $d \in \mathbb{N}$ we have $|B_d| > c_1 |A|^2$ if $A$ is a finite Sidon set and (*) $\limsup_{N\to\infty} B_d(N)\,(A(N))^{-2} > c_2$ if $A$ is an infinite Sidon set. For the proof in the case of infinite $A$, the generating function $f(z) = \sum_{a \in A} z^a$, where $z = e^{-1/N} e^{2\pi i\alpha}$ for large $N \in \mathbb{N}$ and real $\alpha$, is considered. Assuming the contrary of the proposition, ingenious estimates of $I := \int_0^1 |(1-z^d) f^2(z)|^2 \, d\alpha$ lead to contradicting lower and upper bounds for $I$. By example it is shown that $(A(N))^{-2}$ in (*) cannot be replaced by $(A(N))^{-2} \log^{-1} N$. While these results in the case $d = 1$ deal with blocks of consecutive elements in $2A$ for Sidon sets $A$, the next theorems give information about gaps between consecutive elements of $2A$. Let $2A = \{s_1, s_2, \dots\}$, $s_1 < s_2 < \cdots$. For $n \in \mathbb{N}$, $n > n_0$, there exists a Sidon set $A\subseteq \{1,2,\dots,n\}$ such that $s_{i+1}-s_i < 3\sqrt{n}$ for all $s_{i+1} \in 2A \setminus \{s_1\}$. The prime number theorem is used for constructing such sets $A$. For infinite Sidon sets the probabilistic method of Erdős and Rényi is adapted to prove the following result: For $\epsilon > 0$ there is a Sidon set $A$ such that $s_{i+1}-s_i < \sqrt{s_i}\,(\log s_i)^{(3/2)+\epsilon}$ for all $i > i_0(\epsilon)$ and $s_i \in 2A$. Also given are lower estimates for $s_{i+1}-s_i$.
A catalog of unsolved problems concerning Sidon sets and $B_2[g]$ sets closes this part I.
Reviewer: J.Zöllner (Mainz)
Classif.: * 11B13 Additive bases
Keywords: additive bases; $B_2$-sequences; sum sets of Sidon sets; infinite Sidon sets
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
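The defining property of a Sidon set (all pairwise sums $a+a'$ with $a \leq a'$ distinct) is easy to experiment with. A rough illustration (my own, not part of the review) using the greedy construction, which reproduces the start of the Mian-Chowla sequence:

```python
def greedy_sidon(limit):
    """Greedily build a Sidon set inside {1,...,limit}: keep n only
    if every new pairwise sum n+a (and the double 2n) is unused."""
    A, sums = [], set()
    for n in range(1, limit + 1):
        new = {n + a for a in A} | {2 * n}
        if sums.isdisjoint(new):
            A.append(n)
            sums |= new
    return A

A = greedy_sidon(30)
print(A)  # [1, 2, 4, 8, 13, 21] -- the start of the Mian-Chowla sequence

# Verify the Sidon property: all sums a + a' with a <= a' are distinct.
pair_sums = [A[i] + A[j] for i in range(len(A)) for j in range(i, len(A))]
assert len(pair_sums) == len(set(pair_sums))
```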
{"url":"http://www.emis.de/classics/Erdos/cit/81111014.htm","timestamp":"2014-04-20T18:50:15Z","content_type":null,"content_length":"6559","record_id":"<urn:uuid:fb9a4d31-6f08-4839-b437-5bb3d9a98692>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Surface Areas of Complex Shapes - Mathematics Surface Areas of Complex Shapes This chapter explores surface areas of complex shapes. The objectives of this chapter are to be able to calculate the surface areas of pyramids, cones and spheres, and of simple combinations of these solids. No prior knowledge is required for this chapter. Finding the surface area of a pyramid Suppose we wanted to find the surface area of the following square-based pyramid. First we find the area of one of the triangular faces; it may be useful to draw out a separate triangle as shown below; …for the triangle above we know that; That means that the surface area is… Finding the radius of a cone from its net In this section we shall explore how to find the radius of a cone from its net. Below is the net of a cone. It is the major sector of a circle of radius 8. The major arc length is 31.4 cm. First we draw a sketch of the cone to help work out the radius of the base of the cone, as shown below. We know that 31.4 is the circumference of the base of the cone. Next we work out the radius: from 2πr = 31.4 we get r = 31.4 ÷ 2π = 5 cm. Finding the surface area of a cone using π r l This section explores finding the surface area of a cone using π r l. Below is a cone. It has a height of 4 cm and a radius of 3 cm. Suppose we wanted to find the surface area of the cone. We know that the curved surface area is given by π r l, where r is the radius of the base and l is the slant height. First we need to find the value of l; we do this using Pythagoras' theorem. It is found to be 5 cm, as shown below. Now we know that l = 5 and r = 3, so we can update the diagram as shown below; …now we can use the surface area formula; …if we include the base; Finding the net and surface area of a frustum This section explores finding the net and surface area of a frustum. The shape below is a frustum; it is shaped like a lampshade.
Below is how the net looks for the shape above. The net is horseshoe-shaped: it is the sector of a large circle minus the sector of a small circle. Next we shall work out the area of the net. Below is the shape again with a small cone added on top of the frustum to make a bigger cone. We need to find the height of this new large cone. We can see that the radius goes from 2 to 4 across the height of 5 cm, so it doubles. That must mean that the height of the large cone is 10 cm. First we find the surface area of the small cone. We know that the slant length of the small cone is the square root of 29, as shown below. Next we find the curved area of the small cone, as shown below; now we can also find the area of the large cone. Finally we can find the surface area of the frustum. Finding the surface area of a sphere This section offers a quick summary of finding the surface area of a sphere. Below is a sphere. The formula for the surface area of a sphere is closely related to the formula for the area of a circle; the formula is 4πr².
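The worked figures in this chapter (the cone with r = 3 and h = 4, and the 4-to-2 frustum of height 5) can be reproduced in a few lines. A sketch using Python's math module; the function names are my own:

```python
import math

def cone_lateral(r, h):
    """Curved surface area pi*r*l, where l is the slant height."""
    l = math.hypot(r, h)   # Pythagoras: l = sqrt(r^2 + h^2)
    return math.pi * r * l

# Radius of the cone from its net: the arc length 31.4 cm is the base
# circumference 2*pi*r, so r = 31.4 / (2*pi), approximately 5.
assert round(31.4 / (2 * math.pi)) == 5

# Cone example: r = 3, h = 4 gives l = 5 and lateral area 15*pi;
# adding the base pi*r^2 gives 24*pi in total.
assert math.isclose(cone_lateral(3, 4), 15 * math.pi)
assert math.isclose(cone_lateral(3, 4) + math.pi * 3**2, 24 * math.pi)

# Frustum example: radii 4 (bottom) and 2 (top), height 5.
# Completing the cone gives a large cone of height 10, so the
# frustum's curved area is big cone minus small cone: 6*pi*sqrt(29).
big = cone_lateral(4, 10)    # 8*pi*sqrt(29)
small = cone_lateral(2, 5)   # 2*pi*sqrt(29)
assert math.isclose(big - small, 6 * math.pi * math.sqrt(29))

# Sphere of radius r: surface area 4*pi*r^2.
def sphere_area(r):
    return 4 * math.pi * r**2
```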
{"url":"http://mathematicsi.com/surface-areas-of-complex-shapes/","timestamp":"2014-04-20T23:31:47Z","content_type":null,"content_length":"68793","record_id":"<urn:uuid:2071c254-6749-4cf7-b3ed-3d23e7b516da>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Pico Heights, CA Math Tutor Find a Pico Heights, CA Math Tutor ...My approach to chemistry involves two techniques, which I describe below. When approaching problem-solving chemistry, you need to put pencil to paper and try different approaches. Often, it's not use to attempt to figure out the best approach in advance; you're better off trying several ways immediately, and seeing what you get. 17 Subjects: including algebra 1, linear algebra, algebra 2, calculus ...I've tutored approximately 30 students from various backgrounds, academic standings, and proclivities. All of my AP US students have all received at least a 4 on the exam. My track record is similar on the AP Lit and AP language exams. 20 Subjects: including algebra 2, Spanish, algebra 1, reading ...I have taken college courses in early childhood development and have given tutoring to children at Hancock Park Elementary School. I currently tutor fifth grade, second grade, fourth and 3rd grade students and can supply references. In college, I studied the mechanics of speech writing and have... 31 Subjects: including algebra 1, algebra 2, geometry, SAT math ...To pay my way through college I trained at the American Institute of Massage Therapy and graduated valedictorian with over 1,100 hours in anatomy, physiology and hands-on training. I became licensed and practiced massage in Colorado and California for many years. I've trained extensively as a reading, math, and science tutor at this level. 17 Subjects: including prealgebra, reading, English, writing ...Upon completion of my medical education I entered into a surgical residency. After completing a year of residency I decided that I no longer was interested in pursuing a career in surgery, and so I decided to enter another field of medicine that I am currently applying for. In the interim of my... 
18 Subjects: including calculus, bodybuilding, fitness, algebra 1
{"url":"http://www.purplemath.com/pico_heights_ca_math_tutors.php","timestamp":"2014-04-21T15:03:33Z","content_type":null,"content_length":"24173","record_id":"<urn:uuid:cbbcf3e3-7f25-4bf2-b1cf-7f1b7874095b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
About "MathNotations" Description: Math investigations, challenges, problems of the day, and standardized test practice that emphasize the development of conceptual understanding in mathematics. Marain's blog, which dates back December of 2006, also features dialogue on issues in mathematics education with a focus on standards, assessment, and pedagogy primarily at the 7-12 level through AP Calculus. Posts have included "Thoughts on Stay-at-Home Moms vs Preschool," "Reply to Euclidean geometry/Proof Question on Math," "A Special Case of the Random Triangle Problem," "Singapore Math Part III (Info from the 'Source')," "Multiple Representations (Rule of 4) in Algebra 2," "A Geometry Tribute to Cinco de Mayo," "Thank You, Prof. Escalante," "Pi-Squared over 6: The Algebraic Genius of Euler," "Using WarmUps in Middle School/HS to Develop Thinking and Review/Apply Skills," and "How Much Factoring in 1st Year Algebra?" Marain recently retired after 30 years in math education and supervision, having served as an Advanced Placement Calculus (BC) teacher, K-5 Chair of New Jersey Math Content Standards and Curriculum Frameworks, and member of Math Item Review Committee for New Jersey High School Proficiency Assessment.
{"url":"http://mathforum.org/library/view/74897.html","timestamp":"2014-04-17T04:38:22Z","content_type":null,"content_length":"6826","record_id":"<urn:uuid:9bf7f9e0-9740-4bf8-afd5-6bba1061c0b6>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Lake In The Hills Algebra 2 Tutor Find a Lake In The Hills Algebra 2 Tutor ...I graduated from the University of California, San Diego with a degree in Biochemistry in 2012. Since graduation, I've worked as a vision therapist for children ages 6-18, combining my love of optometry with one-on-one tutoring. I believe that students are able to learn anything with the right instruction and I know my passion for science and math will contribute greatly to this 25 Subjects: including algebra 2, chemistry, calculus, physics ...Finally, I have been using statistical concepts ever since, from teaching college-level physics to building courses for professional auditors. I was a student of standardized tests, achieving the highest possible score on the math section of the ACT, as well as high scores on the equivalent sect... 13 Subjects: including algebra 2, calculus, statistics, geometry ...I have been using MS word for 10 years now for work on a daily basis. I am very familiar with the MS product suite including Word, Powerpoint, Excel, Visio, One Note, MS project. I have an undergraduate degree in mechanical engineering. 29 Subjects: including algebra 2, reading, physics, English ...I am a resident of Roselle with my wife and newborn. I am a former mathematics teacher (and Mathletes Coach) and currently working as an Actuarial Analyst in downtown Chicago. I have over 15 years of tutoring experience. 10 Subjects: including algebra 2, calculus, geometry, statistics I currently teach algebra and geometry at Mchenry County College. Additionally, I tutor in the tutoring center, working with students in those subjects as well as statistics, calculus, finite math, fundamentals of math which includes logic, and math for elementary ed teachers. On my days off, I substitute teach at the local high schools. 10 Subjects: including algebra 2, calculus, statistics, geometry
{"url":"http://www.purplemath.com/lake_in_the_hills_algebra_2_tutors.php","timestamp":"2014-04-20T06:54:00Z","content_type":null,"content_length":"24430","record_id":"<urn:uuid:ed6f6c39-8c0c-4c22-961a-7f024cb06500>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
Pythagoras on a Sphere Copyright © University of Cambridge. All rights reserved. 'Pythagoras on a Sphere' printed from http://nrich.maths.org/ You only need elementary trigonometry and scalar products. Given any right-angled triangle $\Delta ABC$ on a sphere of unit radius, right angled at $A$, and with lengths of sides $a, b$ and $c$, then Pythagoras' Theorem in Spherical Geometry is $$\cos a = \cos b \cos c.$$ Prove this result. Find a triangle containing three right angles on the surface of a sphere of unit radius. What are the lengths of the sides of your triangle? Use the Pythagoras' Theorem result above to prove that all spherical triangles with three right angles on the unit sphere are congruent to the one you found. To find out more about Spherical Geometry read the article 'When the Angles of a Triangle Don't Add Up to 180 Degrees'.
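A quick numerical check of the identity $\cos a = \cos b \cos c$ (the coordinate construction is my own, not part of the NRICH problem): place the right angle at $A = (1,0,0)$ with one leg along the equator and the other along a meridian.

```python
import math

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

# Right-angled spherical triangle on the unit sphere:
# A at (1,0,0); B a distance c along the equator (xy-plane);
# C a distance b along the meridian in the xz-plane.
b, c = 0.7, 1.1
B = (math.cos(c), math.sin(c), 0.0)
C = (math.cos(b), 0.0, math.sin(b))

# The arcs AB and AC leave A in the directions (0,1,0) and (0,0,1),
# whose dot product is 0, so the angle at A is pi/2.
assert dot((0, 1, 0), (0, 0, 1)) == 0

# The side a is the angle between the unit vectors B and C.
a = math.acos(dot(B, C))

# Spherical Pythagoras: cos a = cos b cos c.
assert math.isclose(math.cos(a), math.cos(b) * math.cos(c))

# The octant triangle with three right angles has every side pi/2.
assert math.isclose(math.acos(dot((0, 1, 0), (0, 0, 1))), math.pi / 2)
```

For the octant triangle the identity reads $\cos(\pi/2) = \cos(\pi/2)\cos(\pi/2)$, i.e. $0 = 0$, as expected.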
{"url":"http://nrich.maths.org/5616/index?nomenu=1","timestamp":"2014-04-17T18:48:28Z","content_type":null,"content_length":"3981","record_id":"<urn:uuid:5cda9280-bfb2-45ec-b428-f0993dcec7aa>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Castro Valley Find a Castro Valley Calculus Tutor ...I have enjoyed tutoring ever since. After taking chemistry I began tutoring it also and with these two subjects as well as working in the stockroom I was able to pay for most of my education. Most of the students I tutored came by word of mouth from my previous students. 19 Subjects: including calculus, chemistry, physics, statistics ...I've recently graduated from Santa Clara University with a Biochemistry degree. While I was obtaining it, I spent many hours helping my fellow students understand the material we learned in class. I would also love explaining the science I had learned to my friends. (Teaching them about subject... 24 Subjects: including calculus, chemistry, physics, geometry ...From higher level math right down to elementary school math - I love it all! As a high school student, I was a private math tutor for a 5th grader, and I also regularly helped students in our school's after school tutoring program. I have a lot of experience helping students of all levels! 27 Subjects: including calculus, chemistry, physics, geometry I just recently graduated from the Massachusetts Institute of Technology this June (2010) with a Bachelors of Science in Physics. While I was there, I also took various Calculus courses and courses in other areas of math that built on what I learned in high school. I'm a definite believer in the value of knowing the ways the world works, and the value of a good education. 6 Subjects: including calculus, physics, algebra 1, algebra 2 ...Physical science includes the branches of natural science and science that study non-living systems. The foundation of physical sciences (chemistry, physics, and earth science) rests on key concepts and theories which explains and / or models particular aspects of the behavior of nature. I tuto... 
60 Subjects: including calculus, chemistry, reading, writing
{"url":"http://www.purplemath.com/castro_valley_ca_calculus_tutors.php","timestamp":"2014-04-16T07:23:58Z","content_type":null,"content_length":"24156","record_id":"<urn:uuid:f5e970f7-16b7-4c2e-9ef5-87e7e3121c8e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
Functional Transformations
September 21st 2013, 05:35 PM #1
Why is $f(x) = \tfrac{1}{2}x^3$ called a vertical shrink? Isn't it supposed to be a horizontal stretch, because replacing $x$ by $cx$ with $0<c<1$ gives a horizontal stretch?
September 21st 2013, 05:59 PM #2
Re: Functional Transformations
A horizontal stretch would be $\left(\tfrac{1}{2}x\right)^3$.
September 22nd 2013, 01:55 PM #4
Re: Functional Transformations
But then how do you make the distinction between $f\left(\tfrac{1}{2}x\right)$ and $\tfrac{1}{8} f(x)$ if $f(x) = x^3$? Or indeed, in the OP's question, between $f\left(\tfrac{1}{2^{1/3}}x\right)$ and $\tfrac{1}{2} f(x)$? OP, don't hold me to this, but I think the transformation acts as both, but I'm not 100% sure.
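The last reply's point, that the same graph admits both readings, can be verified numerically (a sketch of my own): for $f(x)=x^3$, the vertical shrink $\tfrac12 f(x)$ coincides exactly with the horizontal stretch $f\left(x/2^{1/3}\right)$.

```python
f = lambda x: x**3
k = 2 ** (1 / 3)   # horizontal stretch factor

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    vertical = 0.5 * f(x)     # vertical shrink by 1/2
    horizontal = f(x / k)     # horizontal stretch by 2^(1/3)
    assert abs(vertical - horizontal) < 1e-12
```

So for this particular $f$ the two transformations produce the same function, which is why the question has no single answer.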
{"url":"http://mathhelpforum.com/pre-calculus/222155-functional-transformations.html","timestamp":"2014-04-19T15:23:03Z","content_type":null,"content_length":"38375","record_id":"<urn:uuid:fdc89061-c250-40ec-aa43-fd17620bf81c>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
- Dialogue. Amsterdam University, 1999
Cited by 7 (6 self)
Discourse Understanding is hard. This seems to be especially true for mathematical discourse, that is, proofs. Restricting discourse to mathematical discourse allows us, however, to study the subject matter in its purest form. This domain of discourse is rich and well-defined, highly structured, offers a well-defined set of discourse relations and forces/allows us to apply mathematical reasoning. We give a brief discussion of selected linguistic phenomena of mathematical discourse, and an analysis from the mathematician's point of view. Requirements for a theory of discourse representation are given, followed by a discussion of proof plans that provide necessary context and structure. A large part of semantics construction is defined in terms of proof plan recognition and instantiation by matching and attaching. 1
- INT. WORKSHOP ON FIRST-ORDER THEOREM PROVING (FTP'98), TECHNICAL REPORT E1852-GS-981, 1998
We study the relationship between the structure of discourse and the use of referring expressions in the mathematical domain. We address linguistic, algorithmic as well as representation issues.
We show how referential expressions refer to mathematical statements and how a knowledge intensive approach, domain reasoning with the use of proof plans, are used for discourse understanding. We propose to represent discourse plans as underspecified discourse representation structures being selected and instantiated during discourse processing. Our main emphasis is on the handling of abstract discourse entities. 1 Motivation We have the following practical application in mind: the automatic verification of mathematical textbook proofs. Imagine a program that understands mathematical discourse. Such a device reads proofs, say mathematical arguments taken from textbooks on elementary mathematics, and is then able to communicate its knowledge about what it has read and analyzed. It answers ques... - Int. Workshop on FirstOrder Theorem Proving (FTP'98), Technical Report E1852-GS-981 , 1998 "... . Our long-range goal is to implement a program for the machine verification of textbook proofs. We study the task from both the linguistics and deduction perspective and give an in-depth analysis for a sample textbook proof. A three phase model for proof understanding is developed: parsing, str ..." Add to MetaCart . Our long-range goal is to implement a program for the machine verification of textbook proofs. We study the task from both the linguistics and deduction perspective and give an in-depth analysis for a sample textbook proof. A three phase model for proof understanding is developed: parsing, structuring and refining. It shows that the combined application of techniques from both NLP and AR is quite successful. Moreover, it allows to uncover interesting insights that might initiate progress in both AI disciplines. Keywords: automated reasoning, natural language processing, discourse analysis 1 Introduction In [12], John McCarthy notes that "Checking mathematical proofs is potentially one of the most interesting and useful applications of automatic computers". 
In the first half of the 1960s, one of his students, namely Paul Abrahams, implemented a Lisp program for the machine verification of mathematical proofs [1]. The program, named Proofchecker, "was primarily directed , 1999 "... We propose a promising research problem, the machine verification of textbook proofs. It shows that textbook proofs are a sufficiently complex and highly structured form of discourse, embedded in a well-defined and well-understood domain, thus offering an ideal domain for discourse analysis. Because ..." Add to MetaCart We propose a promising research problem, the machine verification of textbook proofs. It shows that textbook proofs are a sufficiently complex and highly structured form of discourse, embedded in a well-defined and well-understood domain, thus offering an ideal domain for discourse analysis. Because recognizing the structure of a proof is a prerequisite for verifying the correctness of a given mathematical argument, we define a four component model of discourse segmentation. 1 Introduction In order to advance our knowledge of discourse understanding, we have to 1. tackle real-world problems, that is study discourse that is sufficiently complex; 2. build ontologies and formalize knowledge about the domain of discourse; 3. seriously address representation issues; 4. apply reasoning techniques. This is nothing new. But did you ever see a natural language system where each of these four issues has been successfully addressed? Contrarily, many research resources has been spent on a family...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2497342","timestamp":"2014-04-17T19:45:59Z","content_type":null,"content_length":"22335","record_id":"<urn:uuid:f516fea1-436f-4e45-a0e6-dc39ad16c31d>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
Specification and analysis of parallel/distributed software and systems by Petri nets with transition enabling functions
March 1992 (vol. 18 no. 3) pp. 252-261
Y.E. Papelis, T.L. Casavant
An approach for visually specifying parallel/distributed software using Petri nets (PNs) extended with transition enabling functions (TEFs) is investigated. The approach is demonstrated to be useful in the specification of decision-making activities that control distributed computing systems.
Petri nets (PNs) are employed both for their highly visual nature, which can give insight into the controller of the system under study, and for their analytical properties. To increase the expressive power of PNs, an extension based on transition enabling functions (TEFs) is used. The main focus is the specification and analysis of parallel/distributed software and systems. A key element of this approach is a set of rules derived to automatically transform such an extended net into a basic PN. Once the rules have been applied to transform the specification, analytical methods can be used to investigate characteristic properties of the system and to validate correct operation.
Index Terms: parallel/distributed software; Petri nets; transition enabling functions; TEFs; specification; decision-making activities; distributed computing systems; analytical properties; expressive power; PNs; formal specification; parallel programming
Y.E. Papelis, T.L. Casavant, "Specification and analysis of parallel/distributed software and systems by Petri nets with transition enabling functions," IEEE Transactions on Software Engineering, vol. 18, no. 3, pp. 252-261, March 1992, doi:10.1109/32.126774
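To make the idea of transition enabling functions concrete, here is a minimal illustrative sketch (our own toy, not the paper's construction or its transformation rules): an ordinary place/transition net in which each transition carries an extra boolean guard over the marking, on top of the usual token-count condition.

```python
# Toy Petri net with per-transition enabling functions (guards).
# Illustrative only: the class, places, and guard below are invented
# for this sketch and do not reproduce the paper's TEF formalism.

class GuardedPetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (inputs, outputs, guard)

    def add_transition(self, name, inputs, outputs, guard=lambda m: True):
        # inputs/outputs: dicts place -> number of tokens consumed/produced
        self.transitions[name] = (inputs, outputs, guard)

    def enabled(self, name):
        inputs, _, guard = self.transitions[name]
        has_tokens = all(self.marking.get(p, 0) >= n for p, n in inputs.items())
        # Basic-PN condition AND the extra enabling function over the marking.
        return has_tokens and guard(self.marking)

    def fire(self, name):
        assert self.enabled(name)
        inputs, outputs, _ = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

net = GuardedPetriNet({'ready': 2, 'done': 0})
# Guard: only fire while fewer than 2 jobs are done.
net.add_transition('work', {'ready': 1}, {'done': 1},
                   guard=lambda m: m.get('done', 0) < 2)
net.fire('work')
net.fire('work')
```

The transformation the paper describes would compile such guards away into ordinary places and transitions, so that standard PN analysis applies.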
Optimizing Lateral Surface Area of a Cylinder February 15th 2009, 06:52 PM #1 Junior Member Feb 2009 Optimizing Lateral Surface Area of a Cylinder A right circular cone has a base radius 5 and altitude 12. A cylinder is to be inscribed in the cone so that the axis of the cylinder coincides with the axis of the cone. Given that the radius of the cylinder must be between 2 and 4 inclusive, find the value of that radius for which the lateral surface area of the cylinder is minimum. Justify your answer. (Note: The lateral surface of a cylinder does NOT include the bases.) Again, I'm not good with Calculus word problems. Am I supposed to find the derivative of the lateral surface area of the cylinder? How do I handle the variables r and altitude? Help please. I really tried to get it, but I'm not used to them. Consider the lateral area $a=2\pi rh$. From similar triangles, $h=12-\frac{12r}{5}$, so we get $a=2\pi r\left(12-\frac{12r}{5}\right)$. Find the derivative and set it to zero: $\frac{da}{dr}=24\pi-\frac{48}{5}\pi r=0$, so $\frac{48}{5}\pi r=24\pi$. Last edited by TheMasterMind; February 16th 2009 at 06:05 AM. Can you please show me how you got h? I know the cone's radius-to-height ratio is 5/12, but not the rest. thanks again! I'm having trouble with how to do this problem or where to go on this... I know what the picture would look like and that's about it. Any help would be really appreciated!
Thank you. A right circular cone has base radius 5 and altitude 12. A cylinder is to be inscribed in the cone so that the axis of the cylinder coincides with the axis of the cone. Given that the radius of the cylinder must be between 2 and 4 inclusive, find the value of that radius for which the lateral surface area of the cylinder is minimum. Justify your answer and note that the lateral surface of a cylinder does NOT include the bases. Draw a side-on diagram. Let the radius and height of the cylinder be r and h respectively. From similar triangles: $\frac{5 - r}{h} = \frac{5}{12}$ .... (1) Lateral surface area of cylinder: $S = 2 \pi r h$ .... (2) Use (1) to express $S$ as a function of $r$ only. Use calculus to find the value of $r$ that minimises $S$. Last edited by mr fantastic; March 18th 2009 at 05:38 AM. Reason: No edit - just flagging post as having been moved from thread with duplicate question. Draw the side view, being an isosceles triangle with a rectangle inside it. Note that the rectangle, in touching the sides of the triangle, cuts off smaller triangles which must be similar to half of the original triangle. Note that, whatever the value of the cylinder's radius "r" is, the base of the lower small triangle (let's look at the one on the right of the rectangle) is 5 - r. Also, the height "h" of this smaller triangle is related, by similarity, to the height of the original triangle by: $\frac{h}{12}\, =\, \frac{5\, -\, r}{5}$ Solve this relation to get "h" in terms only of "r", noting that the height of the smaller triangle is also the height of the cylinder. Then try to create a formula for the surface area of the "sides" of the cylinder in terms only of "r", and optimize. Remember to check the endpoints of the given interval. Last edited by mr fantastic; March 18th 2009 at 05:38 AM. Reason: No edit - just flagging post as having been moved from thread with duplicate question.
See here: http://www.mathhelpforum.com/math-he...ibed-cone.html That makes sense, but the only thing I don't understand is... how do you know whether r=2.5 is a max or a min?
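A quick numerical check of the algebra above (in Python, not part of the original thread): the critical point $r=2.5$ turns out to maximize the lateral area, because the quadratic opens downward, so on the closed interval $[2,4]$ the minimum occurs at an endpoint.

```python
import math

def lateral_area(r):
    # Cylinder inscribed in a cone of radius 5, height 12:
    # similar triangles give h = 12 - 12r/5, and S = 2*pi*r*h.
    h = 12 - 12 * r / 5
    return 2 * math.pi * r * h

# da/dr = 24*pi - (48/5)*pi*r vanishes at r = 2.5, but the parabola
# opens downward, so that is a MAXIMUM; compare the endpoints of [2, 4].
areas = {r: lateral_area(r) for r in (2.0, 2.5, 4.0)}
r_min = min(areas, key=areas.get)
```

With these numbers, $S(2)=\frac{144\pi}{5}$, $S(2.5)=30\pi$ (the maximum), and $S(4)=\frac{96\pi}{5}$, so the minimum on $[2,4]$ is at the endpoint $r=4$.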
Injectivity of Holomorphic function March 31st 2011, 02:53 PM #1 Nov 2009 Injectivity of Holomorphic function Let U be an open disk around the origin in $\mathbb{C}$. Suppose $f:U \rightarrow \mathbb{C}$ is holomorphic on $U$, $f(0) = 0$ and $f'(0) = 1$. I want to show that there exists a neighborhood $V$ of $0$, $V \subset U$, so that $f$ is injective on $V$. Can anybody help? Last edited by EinStone; March 31st 2011 at 03:37 PM. Isn't there a nice complex analysis analogue of the implicit function theorem which takes care of this quite nicely? Yes, it seems to be a general statement that a holomorphic function is locally invertible if it has a non-vanishing derivative. Something like an inverse function theorem, but I can't find it anywhere; if someone has a proof of this fact or a reference, that would be great. There is a proof in this book, page 26.
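A standard argument (not from the thread) gets injectivity directly from the continuity of $f'$, without invoking a full inverse function theorem:

```latex
% Sketch: local injectivity from f'(0) = 1 and continuity of f'.
\textbf{Claim.} There is $\delta>0$ with $V=\{\,|z|<\delta\,\}\subset U$
on which $f$ is injective.

\textbf{Proof sketch.} Since $f'$ is continuous and $f'(0)=1$, pick
$\delta>0$ so small that $|f'(z)-1|<\tfrac12$ on $V$. For $z_1,z_2\in V$,
integrate along the segment from $z_2$ to $z_1$ (possible since $V$ is
convex):
\[
\bigl|f(z_1)-f(z_2)-(z_1-z_2)\bigr|
  \;=\;\Bigl|\int_{z_2}^{z_1}\bigl(f'(\zeta)-1\bigr)\,d\zeta\Bigr|
  \;\le\;\tfrac12\,|z_1-z_2|.
\]
Hence $|f(z_1)-f(z_2)|\ge\tfrac12\,|z_1-z_2|>0$ whenever $z_1\neq z_2$,
so $f$ is injective on $V$. \qed
```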
Matrices anyone October 13th 2012, 07:55 PM #1 Oct 2012 Matrices anyone Hi guys, I came across this "inverse of matrices by reduction" question and need your help. Find the inverse of the matrix A= [row1(1 2) row2(5 3)] and use it to solve { x+2y=7, 5x+3y=28 }. Re: Matrices anyone What you have to do is augment A to the identity matrix ( [A | I] ) and then use elementary row operations until you turn A into I. By doing these row operations on I in the same order you performed them on A, you will end up with the inverse of A (that is, [A | I] turns into [I | A^-1] after using elementary row operations.) Then you can use the inverse to solve the system (turn the system into its matrix form, then multiply both sides by the inverse). I hope that made sense (and helps). Re: Matrices anyone But I'm struggling to apply the proper elementary row operations. Assassim0071, do you know the sequence of operations I could possibly use? Re: Matrices anyone Okay, so you have a 2x2 matrix. To reduce that into the identity, you want to get 1s on the diagonal and zeros everywhere else. We can use row operation III (add a multiple of one row to another) to turn the first entry of the second row to zero by adding -5 times the first row to the second. This eliminates the 5 from the second row and turns the 3 into a -7. Then you can use row operation II (multiply a row by a scalar) to turn the -7 into a 1. Now we have 1s on the diagonal and only one other non-zero entry; all we need to do is get rid of the 2 in the first row. We do this by using row operation III again and add -2 times the second row to the first. This should give you the identity. So by performing those exact operations on the identity you should obtain A inverse.
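The row-reduction described above can be checked in a few lines of Python (a sketch using exact fractions; there is no pivot-swapping here, which is fine for this particular matrix but not for a general one):

```python
from fractions import Fraction

def inverse_2x2_by_reduction(A):
    # Augment [A | I] and row-reduce, mirroring the thread's steps:
    # R2 <- R2 - 5*R1, R2 <- R2 / (-7), R1 <- R1 - 2*R2.
    n = 2
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = M[col][col]                      # scale so the pivot is 1
        M[col] = [x / piv for x in M[col]]
        for row in range(n):
            if row != col:                     # clear the rest of the column
                factor = M[row][col]
                M[row] = [a - factor * b for a, b in zip(M[row], M[col])]
    return [r[n:] for r in M]

A = [[1, 2], [5, 3]]
Ainv = inverse_2x2_by_reduction(A)
# Solve A x = b for b = (7, 28) via x = Ainv * b.
b = [7, 28]
x = [sum(Ainv[i][j] * b[j] for j in range(2)) for i in range(2)]
```

The reduction yields A^-1 = (1/-7)[row1(3 -2) row2(-5 1)], and multiplying by b = (7, 28) gives the solution x = 5, y = 1, which checks against both equations.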
The Higgs Confidence Game December 13, 2011 Can Nature play games against us? Peter Higgs once recorded an autobiographical lecture with the immortal title “My Life as a Boson.” This refers to the Higgs boson, which was named for him even though we don’t know whether it exists. We may, however, learn what is formally called evidence of its existence later today (Tuesday) from officials at CERN. This depends on assessment of confidence intervals projected by two teams of experimenters, and how the data from CERN’s Large Hadron Collider (LHC) measures up to them. Today I (Ken) wish to talk about statistics and social convention in “hard science” such as particle physics. Dick and I are curious whether the assumptions behind the confidence intervals can be violated on both sides: by humans owing to unexpected selection bias, and by Nature possibly acting like a cheating prover in an interactive protocol. Higgs is a Professor Emeritus of the University of Edinburgh in Scotland. He was actually beaten into print in 1964 on the Higgs mechanism by Robert Brout and François Englert. Higgs had tried to publish, but his paper had been rejected as “of no obvious relevance to physics.” He was, however, the first to point out that the prevailing theory of the mechanism required the existence of a new elementary particle. Note that there are other theories of the Higgs mechanism that lack or conceal the Higgs boson, as well as Higgsless models including this paper. The Higgs is massive compared to other elementary particles, but ironically may not be massive enough. At about 125 giga-electron-volts (GeV/${c^2}$ with ${c = 1}$ in natural units) it would be a featherweight in boxing weight class stated in pounds. If the top quark weighs in over 175, which used to be heavyweight but is now cruiserweight, the vacuum with its dark energy could well be only meta-stable. 
This means it could quantum-tunnel to a lower energy configuration, which would knock out our cosmos at the speed of light. Forget the Mayan calendar for 2012 or the latest doomsday preacher—the real rapture may devolve upon Geneva during today's press conference. The G-d Particle Higgs himself believes neither the particle nor the mechanism should carry his sole name, and was happy that he, Brout, Englert, and the three authors of another 1964 paper (Gerald Guralnik, Carl Hagen, and Tom Kibble) were all awarded the 2010 J.J. Sakurai Prize for this work. He may have gotten his wish, as the popular name "The God Particle" has stuck to the boson. This is the title of a 1993 book by Nobel prize-winning physicist Leon Lederman and science writer Dick Teresi. According to Higgs, Lederman had wanted to title the book The G*d*mm Particle to emphasize how elusive the boson was. His publisher declined to have a swear word in the title, but thought it fine to use just "God." However, they could have settled on the Orthodox Jewish practice of writing "G-d" to avoid situations where the fully-written name might be erased or discarded. The title The G-d Particle could then be read with Lederman's original meaning or not. Higgs is said to join many scientists regretting the "God Particle" name, more from concern over hype than irreverence. The Higgs mechanism explains how certain elementary particles acquire mass, via symmetry-breaking. It is not clear whether this determines mass for all particles. By wave-particle duality, the boson is a ripple in the Higgs field. If mass is good, then the boson could be called the "good particle" or the godfather of mass. Finding it—in the form hinted by the present experiments—would complete the set of building blocks for the Standard Model (SM) of particle physics. There is much less doubt that the Higgs field exists and is good, indeed providential.
Confidence in Discovery What strikes us is that when and whether the SM Higgs boson is declared discovered depends on a social convention on assessed confidence intervals. Readings that are three standard deviations (${3\sigma}$) away from limits required by the "null hypothesis" of no Higgs presence give 99.7% confidence, but may only be called "evidence." It takes a ${5\sigma}$ event to claim discovery. We have blogged about the sigmas before in connection with the ${6\sigma}$ claim for faster-than-light neutrinos. Rumor has it that the ATLAS team will claim a ${3.5\sigma}$ deviation, and the CMS team will claim about ${2.5\sigma}$. These would aggregate to about ${4.3\sigma}$ if the results are independent. Whether independence holds between them is being argued, but we have a more basic question first: Where did the value(s) of "${\sigma}$" come from? When an experiment is repeatable, we can soon pin down the standard deviation and interpret accordingly. A first-time or one-off or rare event, however, heightens the inconvenient question discussed here by physicist and blogger Sean Carroll: What are the error bars on your error bars? I've confronted this issue in my chess research as also described in the post mentioned above. My probabilistic model of move choice projects confidence intervals based on representing legal moves as multinomial Bernoulli trials, after fitting playing-skill parameters to training data. Besides the Bernoulli error there is modeling error from imperfections in the computer chess analysis used as data and from assumptions such as all move decisions being independent that are not-quite-true.
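For concreteness, the sigma-to-odds conversions and the "aggregate in quadrature" figure above can be reproduced in a few lines (a sketch; adding independent z-scores in quadrature is just one simple convention, and the experiments themselves combine full likelihoods rather than bare sigmas):

```python
import math

def one_sided_p(z):
    # P(Z > z) for a standard normal, via the complementary error function.
    return 0.5 * math.erfc(z / math.sqrt(2))

def combine_in_quadrature(z1, z2):
    # Independent significances: z = sqrt(z1^2 + z2^2).
    return math.hypot(z1, z2)

z_combined = combine_in_quadrature(3.5, 2.5)   # about 4.3 "sigma"
p_3sigma = one_sided_p(3.0)                    # roughly 740-to-1 odds
p_5sigma = one_sided_p(5.0)                    # roughly 1 in 3.5 million
```

These match the figures in the text: a one-sided ${3\sigma}$ event has probability about 0.00135, and ${\sqrt{3.5^2+2.5^2}\approx 4.3}$.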
When applied to players accused of cheating by consulting computer programs during games, my model attempts to make judgments of the form, "for Player ${X}$ to have played 70% of the computer's moves when his expectation was 61% over 200 relevant turns in his nine games is a ${3\sigma}$ deviation, hence 740-1 odds against the null hypothesis of no cheating." In my case I can test the Bernoulli-projected ${\sigma}$ by generating (say) 10,000 random nine-game performances out of the 400-odd games (which make 800 game-plays) in the training data for a nearby skill level, and comparing how many actual agreement percentages with the computer fall within ${c\sigma}$ of their projections with ${z}$-intervals from the normal distribution. These tests reveal that the projected ${\sigma}$ given by my current model needs to be divided by about ${1.15}$. This would still allow claiming 2.61${\sigma}$ of "actual" deviation, for 220-1 odds. However, it can be argued that random subsets of nine games by various players are different in kind from nine games by the same player, which is why I am expanding the data-taking of performances by players. Supplementing this error analysis by other statistical methods of gauging confidence may be needed—note that this paper by Louis Lyons unabashedly presents the panoply of methods applicable to particle physics. The Littlewood and Look-Elsewhere Problems Two other problems being discussed have analogies in the chess work. One is that if I run my analysis on a tournament with over 220 players, I am likely to find a performance that my model would judge to be a 220-1 outlier. This is an example of Littlewood's Law, named for John E. Littlewood who famously collaborated with Godfrey H. Hardy. Hence the situation requires other distinguishing features to support an accusation of cheating, such as plausible physical evidence or observation of wrongdoing. To be sure, the experimentalists have accounted for these factors.
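The resampling check described earlier — comparing a Bernoulli-projected ${\sigma}$ against the spread of simulated performances — can be sketched as follows (a toy with made-up numbers: here the simulated trials really are independent, so the two values agree by construction; with real game data, correlations between decisions are exactly what such a test can expose, yielding the 1.15 correction factor):

```python
import math
import random

def projected_sigma(ps):
    # Sum of independent Bernoulli trials: Var = sum of p*(1-p).
    return math.sqrt(sum(p * (1 - p) for p in ps))

def resampled_sigma(ps, trials=20000, seed=0):
    # Draw many simulated performances and measure the actual spread
    # of the number of matched moves around its expectation.
    rng = random.Random(seed)
    mean = sum(ps)
    sq = 0.0
    for _ in range(trials):
        total = sum(1 for p in ps if rng.random() < p)
        sq += (total - mean) ** 2
    return math.sqrt(sq / trials)

ps = [0.61] * 200              # 200 turns with 61% expected agreement
proj = projected_sigma(ps)     # about 6.9 matched moves
emp = resampled_sigma(ps)      # close to proj here, by construction
```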
The related look-elsewhere problem, however, is being said to reduce the ATLAS deviation from ${3.5\sigma}$ to an effective ${2.2\sigma}$. This also has an analogy in my chess work. Suppose I have two 9-game tournaments with much the same roster of 200 players. Between the tournaments there are 10 ways I can take 9 consecutive games to observe. This is not the same as having 2,000 independent trials to range over, but it greatly improves the odds of 220-1 events or even 740-1 events being merely expected by normal chance. In the Higgs case the spread effect comes from sliding scales of experimental parameters that dovetail with trying to pinpoint the mass of the particle. Here is an example by physicist and blogger Matt Strassler. Even with these problems seemingly tamed, there are still many cases where reported ${3\sigma}$ phenomena “disappear” on further probing. This should happen only ${0.3\%}$ of the time, but seems to happen more often. The chess-playing quantum physicist Tommaso Dorigo detailed one recent important such case on his blog here. The explanation is that this was “really” only a ${1\sigma}$ deviation, which happens by chance 32% of the time, or 16% in a one-sided case. The high rate of disappearing significance overall is still puzzling. Its extent in human sciences was detailed exactly one year ago in a disturbing article by Jonah Lehrer for The New Yorker. If 250 researchers try the same experiment, one would expect 2 or so of them to get ${2.5\sigma}$ deviations (in either direction). The world will then see 2 or so published papers from them, but nary a peep from the 248 who failed and gave up quickly and forgot about it. Thus a significant result may appear independently confirmed when it was actually just by chance, and those failing to reproduce it will then peep up loudly. 
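The look-elsewhere correction and the 250-researchers arithmetic above amount to the same trials-factor calculation, sketched here (a toy: the count of 20 independent "looks" is an arbitrary stand-in, not the number ATLAS actually used, and real look-elsewhere corrections account for correlated, continuously-sliding parameters):

```python
import math

def one_sided_p(z):
    # P(Z > z) for a standard normal.
    return 0.5 * math.erfc(z / math.sqrt(2))

def global_p(local_p, n_looks):
    # Chance of at least one fluctuation this large among
    # n_looks effectively independent places to look.
    return 1 - (1 - local_p) ** n_looks

# Look-elsewhere: a local 3.5-sigma excess, after (an assumed) 20
# independent looks, is globally far less significant.
p_local = one_sided_p(3.5)        # ~2.3e-4
p_global = global_p(p_local, 20)  # ~4.6e-3

# Publication selection: among 250 null experiments, the expected number
# of two-sided 2.5-sigma "findings" is a few, not zero.
expected_hits = 250 * 2 * one_sided_p(2.5)
```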
The effect is equally pernicious with 250 different experiments, especially given a fair chance of a lower-confidence positive from a test deemed related enough to corroborate the original. Does Nature Play Games? In physics, however, high-energy experiments are limited, and the problem of vanishing significance makes us consider stranger possibilities. We will not be the first. A cosmic conspiracy hypothesis involving the LHC and Higgs itself was featured in the New York Times in October 2009. Our thought merely asks whether situations of the kind introduced by Christos Papadimitriou in the paper “Games Against Nature” are “real.” This famous paper is considered one of the forerunners of interactive protocols. In one of several problems treated in the paper, Papadimitriou varied a standard model of random network faults by allowing the probability ${p(e,v)}$ of failure in a graph edge ${e}$ to depend also on the current vertex ${v}$ of a “Runner” on the graph. He showed that determining whether Runner has a 50% chance of reaching a goal node ${t}$ when the graph is allowed to conspire against him in this way is complete for ${\mathsf{PSPACE}}$. We ask simply, Can Nature do this? That is, can Nature alter probabilities ${p}$ of events ${e}$ after seeing information ${v}$ that is not directly local to ${e}$ but involves some goal ${t}$? Is the computational power needed to do this available? Note that the word “after” is suspect, but the time-symmetry of many processes involving particles also enhances the plausibility of the question. We invite those who know more about the pertinent aspects of physics and information theory to comment. For support at least by allusion, however, we note an article in the Nov. 16, 2011 issue of Nature by Gilles Brassard on quantum protocols for attempting to prove spatial position that fail under unexpected attacks. We say unexpected because a paper last year had seemed to prove their security, but this was broken as we noted here. 
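Papadimitriou's fault model — failure probabilities ${p(e,v)}$ that depend on Runner's current vertex ${v}$, not just on the edge ${e}$ — can be illustrated on a toy acyclic graph (illustrative only: the graph, probabilities, and DAG restriction are ours, and the ${\mathsf{PSPACE}}$-completeness concerns the general decision problem, not this easy special case):

```python
# Toy version of the "games against nature" fault model: the chance
# that an edge fails may depend on the Runner's current vertex.
# On a DAG the optimal reach probability is a simple maximization.

def reach_prob(v, t, edges, fail_prob, memo=None):
    # edges: dict vertex -> list of successor vertices (acyclic)
    # fail_prob(e, v): chance that edge e fails given Runner stands at v
    if memo is None:
        memo = {}
    if v == t:
        return 1.0
    if v in memo:
        return memo[v]
    best = 0.0
    for w in edges.get(v, []):
        p_fail = fail_prob((v, w), v)
        best = max(best, (1.0 - p_fail) * reach_prob(w, t, edges, fail_prob, memo))
    memo[v] = best
    return best

edges = {'s': ['a', 'b'], 'a': ['t'], 'b': ['t']}

def fail_prob(e, v):
    # Made-up table: nature is harsher on the edge out of 'a' than 'b'.
    table = {(('s', 'a'), 's'): 0.1, (('s', 'b'), 's'): 0.2,
             (('a', 't'), 'a'): 0.5, (('b', 't'), 'b'): 0.1}
    return table[(e, v)]

p = reach_prob('s', 't', edges, fail_prob)  # best route: s -> b -> t
```

Here the route through 'b' wins (0.8 × 0.9 = 0.72 versus 0.9 × 0.5 = 0.45); the hard question in the paper is deciding, for general graphs, whether such a value reaches 1/2.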
The paper surveyed by Brassard proves instead that in some general settings no secure protocols can exist; a followup paper is here. Admittedly with little more than surmise, we are prompted to ask: can Nature deliver unexpected attacks on the protocols involved in experiments? Can it induce us to accept false conclusions with high probability, in the manner of a "cheating prover"? Does it have the computational capability to do so, in real time? Open Problems To state the more positive side of our question, is there a general computational way to "extract independence" from experimental data to maximize confidence in the results? Does the Higgs boson exist? We note a poll mentioned here of several leading physicists before this month's rumors, with widely varying and amusing responses. Update: The results announced at CERN were fairly close to rumors and expectations; for summaries and reactions with varying degrees of intensity see Tommaso Dorigo, Philip Gibbs, Matt Strassler, Quantum Diaries, Peter Woit, Luboš Motl. MSNBC Cosmic Log has a nice simple summary of facts and statistical issues. We also thank commenter Surya for this note which was our first alert about the Higgs news. Update 6 July 2012: The Higgs inched across the Discovery line in time for the 4th of July. John Huth wrote a humorous post for the blog "Quantum Diaries" channeling Mark Twain. This snippet expresses a point of my original post: [The CERN presenter], who seemed quite earnest, moved to what I gathered was the 'punch-line,' as I could sense the rapt attention of the pilgrims in the room. When he combined the results of two kinds of fireworks, lo-and-behold!, a magic barrier was crossed. Now, dear reader, you don't need to know my opinion of statistics, but I will tell you that there is something called a 'five-sigma' effect.
This manifestation of statistics is deemed by the cognoscenti to be a 'discovery.' My guide seemed to be glued to the screen, hanging on every word this gentleman uttered. When I inquired about the meaning of this 'five-sigma' miracle, he told me that if it was 4.9, it didn't count. I was rather amazed that such a small fraction divides a miracle from a non-miracle, but he said it was thus. Humour aside, it's convincing to me, hail to the two teams and everyone who built the collider! Meanwhile I described a concrete situation to which my concerns about probabilities can be transferred. [Updated links post-press-conference, expanded sentence in intro on Higgsless models, July 6 update.] 1. December 13, 2011 9:29 am As Press et al. put it in their estimable Numerical Recipes: One would expect a measurement to be off by ${\pm 20\sigma}$ only one time out of ${2\times10^{88}}$. We all know that "glitches" are much more likely than that! Whenever we make predictions based on models that can be falsified by what Gödel's Lost Letter and P=NP calls "burning arrows," our confidence in those predictions tends to be too great in proportion to our expertise in that model; this effect belongs to a broad class of ubiquitous cognitive phenomena that psychologists call Dunning-Kruger Effects. For example, in the 1700s, what were the rational odds that Euclid's Parallel Postulate would turn out to be false? In the 1800s, what were the rational odds that mathematics would turn out to be undecidable? In the early 1900s, what were the rational odds that the state-space of Nature would turn out to be non-Newtonian? In the latter 1990s, what were the rational odds that the financial system would suffer an instability-driven meltdown? Nowadays, what are the rational odds that (e.g.)
the state-space of Nature is non-Hilbert, or that oracle-defined complexity classes are …? In all of these cases, leading experts grossly underestimated the probability of burning-arrow model innovation, and in some cases leading experts even vehemently denied that burning-arrow model revisions were logically conceivable. Thus one more humbling lesson of history is that Dunning-Kruger effects bar us all — experts especially! — from giving reliable quantitative estimates for the probability of burning-arrow revisions of even our most cherished models. So is there a burning arrow that will falsify the physics of the Standard Model? Plenty of physicists hope so! What is the probability that this arrow will be found? Almost certainly human intelligence, no matter how expert, cannot reliably estimate that probability. 2. December 13, 2011 9:54 am Ken asks: Can Nature induce us to accept false conclusions with high probability, in the manner of a "cheating prover"? As a followup, historical examples of the false induction that Ken requests include: Example 1: the Darwinian evolution of fitness tempts us to embrace the postulate of intelligent design. Example 2: Cubic crystals exhibit isotropic thermodynamic properties even though the underlying atomic lattice is not isotropic. Example 3: Particle detectors mimic coarse-grained quantum jumps via fine-grained unitary evolution (that is, "there are no jumps in Nature"). An interesting question (that I cannot answer) is, are there any mathematical examples of Nature inducing us to accept false postulates? E.g.: Example 4: Nature endows humans with an ability to prove theorems sufficient to induce the dual illusions that (a) mathematics is decidable and (b) theorem-proving is in P. Even so gifted and expert a mathematician as David Hilbert (and many others) embraced the illusions of #4 … and even today, the enduring power of these illusions remains to be understood. :)
3. December 13, 2011 7:22 pm I am a big fan of your blog and have learned many interesting things from it. A short comment that may be of historical interest: there is a controversy regarding priority for the Higgs boson. The mechanism that gives rise to the Higgs particle was also proposed by P. W. Anderson in 1962. There is a nice discussion of this on Peter Woit's blog: □ December 13, 2011 7:48 pm Thank you for pointing that out. I was aware of it, and I believe Anderson taught my first term of freshman physics at Princeton in 1977—I know P.J. Peebles taught my second term and I wrote a term paper on monopoles. I chose to skip it because I felt I'd need to mention differences such as Woit points out and why the Sakurai Prize left Anderson out. I think my sentence saying what Higgs was first in is correctly constructed—I could have added "relativistic" somewhere—and I did emphasize that Higgs feels he should not have sole credit for the mechanism (either). I economized the intro for counterpoint to the recent priority case with another Edinburgh person over matrix multiplication. 4. December 14, 2011 8:19 pm To state the obvious, if Nature turned out to behave like a "cheating prover", that would be a vastly more important discovery than the Higgs boson itself! The whole fact that's made physics possible since Galileo is that the universe appears to obey laws that are homogeneous across time and space, and that one can learn about via independently-repeated experiments.
So suspecting a breakdown of those facts in the latest collider experiment, with not even a hint of motivation (so far, everything is perfectly consistent with a ~125 GeV Higgs being there!), is sort of like telling someone: "Aha, you say you probably left your keys at the office, but what if you really can't find them because they were stolen by trans-dimensional cyborgs?" I.e., it's a logical possibility, but a ludicrously-unhelpful one, the sort of thing Sheldon from The Big Bang Theory might say. □ December 14, 2011 10:36 pm Hi, Scott. Among things on my mind when I wrote this last section was your followup comment here to your NY Times blog item (still latest on your blog as I write): "if both quantum mechanics and the prevailing conjectures in complexity theory are valid, then the physical universe can't be feasibly simulated by a [classical] computer…" —going on to argue that the classical cellular automaton model of Wolfram and Fredkin et al. under-powers the Universe. Your position seems to be that the Universe is computationally more powerful, but not in ways that can actually do boshaft (malicious) computation. I might agree, and hope to, but click my link to see what Gian-Carlo Rota and Fabrizio Palombi go on to say about "duplicitous behavior" and the natural sciences just below. My motivation comes not from the Higgs case in isolation but from the two paragraphs just above that section, about "disappearing significance" cases in physics (see e.g. this comment by Matt Strassler), and then in the human sciences where there's an easier explanation. The nub is my attempt to expand a definition from Papadimitriou's paper, trying to approach by simple means the way "emergent probability" is addressed for instance here, here, here (conservatively?), and by Bryce DeWitt here (pages). ☆ December 15, 2011 12:38 am I should add: Rota and Palombi are not talking about Nature "cheating" but about whether Nature "reduces" in the way they find paradoxical for math.
Underlying their book and some of my other sources is philosophical phenomenology. ☆ December 15, 2011 2:57 am Yeah, as you point out, I think Rota and Palombi were talking about something completely different in that passage (which was lovely and interesting, though!). To clarify, I enjoyed your discussion of some of the subtle philosophical questions involved with when the collider physicists are allowed to claim a “discovery.” However, I was worried that you downplayed a simple but important point: that within a few years, CERN hopes to rack up enough sigmas that all these philosophical questions will basically be moot! And I see no reasons from science or history to suspect they won’t succeed. To put it more broadly: contrary to what’s often claimed, I don’t think skepticism is a universal good! To be productive, skepticism about an experiment should start with prosaic possibilities specific to that experiment—not speculations that could just as well overthrow millions of other experiments, like Nature “cheating” by solving a PSPACE-complete problem… ☆ December 15, 2011 3:26 am What I want to know is why is a respected physicist and blogger like Strassler writing things like this—? “…Most experimentalists I am talking to start to answer me by telling me a story about a 3 sigma result they know about that went away after more data was collected. Explain to me, please, how it could be, statistically, that the four ZZ events at CDF reported this summer at 325-327 GeV were a fluke. And yet it appears they were. ” This was after I wrote the post, though I had seen other reference to the CDF events. Why weren’t more sigmas racked up on it? I’ll take the “over” on a Higgs at 124–126, but I wonder if anyone has done a study on the frequency of upholding 3-sigma events that people thought were important. 
For a maybe-useful aspect, does the “Zoo” have a class C that quantifies the power needed to “cheat” a substantial protocol in some sense, maybe with evidence of C being disjoint from BQP? And just to state what I believe rather than speculate here, it goes (only) as far as suspecting that prior entanglements exist that affect probabilities on larger scales than commonly thought. ☆ December 15, 2011 6:44 am Ken asks: “I wonder if anyone has done a study on the frequency of upholding 3-sigma events that people thought were important.” Ken, two such studies (texts freely available on PubMed) are Emerson et al. “Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial” (PMID:21098355) and Lynch et al. “Commercially funded and United States-based research is more likely to be published; good-quality studies with negative outcomes are not” My esteemed Seattle medical colleague Seth Leopold played a leading role in both studies, and in many enjoyable conversations Seth taught me much about the extraordinary difficulties that are encountered in systematically investigating the cognitive biases associated to statistical likelihood estimation in the context of peer review. A pure physics study that reaches similar conclusions to Leopold’s (regrettably it is paywalled) is Gillies’ “The Newtonian gravitational constant: recent measurements and related studies” (1998); Gillies’ Fig. 1 in particular presents multiple measurements of Newton’s ${G}$ that differ by 5-sigma and more; this study too provides concrete evidence that even highly skilled research teams commonly fare poorly in assessing confidence levels. Despite these well-documented difficulties, almost everyone (including me) embraces the reasonable opinions that: (1) further observations likely will confirm a Higgs boson at 125 GeV — but maybe not! 
— and (2) almost certainly the Standard Model that predicted this boson is incomplete and will require major modifications in coming decades. This balance between confidence and humility is important to the vitality of science. As Freeman Dyson has put it: “If science ceases to be a rebellion against authority, then it does not deserve the talents of our brightest children.” And yet the continuing creative rebellion that is science must not devolve into anarchy, but rather must grapple effectively with the “hunger, poverty, desperation, and chaos” that remain too widespread in the world.

☆ December 16, 2011 11:12 am
John, I love those quotes, thank you for them!

□ December 14, 2011 11:10 pm
Scott, with a view toward cultivating holiday-season cheer here on Gödel’s Lost Letter and P=NP, please let me recount for your consideration the following analysis of Newcomb’s Paradox (as set forth in your lecture that your post linked-to) in light of the principle of Nature’s Benign Mendacity: Smiling blithely, you walk into the room in which the Predictor has placed two boxes. As the proctors look on, you open both boxes, and to the proctors’ amazement, both boxes are filled with money. The proctors ask you to explain, and you say: “The Predictor and I both are rational and benign beings, and therefore I foresaw that the Predictor would do precisely what I would have done myself, namely, fill both boxes with money (of course fully appreciating that this action would constitute a Lipton-Regan ‘burning arrow.’)” The proctors press for a deeper explanation, and you say: “The Predictor too necessarily has free will — without it she cannot reliably predict my actions — and of course we both preferred your surprise to my disappointment. Truth is a trickster, you know!” The proctors press for a still-deeper explanation, and smiling you say: “Look inside the box! Perhaps the Predictor has left a cheerful holiday message there for you!” :)

5.
December 14, 2011 9:13 pm
Ken asks: Can Nature induce us to accept false conclusions with high probability, in the manner of a “nurturing parent”? Here I have altered “cheating prover” to “nurturing parent” because Ken’s question becomes much more interesting — without alteration of its overall thrust — when we ascribe benign motives to Nature’s penchant for mendacity. We discern in the past history of mathematics and science many instances of Nature’s benign mendacity; examples follow.

Nature seemingly is translationally invariant in space and time; to shield our growing understanding from various pathologies associated to spatial and temporal infinity she benignly induces general relativity.

Nature seemingly is relativistically (boost) invariant; to shield our growing understanding from various pathologies associated to infinite energy she benignly induces event horizons.

Nature seemingly is infinitely divisible; to shield our growing understanding from various pathologies associated to the collapse of matter she benignly induces (nonrelativistic) quantum mechanics.

Nature seemingly is infinitely causally separable and localizable; to shield our growing understanding from various pathologies associated to … uhhh … exponential dimensions and multi-universes of Hilbert space (?) … uhhh … she benignly induces … uhhh … well that bit’s not clear at present, eh? :)

But history gives us ample reason to hope and expect that Nature has some grand-yet-benign revelation prepared for us, of great mathematical subtlety and beauty, associated to the intersection of quantum mechanics and general relativity, that will help us grow out of our present too-naive understanding of locality, separability, and causality. With luck, observing the Higgs boson carefully, and thinking deeply about what we are seeing, will help us appreciate the benign surprise(s) that Nature has prepared for us. Because beyond doubt, we do live in a wonderful universe.
And on that optimistic note, best wishes for a Happy Holiday Season are extended to all readers of Gödel’s Lost Letter and P=NP!, and thanks particularly to Dick and Ken for another great year of posts! :)

□ December 16, 2011 11:18 am
I suppose Nature also might be shielding others from us (perhaps gently or not so gently later on), i.e., one of several flavors of response to Fermi’s Paradox.

□ December 16, 2011 12:43 pm
Delta, perhaps you will enjoy too Carl Sagan’s high-technology extension of the Judaic concept of gematria, from Sagan’s novel Contact (1995): “Whoever makes the universe hides messages in transcendental numbers so they’ll be read 15 billion years later when intelligent life finally evolves.” It’s fair to say that most mathematicians and scientists (and me too) think Sagan’s notion is mathematically nonsensical — an interesting idea pushed too far. On the other hand, the publishing house Simon & Schuster gave Sagan a $2 million advance based on his narrative idea, so maybe on some level Sagan’s hypergematria is not entirely crazy, eh? Perhaps one broad lesson is that engineering, science, and mathematics all gain greatly when mathematical ideals of naturality are formalized and fused with humanistic ideals of narrative. This process of formalization and fusion is ongoing, progressive and irreversible, and now seems to be accelerating in the early 21st century. Good. Indeed, it seems (to me) that there is scarcely any other avenue than a Sagan-style naturality-narrative fusion by which the STEM enterprise can “make sense.” So, more Carl Sagans please! :)

☆ December 16, 2011 1:11 pm
John, I don’t know if Sagan would have liked to say he was extending “gematria”. However, I do allow that analogy when speaking of algorithmic probability, insofar as it is defined in terms of some textual representation of an object.
I could add to what I said above about how much of this speculation I’ve actually adopted by saying I believe algorithmic probability plays a role in physical processes—and perhaps detectably in a digital domain such as the LHC chamber and data recording. There may, however, be obstacles to detecting it, by extension from this paper by Ulvi Yurtsever.

☆ December 16, 2011 6:23 pm
Ken, I for one had not previously encountered algorithmic (Solomonoff) probability, and so I am both grateful for and interested in the references you supplied. Perhaps (hopefully) in 2012 Gödel’s Lost Letter and P=NP will discuss these interesting ideas further.

☆ December 16, 2011 6:35 pm
Indeed!–the intent to do so was a motive for this post. I may even have a concrete connection to (faults in) Zobrist key schemes for hash tables. For now let me link just this nice primer on the concept.

☆ December 17, 2011 1:14 pm
Great, glad this was brought up — look forward to seeing more.

6. December 17, 2011 4:10 pm
There were some posts over at my blog that are related to some interesting statistical questions raised here. Can we expect 95% of statistically-based results with significance level 95% to be correct? Will it help to raise the required significance level to 99.5%? How do we add up the p values of two independent experiments? Should we believe the same two results both with 4-sigma significance, where in one the effect is large and in the other the effect is small?
Here are the links: http://gilkalai.wordpress.com/2009/08/26/test-your-intuition-9/ , http://gilkalai.wordpress.com/2009/09/06/answer-to-test-your-intuition-9/ and http://gilkalai.wordpress.com/
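One of the questions above, how to add up the p-values of two independent experiments, has a classical textbook answer in Fisher's method. The sketch below is my own illustration (the function names are mine, not from the linked posts); note that the closed-form chi-squared tail used here is valid only when combining exactly two p-values, which gives four degrees of freedom.

```haskell
-- Fisher's method for combining two independent p-values.
-- Under the null hypothesis, -2 * (log p1 + log p2) follows a
-- chi-squared distribution with 4 degrees of freedom.
fisherStatistic :: Double -> Double -> Double
fisherStatistic p1 p2 = -2 * (log p1 + log p2)

-- Survival function of the chi-squared distribution with 4 degrees
-- of freedom, which has the closed form exp(-x/2) * (1 + x/2).
chiSquared4Tail :: Double -> Double
chiSquared4Tail x = exp (-x / 2) * (1 + x / 2)

-- Combined p-value for two independent experiments.
combinePValues :: Double -> Double -> Double
combinePValues p1 p2 = chiSquared4Tail (fisherStatistic p1 p2)
```

For example, two experiments each reporting p = 0.05 combine to roughly p ≈ 0.017: stronger evidence than either experiment alone, but far weaker than naively multiplying to 0.0025.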
Multidimensional binary search trees used for associative searching Results 11 - 20 of 861 , 2007 "... The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, have made graphics hardware a compelling platform for computationally demanding tasks in a wide variety of application domains. In this report, we describe, summarize, and analyze the l ..." Cited by 319 (16 self) Add to MetaCart The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, have made graphics hardware a compelling platform for computationally demanding tasks in a wide variety of application domains. In this report, we describe, summarize, and analyze the latest research in mapping general-purpose computation to graphics hardware. We begin with the technical motivations that underlie general-purpose computation on graphics processors (GPGPU) and describe the hardware and software developments that have led to the recent interest in this field. We then aim the main body of this report at two separate audiences. First, we describe the techniques used in mapping general-purpose computation to graphics hardware. We believe these techniques will be generally useful for researchers who plan to develop the next generation of GPGPU algorithms and techniques. Second, we survey and categorize the latest developments in general-purpose application development on graphics hardware. - Proc. 13th VLDB Conf , 1987 "... The problem of indexing multidimensional objects is considered. First, a classification of existing methods is given along with a discussion of the major issues involved in multidimensional data indexing. Second, a variation to Guttman’s R-trees (R+-trees) that avoids overlapping rectangles in inte ..." Cited by 297 (33 self) Add to MetaCart The problem of indexing multidimensional objects is considered.
First, a classification of existing methods is given along with a discussion of the major issues involved in multidimensional data indexing. Second, a variation to Guttman’s R-trees (R+-trees) that avoids overlapping rectangles in intermediate nodes of the tree is introduced. Algorithms for searching, updating, initial packing and reorganization of the structure are discussed in detail. Finally, we provide analytical results indicating that R+-trees achieve up to 50% savings in disk accesses compared to an R-tree when searching files of thousands of rectangles. , 1999 "... Two different techniques of browsing through a collection of spatial objects stored in an R-tree spatial data structure on the basis of their distances from an arbitrary spatial query object are compared. The conventional approach is one that makes use of a k-nearest neighbor algorithm where k is kn ..." Cited by 291 (19 self) Add to MetaCart Two different techniques of browsing through a collection of spatial objects stored in an R-tree spatial data structure on the basis of their distances from an arbitrary spatial query object are compared. The conventional approach is one that makes use of a k-nearest neighbor algorithm where k is known prior to the invocation of the algorithm. Thus if m > k neighbors are needed, the k-nearest neighbor algorithm needs to be reinvoked for m neighbors, thereby possibly performing some redundant computations. The second approach is incremental in the sense that having obtained the k nearest neighbors, the (k+1)st neighbor can be obtained without having to calculate the k+1 nearest neighbors from scratch. The incremental approach finds use when processing complex queries where one of the conditions involves spatial proximity (e.g., the nearest city to Chicago with population greater than a million), in which case a query engine can make use of a pipelined strategy.
A general incremental nearest neighbor algorithm is presented that is applicable to a large class of hierarchical spatial data structures. This algorithm is adapted to the R-tree and its performance is compared to an existing k-nearest neighbor algorithm for R-trees [45]. Experiments show that the incremental nearest neighbor algorithm significantly outperforms the k-nearest neighbor algorithm for distance browsing queries in a spatial database that uses the R-tree as a spatial index. Moreover, the incremental nearest neighbor algorithm also usually outperforms the k-nearest neighbor algorithm when applied to the k-nearest neighbor problem for the R-tree, although the improvement is not nearly as large as for distance browsing queries. In fact, we prove informally that, at any step in its execution, the incremental... , 1995 "... An overview is presented of the use of spatial data structures in spatial databases. The focus is on hierarchical data structures, including a number of variants of quadtrees, which sort the data with respect to the space occupied by it. Such techniques are known as spatial indexing methods. Hierarch ..." Cited by 287 (13 self) Add to MetaCart An overview is presented of the use of spatial data structures in spatial databases. The focus is on hierarchical data structures, including a number of variants of quadtrees, which sort the data with respect to the space occupied by it. Such techniques are known as spatial indexing methods. Hierarchical data structures are based on the principle of recursive decomposition. They are attractive because they are compact and depending on the nature of the data they save space as well as time and also facilitate operations such as search. Examples are given of the use of these data structures in the representation of different data types such as regions, points, rectangles, lines, and volumes. - In: Computer Graphics (SIGGRAPH 91 Proceedings , 1991 "...
The number of polygons comprising interesting architectural models is many more than can be rendered at interactive frame rates. However, due to occlusion by opaque surfaces (e.g., walls), only a small fraction of a typical model is visible from most viewpoints. We describe a method of visibility pre ..." Cited by 281 (15 self) Add to MetaCart The number of polygons comprising interesting architectural models is many more than can be rendered at interactive frame rates. However, due to occlusion by opaque surfaces (e.g., walls), only a small fraction of a typical model is visible from most viewpoints. We describe a method of visibility preprocessing that is efficient and effective for axis-aligned or axial architectural models. A model is subdivided into rectangular cells whose boundaries coincide with major opaque surfaces. Non-opaque portals are identified on cell boundaries, and used to form an adjacency graph connecting the cells of the subdivision. Next, the cell-to-cell visibility is computed for each cell of the subdivision, by linking pairs of cells between which unobstructed sightlines exist. During an interactive walkthrough phase, an observer with a known position and view cone moves through the model. At each frame, the cell containing the observer is identified, and the contents of potentially visible cells are retrieved from storage. The set of potentially visible cells is further reduced by culling it against the observer's view cone, producing the eye-to-cell visibility. The contents of the remaining visible cells are then sent to a graphics pipeline for hidden-surface removal and rendering. Tests on moderately complex 2-D and 3-D axial models reveal substantially reduced rendering loads. - IEEE TRANSACTIONS ON MEDICAL IMAGING , 2000 "... The large size of many volume data sets often prevents visualization algorithms from providing interactive rendering.
The use of hierarchical data structures can ameliorate this problem by storing summary information to prevent useless exploration of regions of little or no current interest within ..." Cited by 274 (3 self) Add to MetaCart The large size of many volume data sets often prevents visualization algorithms from providing interactive rendering. The use of hierarchical data structures can ameliorate this problem by storing summary information to prevent useless exploration of regions of little or no current interest within the volume. This paper discusses research into the use of the octree hierarchical data structure when the regions of current interest can vary during the application, and are not known a priori. Octrees are well suited to the six-sided cell structure of many volumes. A new space-efficient design is introduced for octree representations of volumes whose resolutions are not conveniently a power of two; octrees following this design are called branch-on-need octrees (BONOs). Also, a caching method is described that essentially passes information between octree neighbors whose visitation times may be quite different, then discards it when its useful life is over. Using the application of octrees to isosurface generation as a focus, space and time comparisons for octree-based versus more traditional "marching" methods are presented. , 1987 "... The problem of indexing multidimensional objects is considered. First, a classification of existing methods is given along with a discussion of the major issues involved in multidimensional data indexing. Second, a variation to Guttman's R-trees (R+-trees) that avoids overlapping rectangles in inter ..." Cited by 259 (14 self) Add to MetaCart The problem of indexing multidimensional objects is considered. First, a classification of existing methods is given along with a discussion of the major issues involved in multidimensional data indexing.
Second, a variation to Guttman's R-trees (R+-trees) that avoids overlapping rectangles in intermediate nodes of the tree is introduced. Algorithms for searching, updating, initial packing and reorganization of the structure are discussed in detail. Finally, we provide analytical results indicating that R+-trees achieve up to 50% savings in disk accesses compared to an R-tree when searching files of thousands of rectangles. 1 Also with University of Maryland Systems Research Center. 2 Also with University of Maryland Institute for Advanced Computer Studies (UMIACS). This research was sponsored partially by the National Science Foundation under Grant CDR-85-00108. 1. Introduction It has been recognized in the past that existing Database Management Systems (DBMSs) do not ... "... ... process a set S of points in R^d so that the points of S lying inside a query region can be reported or counted quickly. We survey the known techniques and data structures for range searching and describe their application to other related searching problems. ..." Cited by 256 (40 self) Add to MetaCart ... process a set S of points in R^d so that the points of S lying inside a query region can be reported or counted quickly. We survey the known techniques and data structures for range searching and describe their application to other related searching problems. , 2008 "... In this article, we give an overview of efficient algorithms for the approximate and exact nearest neighbor problem. The goal is to preprocess a dataset of objects (e.g., images) so that later, given a new query object, one can quickly return the dataset object that is most similar to the query. The ..." Cited by 237 (4 self) Add to MetaCart In this article, we give an overview of efficient algorithms for the approximate and exact nearest neighbor problem.
The goal is to preprocess a dataset of objects (e.g., images) so that later, given a new query object, one can quickly return the dataset object that is most similar to the query. The problem is of significant interest in a wide variety of areas. - In ACM CIKM , 1993 "... – main idea; file structure – algorithms: insertion/split – deletion – search: range, nn, spatial joins – performance analysis – variations (packed; hilbert;...) Problem • Given a collection of geometric objects (points, lines, polygons,...) • organize them on ..." Cited by 220 (16 self) Add to MetaCart – main idea; file structure – algorithms: insertion/split – deletion – search: range, nn, spatial joins – performance analysis – variations (packed; hilbert;...) Problem • Given a collection of geometric objects (points, lines, polygons,...) • organize them on disk, to answer spatial queries (range, nn, etc)
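The distance-browsing idea in the Hjaltason–Samet abstract above, producing neighbors one at a time in increasing distance instead of fixing k up front, can be mimicked in a toy way with lazy evaluation. The sketch below is my own illustration, not the paper's R-tree algorithm (which achieves incrementality without sorting everything, via a best-first priority-queue traversal of the index); here laziness only lets the consumer decide how many neighbors to force.

```haskell
import Data.List (sortBy)
import Data.Ord  (comparing)

type Point = (Double, Double)

-- Squared Euclidean distance; the square root is unnecessary
-- for ordering points by distance.
squaredDist :: Point -> Point -> Double
squaredDist (x1, y1) (x2, y2) = (x1 - x2) ^ 2 + (y1 - y2) ^ 2

-- A stream of neighbors in increasing distance from the query point:
-- `take k` gives the k nearest, and taking one more neighbor does not
-- require re-invoking anything, mirroring the distance-browsing API.
neighborStream :: Point -> [Point] -> [Point]
neighborStream q = sortBy (comparing (squaredDist q))
```

For example, `head (neighborStream q pts)` is the nearest neighbor of `q`, and a pipelined query can keep consuming the stream until some non-spatial condition (like the population predicate in the abstract) is satisfied.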
Next: Cemekl Up: Additive Model Components Previous: Bremss

c6mekl is a multi-temperature mekal model using a sixth-order Chebyshev polynomial for the differential emission measure. The DEM is not constrained to be positive. The switch parameter determines whether the mekal code will be run to calculate the model spectrum for each temperature or whether the model spectrum will be interpolated from a pre-calculated table. The former is slower but more accurate. The reference for this model is Singh et al. (1996, ApJ, 456, 766). c6pmekl differs by using the exponential of the 6^th order Chebyshev polynomial.

c6mekl and c6pmekl use abundances relative to the Solar abundances set by the abund command. The variants c6vmkl and c6pvmkl, with polynomial and exponential polynomial respectively, allow the user to specify 14 elemental abundances.

For c6mekl and c6pmekl the parameters are:

par1-6: Chebyshev polynomial coefficients
par7: H density (cm^-3)
par8: abundance wrt Solar
par9: Redshift
par10: switch (1)
norm: Normalization

While for c6vmkl and c6pvmkl the parameters are:

par1-6: Chebyshev polynomial coefficients
par7: H density (cm^-3)
par8-21: Abundances of He, C, N, O, Ne, Na, Mg, Al, Si, S, Ar, Ca, Fe, Ni wrt Solar (defined by the abund command)
par22: Redshift
par23: switch (1)
norm: Normalization
Rosemary C. Farley, PHD
Associate Professor
Department: Mathematics
Email: rosemary.farley@manhattan.edu
Phone: 718-862-7380
Office: RLC 2FL

PHD, New York University
MS, New York University
BS, College of Mount St. Vincent

At the present, I am particularly interested in the use of technology in the mathematics classroom. The introduction of technology into the mathematics classroom has created an opportunity for creative teaching methods. I am particularly proud of my participation in the Conferences on Technology in Collegiate Mathematics. This annual conference has been cited by the Mathematical Association of America as being a particularly important annual event. It is at these conferences that ideas are shared and information about successes and failures is disseminated. Each year, I learn so much. Our department participates in the Spuyten Duyvil Undergraduate Mathematics Conference each year. I am an active mentor for the students who present the results of mathematical research conducted during the semester. The day is a learning experience for all of us.

Publications & Professional Activities

Recent Publications:
• Farley, Rosemary. Using The Computer Algebra System Maple To Generate Research Questions For Pre-Service Teachers In A Capstone Course. PRIMUS, Special Issue on the Capstone Course. May 10, 2013, pp. 367-376.
• Farley, Rosemary & Tiffany, P. A Four Step Method for Creating Animations. Computers in Education Journal. Vol. 4 No. 3 July--September 2013, 59-62.
• Farley, Rosemary. Visualizing Linear Transformations Using the Condition Number of a Matrix. Computers in Education Journal. Vol. 1 No. 1 January-March 2010, 71-76.

Selected Presentations:
• Student Projects to Visualize Iteration Patterns of Matrices with Complex Eigenvalues was presented at the Joint Meeting of the Mathematical Association of America and the American Mathematical Society, Baltimore, Maryland in January, 2014.
• A Pilot Program Using an e-book and Online Homework in the Calculus Sequence (with Patrice Tiffany) was presented at the International Conference of Technology in Collegiate Mathematics in April, • Using MyMathLab in Calculus (with Patrice Tiffany) was presented at a Pearson professional development opportunity event in April, 2013. • Creating Animations (with Patrice Tiffany) was presented at the International Conference on Technology in Collegiate Mathematics in March, 2012. • A Capstone Course for Secondary Education Students was presented at the Joint Meeting of the Mathematical Association of America and the American Mathematical Society, Boston, Massachusetts in January, 2012. • Linear Transformations and the Condition Number of Associated Matrices was presented at the Maple Conference Waterloo in July, 2006. Honors & Awards Lasallian Educator of the Year 2001-2002. Professional Memberships Mathematical Association of America Courses Taught/Teaching MATH 151 Modern Mathematics MATH 153 Linear Math Analysis MATH 154 Calculus for Business Decisions MATH 185 Calculus I MATH 186 Calculus II MATH 187 Honors Calculus I MATH 188 Honors Calculus II MATH 230 Elementary Statistics MATH 243 Foundations for Higher Mathematics MATH 272 Linear Algebra I MATH 285 Calculus III MATH 286 Differential Equations MATH 287 Honors Calculus III MATH 331 Probability MATH 385 Vector Calculus MATH 422 Seminar for Mathematics Education MATH 432 Statistical Inference MATH 471 Linear Algebra II MATH 490 Complex Analysis
Topology Question
January 18th 2009, 04:31 PM

On the plane $\Re^2$, let $\beta = \{ (a, b) \times (c, d) \subset \Re^2 \mid a < b,\ c < d \}$.

(a) Show that $\beta$ is a basis for a topology on $\Re^2$.

January 18th 2009, 11:01 PM

My attempt at this problem is as follows. To show $\beta$ is a basis, we need to show that:

(1) For each $x \in \Re^2$, there is at least one basis element of $\beta$ containing $x$. Let $x = (i,j) \in \Re^{2}$, with $i,j \in \Re$. Then there is a basis element containing $x$, namely $(i - \frac{1}{n}, i + \frac{1}{n}) \times (j - \frac{1}{n}, j+\frac{1}{n})$ for any positive integer $n$.

(2) If $x$ belongs to the intersection of two basis elements $\beta_{1}$ and $\beta_{2}$, then there is a basis element $\beta_{3}$ containing $x$ such that $\beta_{3} \subset \beta_{1} \cap \beta_{2}$.

There are several cases of intersections; in each, we find a basis element satisfying the above. Write
$\beta_{1} = (a, b) \times (c, d) \subset \Re^2$, $(a < b,\ c < d)$ and
$\beta_{2} = (i, j) \times (p, q) \subset \Re^2$, $(i < j,\ p < q)$.

For instance, suppose $a \leq i \leq b \leq j$ and $c \leq p \leq d \leq q$. If $x$ belongs to the above $\beta_{1} \cap \beta_{2}$, then $x$ belongs to the following $\beta_{3}$:
$\beta_{3} = (i, b) \times (p, d) \subset \Re^2$, $(i < b,\ p < d)$.

In each case, $\beta_{3}$ can be described as $\beta_{3} = (l, m) \times (n, o) \subset \Re^2$, $(l < m,\ n < o)$. Thus, $\beta$ is a basis for $\Re^{2}$.
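The case analysis in step (2) can be collapsed into a single formula (this general form is my addition, not part of the original post): whenever $x \in \beta_1 \cap \beta_2$, the intersection of the two open boxes is itself an open box containing $x$, so one may always take $\beta_3 = \beta_1 \cap \beta_2$.

```latex
\beta_1 \cap \beta_2
  = \bigl(\max(a,i),\, \min(b,j)\bigr) \times \bigl(\max(c,p),\, \min(d,q)\bigr),
\qquad
x \in \beta_1 \cap \beta_2
\;\Longrightarrow\;
\max(a,i) < \min(b,j) \ \text{and}\ \max(c,p) < \min(d,q).
```

This is exactly the observation that open boxes are closed under nonempty intersection, which makes enumerating the overlap cases unnecessary.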
Picking a random element from a frequency list

A week or so ago, I wrote Lorem Markdownum: a small webapp to generate random text (like the many lorem ipsum generators out there), but in markdown format. This blogpost explains a mildly interesting algorithm I used to pick a random element from a frequency list. It is written in Literate Haskell so you should be able to drop it into a file and run it – the raw version can be found here.

> import           Data.List       (sortBy)
> import           Data.Ord        (comparing)
> import qualified Data.Map.Strict as M
> import           System.Random   (randomRIO)

The problem

Lorem ipsum generators usually create random but realistic-looking text by using sample data, on which a model is trained. A very simple example of that is just to pick words according to their frequencies. Let us take some sample data from a song that fundamentally changed the music industry in the early 2000s:

Badger badger badger
Mushroom mushroom
Badger badger badger
Panic, a snake
Badger badger badger
Oh, it’s a snake!

This gives us the following frequency list:

> badgers :: [(String, Int)]
> badgers =
>     [ ("a",        2)
>     , ("badger",   9)
>     , ("it's",     1)
>     , ("mushroom", 2)
>     , ("oh",       1)
>     , ("panic",    1)
>     , ("snake",    2)
>     ]

The sum of all the frequencies in this list is 18. This means that we will e.g. pick “badger” with a chance of 9/18. We can naively implement this by expanding the list so it contains the items in the given frequencies and then picking one randomly.

> decodeRle :: [(a, Int)] -> [a]
> decodeRle []            = []
> decodeRle ((x, f) : xs) = replicate f x ++ decodeRle xs

> sample1 :: [(a, Int)] -> IO a
> sample1 freqs = do
>     let expanded = decodeRle freqs
>     idx <- randomRIO (0, length expanded - 1)
>     return $ expanded !! idx

This is obviously extremely inefficient, and it is not that hard to come up with a better definition: we do not expand the list, and instead use a specialised indexing function for frequency lists.
> indexFreqs :: Int -> [(a, Int)] -> a
> indexFreqs _   []            = error "please reboot computer"
> indexFreqs idx ((x, f) : xs)
>     | idx < f   = x
>     | otherwise = indexFreqs (idx - f) xs

> sample2 :: [(a, Int)] -> IO a
> sample2 freqs = do
>     idx <- randomRIO (0, sum (map snd freqs) - 1)
>     return $ indexFreqs idx freqs

However, sample2 is still relatively slow when our sample data consists of a large amount of text (imagine what happens if we have a few thousand different words). Can we come up with a better but still elegant solution?

Note that lorem ipsum generators generally employ more complicated strategies than just picking a word according to the frequencies in the sample data. Usually, algorithms based on Markov Chains are used. But even when this is the case, picking a word with some given frequencies is still a subproblem that needs to be solved.

Frequency Trees

It is easy to see why sample2 is relatively slow: indexing in a linked list is expensive. Purely functional programming languages usually solve this by using trees instead of lists where fast indexing is required. We can use a similar approach here.

A leaf in the tree simply holds an item and its frequency. A branch also holds a frequency – namely, the sum of the frequencies of its children. By storing this computed value, we will be able to write a fast indexing method.
> data FreqTree a
>     = Leaf   !Int !a
>     | Branch !Int (FreqTree a) (FreqTree a)
>     deriving (Show)

A quick utility function to get the sum of the frequencies in such a tree:

> sumFreqs :: FreqTree a -> Int
> sumFreqs (Leaf   f _)   = f
> sumFreqs (Branch f _ _) = f

Let us look at the tree for badgers (we will discuss how this tree is computed later):

[figure: a frequency tree for the badgers list]

Once we have this structure, it is not that hard to write a faster indexing function, which is basically a search in a binary tree:

> indexFreqTree :: Int -> FreqTree a -> a
> indexFreqTree idx tree = case tree of
>     (Leaf _ x)     -> x
>     (Branch _ l r)
>         | idx < sumFreqs l -> indexFreqTree idx l
>         | otherwise        -> indexFreqTree (idx - sumFreqs l) r

> sample3 :: FreqTree a -> IO a
> sample3 tree = do
>     idx <- randomRIO (0, sumFreqs tree - 1)
>     return $ indexFreqTree idx tree

There we go! We intuitively see this method is faster since we only have to walk through a few nodes – namely, those on the path from the root node to the specific leaf node. But how fast is this, exactly? This depends on how we build the tree.

Well-balanced trees

Given a list with frequencies, we can build a nicely balanced tree (i.e., in the sense in which binary tries are balanced). This minimizes the longest path from the root to any node. We first have a simple utility function to clean up such a list of frequencies:

> uniqueFrequencies :: Ord a => [(a, Int)] -> [(a, Int)]
> uniqueFrequencies =
>     M.toList . M.fromListWith (+) . filter ((> 0) . snd)

And then we have the function that actually builds the tree. For a singleton list, we just return a leaf. Otherwise, we simply split the list in half, build trees out of those halves, and join them under a new parent node. Computing the total frequency of the parent node (freq) is done a bit inefficiently, but that is not the focus at this point.

> balancedTree :: Ord a => [(a, Int)] -> FreqTree a
> balancedTree = go . uniqueFrequencies
>   where
>     go []       = error "balancedTree: Empty list"
>     go [(x, f)] = Leaf f x
>     go xs       =
>         let half     = length xs `div` 2
>             (ys, zs) = splitAt half xs
>             freq     = sum $ map snd xs
>         in  Branch freq (go ys) (go zs)

Huffman-balanced trees

However, well-balanced trees might not be the best solution for this problem. It is generally known that few words in most natural languages are extremely commonly used (e.g. “the”, “a”, or in our case, “badger”) while most words are rarely used. For our tree, it would make sense to have the more commonly used words closer to the root of the tree – in that case, it seems intuitive that the expected number of nodes visited to pick a random word will be lower.

It turns out that this idea exactly corresponds to a Huffman tree. In a Huffman tree, we want to minimize the expected code length, which equals the expected path length. Here, we want to minimize the expected number of nodes visited during a lookup – which is precisely the expected path length!

The algorithm to construct such a tree is surprisingly simple. We start out with a list of trees: namely, one singleton leaf tree for each element in our frequency list. Then, given this list, we take the two trees which have the lowest total sums of frequencies (sumFreqs), and join these using a branch node. This new tree is then inserted back into the list. This algorithm is repeated until we are left with only a single tree in the list: this is our final frequency tree.

> huffmanTree :: Ord a => [(a, Int)] -> FreqTree a
> huffmanTree = go . map (\(x, f) -> Leaf f x) . uniqueFrequencies
>   where
>     go trees = case sortBy (comparing sumFreqs) trees of
>         []             -> error "huffmanTree: Empty list"
>         [ft]           -> ft
>         (t1 : t2 : ts) ->
>             go $ Branch (sumFreqs t1 + sumFreqs t2) t1 t2 : ts

This yields the following tree for our example:

[figure: the Huffman-balanced frequency tree for the badgers list]

Is the second approach really better?
Although Huffman trees are well-studied, for our example, we only intuitively explained why the second approach is probably better. Let us see if we can justify this claim a bit more, and find out how much better it is.

The expected path length L of an item in a balanced tree can be very easily approached, since it is just a binary tree and we all know those (suppose N is the number of unique words):

L ≈ log2 N

However, if we have a tree we built using the huffmanTree, it is not that easy to calculate the expected path length. We know that for a Huffman tree, the path length should approximate the entropy, which, in our case, gives us an approximation for the path length for an item with a specified frequency f:

l(f) ≈ log2 (F / f)

Where F is the total sum of all frequencies. If we assume that we know the frequency for every item, the expected path length is simply a weighted mean:

E[L] ≈ Σ_i (f_i / F) · log2 (F / f_i)

This is where it gets interesting. It turns out that the frequency of words in a natural language is a well-researched topic, and predicted by something called Zipf's law. This law tells us that the frequency of the item of rank k can be estimated by:

f(k) ≈ F / (k^s · H(N, s))

Where s characterises the distribution and is typically very close to 1 for natural languages. H is the generalised harmonic number:

H(N, s) = Σ_{n=1..N} 1 / n^s

If we substitute in the definition for the frequencies into the formula for the expected path length, we get:

E[L] ≈ Σ_{k=1..N} (1 / (k^s · H(N, s))) · log2 (k^s · H(N, s))

This is something we can work with! If we plot this for s = 1, it is clear that the expected path length for a frequency tree built using huffmanTree is expected to be significantly shorter than for a frequency tree built using balancedTree, even for relatively small N. Yay!

Since the algorithm now works, the conclusion is straightforward. Lorem markdownum constitit foret tibi Phoebes propior poenam. Nostro sub flos auctor ventris illa choreas magnis at ille. Haec his et tuorum formae obstantes et viribus videret vertit, spoliavit iam quem neptem corpora calidis, in. Arcana ut puppis, ad agitur telum conveniant quae ardor?
Adhuc arcu acies corpore amplexans equis non velamina buxi gemini est somni. Thanks to Simon Meier, Francesco Mazzoli and some other people at Erudify for the interesting discussions about this topic!
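As a postscript for readers who want to port the idea: the two essential pieces, the Huffman-style merge and the tree lookup, translate directly to other languages. Below is a rough Python sketch of the same construction (illustrative only, not part of the original post; it uses a heap instead of re-sorting, which changes the asymptotics of construction but not the resulting tree shape):

```python
import heapq
import random

# A tree is a pair (total_frequency, node), where node is either
# ("leaf", item) or ("branch", left_tree, right_tree).

def huffman_tree(freqs):
    # Start with one leaf per (item, frequency) pair, then repeatedly
    # merge the two lightest trees under a branch node.
    heap = [(f, i, ("leaf", x)) for i, (x, f) in enumerate(freqs)]
    heapq.heapify(heap)
    counter = len(heap)  # unique tie-breaker so payloads are never compared
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, ("branch", (f1, t1), (f2, t2))))
        counter += 1
    f, _, t = heap[0]
    return (f, t)

def index_tree(idx, tree):
    # Binary search guided by the left subtree's total frequency.
    f, node = tree
    if node[0] == "leaf":
        return node[1]
    _, left, right = node
    if idx < left[0]:
        return index_tree(idx, left)
    return index_tree(idx - left[0], right)

def sample3(tree):
    return index_tree(random.randrange(tree[0]), tree)
```

For example, `huffman_tree([("the", 3), ("badger", 2), ("a", 1)])` yields a tree with total frequency 6, and indexing it with 0 through 5 returns each word as many times as its frequency.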
A Characterization of 4-Centralizer Groups

Chinese Journal of Mathematics Volume 2013 (2013), Article ID 871072, 2 pages

Research Article

Department of Mathematics, North-Eastern Hill University, Permanent Campus, Shillong, Meghalaya 793022, India

Received 23 August 2013; Accepted 9 October 2013

Academic Editors: M. Coppens, W. Shi, and Z. Wang

Copyright © 2013 Jutirekha Dutta. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A finite or infinite group is called an n-centralizer group if it has n distinct centralizers. In this paper, we prove that a finite or infinite group G is a 4-centralizer group if and only if G/Z(G) is isomorphic to Z_2 × Z_2. This extends a result of Belcastro and Sherman.

1. Introduction

Given a finite or infinite group G and x ∈ G, the set C_G(x) = {y ∈ G : xy = yx} is called the centralizer of x in G. The set of all centralizers in G is denoted by Cent(G). A group G is called an n-centralizer group if |Cent(G)| = n. It is easy to see that one-centralizer groups are precisely the abelian groups. Characterization of finite groups in terms of the number of distinct centralizers has been an interesting topic of research in recent years (see [1–8]). In [7], Belcastro and Sherman have proved the nonexistence of finite n-centralizer groups for n = 2, 3. However, their proof also shows the nonexistence of infinite n-centralizer groups for n = 2, 3. Belcastro and Sherman [7] also characterize all finite 4-centralizer groups. More precisely, they proved that a finite group G is a 4-centralizer group if and only if G/Z(G) is isomorphic to Z_2 × Z_2, where Z(G) is the center of G and Z_2 is the cyclic group having two elements. In this paper, we extend the same characterization to infinite groups using elementary techniques of group theory. Throughout this paper G will denote a finite or infinite group.
Recall that for any group , its center is the intersection of all centralizers in . Also is the union of all the centralizers of noncentral elements of . It may be mentioned here that a finite or infinite group can not be written as union of two of its proper subgroups. These facts have important role in proving the main theorem of this paper. 2. Main Result In this section, we proof the following main theorem of this paper. Theorem 1. A finite or infinite group is a 4-centralizer group if and only if . Proof. Let be a 4-centralizer finite or infinite group and , where , , and are noncentral elements of . Then , since is the union of its proper centralizers. Let us consider the centralizer . Then will be one of , , , or . If , then . This implies that for some . Therefore, we get , a contradiction. If , then gives Therefore, . Hence, , a contradiction, as a group can not be written as union of two of its proper subgroups. Similarly, it can be seen that . Thus, and so . In a similar way it can be seen that and so . We will now show that . Clearly, . Let . Then Therefore, and so . Thus, In a similar way, it can be seen that , and . Let us consider the right cosets , , , and , where . As is a noncentral element of it follows that , , and . If , then we have for some . Therefore, we get , a contradiction. Again, if , then which gives , a contradiction. Similarly, it can be seen that . Thus the cosets , , , and are mutually disjoint. Clearly, . Let then . Suppose that then without any loss of generality we may assume that . Let us consider the centralizer . Then is one of , , , or . Case 1. Let . In this case, and so which gives . Therefore, , a contradiction. Case 2. Let . In this case, which gives . Therefore, , a contradiction. Case 3. Let . In this case, which gives . Therefore, , a contradiction. Hence, , since is a 4-centralizer group. Now, implies that which gives since . Thus, . Also, implies that . Hence, and so for some . In other words, . Therefore, . 
Hence, . This shows that . Finally, the fact that is nonabelian gives . Conversely, let . Therefore, there are four right cosets of in , namely, , , , and , where , , and are distinct noncentral elements of . So, for any element , either or or or . If , then . If , then for some , therefore . Similarly, gives and gives . Hence, has at most four centralizers, namely, , , , and . Since we have . This completes the proof. We conclude this paper by the following remark. In [7], Belcastro and Sherman proved Theorem 1 for finite groups only. Note that their proof can not be extended to infinite groups as they have used the finiteness of the group extensively. Belcastro and Sherman [7] have also obtained the structure of 5-centralizer finite groups as follows. A finite group is a 5-centralizer group if and only if or , where is the cyclic group having three elements and the symmetric group on three symbols. Note that the if part of the above result also holds for infinite groups. The same proof of Belcastro and Sherman [7] holds for infinite groups. However, the proof of the only if part of the above result is not known for infinite groups. It may be interesting to study the infinite -centralizer groups for . This paper is a part of the author’s M. Phil. thesis done under the supervision of Professor A. K. Das. The author would like to thank him for his helpful suggestions. The author is grateful to all the referees for their valuable comments and suggestions. 1. A. Abdollahi, S. M. J. Amiri, and A. M. Hassanabadi, “Groups with specific number of centralizers,” Houston Journal of Mathematics, vol. 33, no. 1, pp. 43–57, 2007. View at Scopus 2. A. R. Ashrafi, “On finite groups with a given number of centralizers,” Algebra Colloquium, vol. 7, no. 2, pp. 139–146, 2000. View at Scopus 3. A. R. Ashrafi, “Counting the centralizers of some finite groups,” The Korean Journal of Computational & Applied Mathematics, vol. 7, no. 1, pp. 115–124, 2000. 4. A. R. Ashrafi and B. 
Taeri, “On finite groups with a certain number of centralizers,” Journal of Applied Mathematics and Computing, vol. 17, no. 1-2, pp. 217–227, 2005. View at Scopus 5. A. R. Ashrafi and B. Taeri, “On finite groups with exactly seven element centralizers,” Journal of Applied Mathematics and Computing, vol. 22, no. 1-2, pp. 403–410, 2006. View at Scopus 6. S. J. Baishya, “On finite groups with specific number of centralizers,” International Electronic Journal of Algebra, vol. 13, pp. 53–62, 2013. 7. S. M. Belcastro and G. J. Sherman, “Counting centralizers in finite groups,” Mathematics Magazine, vol. 67, no. 5, pp. 366–374, 1994. 8. M. Zarrin, “On element-centralizers in finite groups,” Archiv der Mathematik, vol. 93, no. 6, pp. 497–503, 2009. View at Publisher · View at Google Scholar · View at Scopus
7 Unit Lists: Conversion to Sums of Units Outside of the SI, it is sometimes desirable to convert a single unit to a sum of units—for example, feet to feet plus inches. The conversion from sums of units was described in Sums and Differences of Units, and is a simple matter of adding the units with the ‘+’ sign: You have: 12 ft + 3 in + 3|8 in You want: ft * 12.28125 / 0.081424936 Although you can similarly write a sum of units to convert to, the result will not be the conversion to the units in the sum, but rather the conversion to the particular sum that you have entered: You have: 12.28125 ft You want: ft + in + 1|8 in * 11.228571 / 0.089058524 The unit expression given at the ‘You want:’ prompt is equivalent to asking for conversion to multiples of ‘1 ft + 1 in + 1|8 in’, which is 1.09375 ft, so the conversion in the previous example is equivalent to You have: 12.28125 ft You want: 1.09375 ft * 11.228571 / 0.089058524 In converting to a sum of units like miles, feet and inches, you typically want the largest integral value for the first unit, followed by the largest integral value for the next, and the remainder converted to the last unit. You can do this conversion easily with units using a special syntax for lists of units. You must list the desired units in order from largest to smallest, separated by the semicolon (‘;’) character: You have: 12.28125 ft You want: ft;in;1|8 in 12 ft + 3 in + 3|8 in The conversion always gives integer coefficients on the units in the list, except possibly the last unit when the conversion is not exact: You have: 12.28126 ft You want: ft;in;1|8 in 12 ft + 3 in + 3.00096 * 1|8 in The order in which you list the units is important: You have: 3 kg You want: oz;lb 105 oz + 0.051367866 lb You have: 3 kg You want: lb;oz 6 lb + 9.8218858 oz Listing ounces before pounds produces a technically correct result, but not a very useful one. You must list the units in descending order of size in order to get the most useful result. 
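The "largest integral value first" behaviour described above is a simple greedy decomposition. As a rough illustration (this is not the actual GNU units source, just a sketch of the rule), the same computation can be written in a few lines of Python, with unit sizes expressed in a common base unit:

```python
def to_unit_list(value, units):
    # units: (name, size) pairs in descending size, sizes expressed in a
    # common base unit; value is given in that same base unit.
    parts = []
    for i, (name, size) in enumerate(units):
        if i < len(units) - 1:
            count = int(value // size)  # integer coefficient for this unit
            value -= count * size
        else:
            count = value / size        # the remainder goes to the last unit
        parts.append((count, name))
    return parts

# 12.28125 ft expressed as ft;in;1|8 in (all sizes in inches):
print(to_unit_list(12.28125 * 12, [("ft", 12), ("in", 1), ("1|8 in", 0.125)]))
# [(12, 'ft'), (3, 'in'), (3.0, '1|8 in')]  -- i.e. 12 ft + 3 in + 3|8 in
```

This reproduces the 12 ft + 3 in + 3|8 in decomposition shown above; a non-integral final coefficient corresponds to the inexact-conversion case.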
Ending a unit list with the separator ‘;’ has the same effect as repeating the last unit on the list, so ‘ft;in;1|8 in;’ is equivalent to ‘ft;in;1|8 in;1|8 in’. With the example above, this gives You have: 12.28126 ft You want: ft;in;1|8 in; 12 ft + 3 in + 3|8 in + 0.00096 * 1|8 in in effect separating the integer and fractional parts of the coefficient for the last unit. If you instead prefer to round the last coefficient to an integer you can do this with the --round (-r) option. With the previous example, the result is You have: 12.28126 ft You want: ft;in;1|8 in 12 ft + 3 in + 3|8 in (rounded down to nearest 1|8 in) When you use the -r option, repeating the last unit on the list has no effect (e.g., ‘ft;in;1|8 in;1|8 in’ is equivalent to ‘ft;in;1|8 in’), and hence neither does ending a list with a ‘;’. With a single unit and the -r option, a terminal ‘;’ does have an effect: it causes units to treat the single unit as a list and produce a rounded value for the single unit. Without the extra ‘;’, the -r option has no effect on single unit conversions. This example shows the output using the -r option: You have: 12.28126 ft You want: in * 147.37512 / 0.0067854058 You have: 12.28126 ft You want: in; 147 in (rounded down to nearest in) Each unit that appears in the list must be conformable with the first unit on the list, and of course the listed units must also be conformable with the unit that you enter at the ‘You have:’ prompt. You have: meter You want: ft;kg conformability error ft = 0.3048 m kg = 1 kg You have: meter You want: lb;oz conformability error 1 m 0.45359237 kg In the first case, units reports the disagreement between units appearing on the list. In the second case, units reports disagreement between the unit you entered and the desired conversion. This conformability error is based on the first unit on the unit list. 
Other common candidates for conversion to sums of units are angles and time: You have: 23.437754 deg You want: deg;arcmin;arcsec 23 deg + 26 arcmin + 15.9144 arcsec You have: 7.2319 hr You want: hr;min;sec 7 hr + 13 min + 54.84 sec In North America, recipes for cooking typically measure ingredients by volume, and use units that are not always convenient multiples of each other. Suppose that you have a recipe for 6 and you wish to make a portion for 1. If the recipe calls for 2 1/2 cups of an ingredient, you might wish to know the measurements in terms of measuring devices you have available; you could use units and enter You have: (2+1|2) cup / 6 You want: cup;1|2 cup;1|3 cup;1|4 cup;tbsp;tsp;1|2 tsp;1|4 tsp 1|3 cup + 1 tbsp + 1 tsp By default, if a unit in a list begins with a fraction of the form 1|x and its multiplier is an integer, the fraction is given as the product of the multiplier and the numerator; for example, You have: 12.28125 ft You want: ft;in;1|8 in; 12 ft + 3 in + 3|8 in In many cases, such as the example above, this is what is wanted, but sometimes it is not. For example, a cooking recipe for 6 might call for 5 1/4 cup of an ingredient, but you want a portion for 2, and your 1-cup measure is not available; you might try You have: (5+1|4) cup / 3 You want: 1|2 cup;1|3 cup;1|4 cup 3|2 cup + 1|4 cup This result might be fine for a baker who has a 1 1/2-cup measure (and recognizes the equivalence), but it may not be as useful to someone with a more limited set of measures, who does not want to do additional calculations, and only wants to know "How many 1/2-cup measures do I need to add?" After all, that's what was actually asked.
With the --show-factor option, the factor will not be combined with a unity numerator, so that you get You have: (5+1|4) cup / 3 You want: 1|2 cup;1|3 cup;1|4 cup 3 * 1|2 cup + 1|4 cup A user-specified fractional unit with a numerator other than 1 is never overridden, however—if a unit list specifies ‘3|4 cup;1|2 cup’, a result equivalent to 1 1/2 cups will always be shown as ‘2 * 3|4 cup’ whether or not the --show-factor option is given. Some applications for unit lists may be less obvious. Suppose that you have a postal scale and wish to ensure that it's accurate at 1 oz, but have only metric calibration weights. You might try You have: 1 oz You want: 100 g;50 g; 20 g;10 g;5 g;2 g;1 g; 20 g + 5 g + 2 g + 1 g + 0.34952312 * 1 g You might then place one each of the 20 g, 5 g, 2 g, and 1 g weights on the scale and hope that it indicates close to You have: 20 g + 5 g + 2 g + 1 g You want: oz; 0.98767093 oz Appending ‘;’ to ‘oz’ forces a one-line display that includes the unit; here the integer part of the result is zero, so it is not displayed. A unit list such as cup;1|2 cup;1|3 cup;1|4 cup;tbsp;tsp;1|2 tsp;1|4 tsp can be tedious to enter. The units program provides shorthand names for some common combinations: hms hours, minutes, seconds dms angle: degrees, minutes, seconds time years, days, hours, minutes and seconds usvol US cooking volume: cups and smaller Using these shorthands, or unit list aliases, you can do the following conversions: You have: anomalisticyear You want: time 1 year + 25 min + 3.4653216 sec You have: 1|6 cup You want: usvol 2 tbsp + 2 tsp You cannot combine a unit list alias with other units: it must appear alone at the ‘You want:’ prompt. 
You can display the definition of a unit list alias by entering it at the ‘You have:’ prompt: You have: dms Definition: unit list, deg;arcmin;arcsec When you specify compact output with --compact, --terse or -t and perform conversion to a unit list, units lists the conversion factors for each unit in the list, separated by semicolons. You have: year You want: day;min;sec Unlike the case of regular output, zeros are included in this output list: You have: liter You want: cup;1|2 cup;1|4 cup;tbsp
January 13th 2010, 01:19 AM

can someone explain this proof to me? let sn be a convergent seq with sn -> l as n tends to infinity. then every subsequence of sn converges to l.

from my notes, it states that by induction, you can easily establish that nk > k for all k.... sorry, im new to the whole idea of proofs so im not sure how do i use induction to prove that nk > k..

January 14th 2010, 01:05 AM

I guess you mean: if $(n_k)_{k\geq 0}$ is a strictly increasing integer-valued sequence, then $n_k\geq k$ for all $k\in\mathbb{N}$. Since $n_0\in\mathbb{N}$, we have $n_0\geq 0$; this is the base case. Let $k\in\mathbb{N}$. Assume that $n_k\geq k$. Let us prove that $n_{k+1}\geq k+1$. Because $(n_k)_k$ is strictly increasing, $n_{k+1}>n_k$, hence $n_{k+1}>n_k\geq k$, and $n_{k+1}\in\mathbb{N}$, thus $n_{k+1}\geq k+1$. This concludes the induction.
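Not part of the thread, but the claim in the answer above is easy to spot-check numerically; a tiny Python sketch (illustrative only):

```python
def check_indices(ns):
    # ns: a strictly increasing sequence of natural numbers n_0 < n_1 < ...
    assert all(a < b for a, b in zip(ns, ns[1:])), "not strictly increasing"
    # The claim proved by induction: n_k >= k for every k.
    return all(n >= k for k, n in enumerate(ns))

print(check_indices([0, 1, 2, 3]))      # True -- the extreme case n_k = k
print(check_indices([2, 3, 5, 8, 13]))  # True
```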
Opposite Vertices Copyright © University of Cambridge. All rights reserved. 'Opposite Vertices' printed from http://nrich.maths.org/ Charlie has been exploring squares with vertices drawn on the points of a square dotty grid. Unfortunately, he rubbed out some of his work and only left behind one side of each square. Can you recreate the squares he drew? Is there more than one possibility? Could any line joining two points be the side of a square whose vertices lie on grid points? How can you be sure? Alison has been drawing squares and their diagonals. Here are some of the diagonals she drew: Can you recreate the squares she drew from her diagonals? Is there more than one possibility? Can you find a method to draw a square when you are just given the diagonal? Could any line joining two points be the diagonal of a square whose vertices lie on grid points? Can you find a way to help Alison decide whether a given line could be the diagonal of such a square? Charlie and Alison played around with rhombuses next. Charlie said "Whenever I join two points to make a line, I can use my line as a side of several different rhombuses". Do you agree with him? When you are given a line, is there a quick way to work out how many rhombuses can be drawn using that line as one of the sides? Alison said "When I draw a rhombus, it shares its diagonal with infinitely many other rhombuses." Do you agree with her? Not all lines can be the diagonal of a rhombus. Is there a quick way to decide which lines could be the diagonal of a rhombus?
Mill Valley Science Tutor ...I am well regarded as an excellent instructor and am able to deal with students with a wide range of abilities in math, finance and economics. I worked a number of years as a data analyst and computer programmer and am well versed in communicating with people who have a variety of mathematical a... 49 Subjects: including electrical engineering, physics, calculus, geometry ...I think this is a wonderful combination: I can relate to students, understand their frustrations and fears, and at the same time I deeply understand math and take great joy in communicating this to reluctant and struggling students, as well as to able students who want to maximize their achieveme... 20 Subjects: including psychology, geometry, biology, statistics ...As the instructor for a required Civics class, I've become fascinated with the foundations of our country and its Constitution, the challenges of maintaining a democratic government. Most importantly, I've become convinced that all teaching involves creating a compelling story in which the stude... 32 Subjects: including anthropology, English, reading, writing ...I've taught 7th and 8th graders in all subjects, 11th and 12th graders in English and I currently teach ESL to adults from different international backgrounds. I've earned two AmeriCorps Education awards for teaching high school in inner city schools in New York and I have over three years experience coaching high school basketball. I'm also a professionally represented novelist. 31 Subjects: including psychology, biology, philosophy, algebra 1 ...I have a Bachelor of Science. I majored in Biology/Pre-med with a minor in Spanish and completed the honors program. I have been a tutor in college for many topics and also co-taught a class. 33 Subjects: including sociology, ecology, Spanish, physical science
In Fig. 29-70, A Long Circular Pipe With Outside ... | Chegg.com In Fig. 29-70, a long circular pipe with outside radius R = 2.12 cm carries a (uniformly distributed) current i = 11.3 mA into the page. A wire runs parallel to the pipe at a distance of 3.00R from center to center. Find the magnitude of the current in the wire in milliamperes such that the ratio of the magnitude of the net magnetic field at point P to the magnitude of the net magnetic field at the center of the pipe is 2.17, but it has the opposite direction.
binomial theorem

A method of expanding the binomial expression (x + y)^n into a finite or infinite series of powers of x and y, where n is a number either integral or fractional, positive or negative, rational or irrational; thus

(x + y)^n = x^n + a_{n-1} x^{n-1} y + a_{n-2} x^{n-2} y^2 + ... + y^n

The coefficients a_i are called binomial coefficients. The binomial theorem was discovered by Isaac Newton about 1666, and was first published in 1704 in the second appendix to Newton's Optics. That particular case of the theorem when n is a positive integer was known to mathematicians prior to Newton (e.g., Briggs and Pascal), and Newton himself gave no demonstration of the truth of his theorem.
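For the positive-integer case, the coefficients are the familiar Pascal's-triangle numbers, and the expansion can be verified directly. A brief Python sketch (illustrative, not part of the original entry):

```python
import math

def binomial_coefficients(n):
    # Coefficients of (x + y)^n for a non-negative integer n.
    return [math.comb(n, k) for k in range(n + 1)]

def expand(x, y, n):
    # Evaluate (x + y)^n term by term via the binomial theorem.
    return sum(math.comb(n, k) * x**(n - k) * y**k for k in range(n + 1))

print(binomial_coefficients(4))  # [1, 4, 6, 4, 1]
print(expand(2, 3, 4))           # 625, which equals (2 + 3)**4
```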
Eigenvalues of a skew symmetric matrix

August 9th 2013, 10:39 PM #1
Junior Member, Sep 2012

The eigenvalues of a real skew symmetric matrix can be zero or imaginary. I figured out that they can be zero but i can't figure out how can they be imaginary? I don't know why, the idea doesn't seem to fit in my head even though it is true. If someone could explain this to me it would be a great help. Thanks in advance.

August 10th 2013, 02:46 AM #2
Feb 2010, in the 4th dimension....

Re: Eigenvalues of a skew symmetric matrix

I think there's a good proof of this in pages 6-14 of this document (btw, this lecture note is great).
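A concrete way to see it (not from the thread): for the 2×2 skew symmetric matrix A = [[0, a], [-a, 0]], the characteristic polynomial is λ² + a² = 0, so λ = ±ia, which is purely imaginary whenever a ≠ 0. A small stdlib-only Python check of the eigen-equation Av = λv:

```python
a = 3.0
# A = [[0, a], [-a, 0]] is real and skew symmetric (A^T = -A).

def apply_A(v):
    # Multiply A by the vector v = (v0, v1).
    return (a * v[1], -a * v[0])

lam = 1j * a   # candidate eigenvalue i*a
v = (1, 1j)    # a corresponding eigenvector
Av = apply_A(v)
lv = (lam * v[0], lam * v[1])
print(Av == lv)      # True: A v = (i a) v, a purely imaginary eigenvalue
print(lam.real == 0) # True: the eigenvalue has no real part
```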
Sizing Science: The Geometry of M&Ms Key concepts Has an adult ever caught you munching on candy and asked, "How much candy have you eaten?" Instead of saying, "I don't know," and possibly receiving a scolding, wouldn't you rather respond, "I ate precisely 10.7 cubic centimeters of candy"? In this activity, you will investigate which mathematical formula is most accurate for estimating the volume of an M&M. Figure this out and the next time you are discovered while snacking on sweets, you might make a better impression. Geometry uses math to describe and investigate different points, lines and shapes. A shape is described in geometry using a formula, which is simply a mathematical way to calculate different properties, such as its size, area or volume. Volume is a property of three-dimensional shapes—such as cubes and spheres—and takes into account the space the shapes take up in each of the three different directions. The challenge of using geometric formulas in the real world is that these mathematical formulas often describe "perfect" or "ideal" shapes. A sphere is an "ideal" three-dimensional shape that is perfectly circular in all three directions. Even though a tennis ball, for example, is spherical in shape, it is not a perfect sphere (think of the lines that mark its surface). Most real-world objects are not simple shapes and require complex geometry to be calculated. The properties of real-world shapes can, however, be approximated, or estimated with a geometric formula. This is called "making a geometric model." The most important part of making a good geometric model is choosing the formula that best describes the object. Using that formula, you can geometrically model all kinds of irregular objects, such as cars, airplanes, toys and M&Ms. • Two pieces of paper • Table or countertop • Clay or Play-Doh. Use a small amount that you do not mind ruining. • Metric measuring glass with milliliter markings • Water • 110 M&Ms. 
One seven-ounce (198-gram) bag holds about 210 M&Ms. • Ruler with centimeter markings • A pen or pencil • A computer with an Internet connection • Place a sheet of paper on a clean table or countertop. • On top of the paper, place a small amount of clay or Play-Doh. Flatten it and stretch it out into a line about 20 centimeters (cm) long by three cm wide. The line should be only a couple cm high. You will be measuring the M&Ms on this. • The clay or Play-Doh may be ruined by the M&Ms' dyed coating, so use a portion that you will not mind ruining. • Fill the metric measuring glass with 100 milliliters (ml) of water. If you intend to eat some of the wet M&Ms after this activity is completed, remember to clean the measuring glass carefully before you fill it with water. • Line up 10 M&Ms on their flat side, end to end, on the clay. Make sure the line is straight and flat and that each M&M is touching the next, with no gaps in between. You can poke them into the clay to keep them in a neat row. • Measure the line of M&Ms in cm and divide this number by 10. This gives you the long diameter of a single M&M candy. What is the long diameter of an M&M candy? Write this measurement down. • Divide the long diameter by two. This gives you the long radius of a single M&M candy. What is the long radius of an M&M candy? Write this down. • Remove the M&Ms from the clay. • Now line up the 10 M&Ms on their side so that you are measuring across the short side. Again, use the clay to hold them in a neat row in which each M&M touches the next. • Measure the line of M&Ms and divide this number by 10. This gives you the short diameter of a single M&M candy. What is the short diameter of an M&M candy? Write this down. • Divide the short diameter by two. This gives you the short radius of a single M&M candy. What is the short radius of an M&M candy? Write this down. • Now you are ready to measure the actual volume of the M&M with a water-displacement test. 
Make sure that the metric measuring glass has exactly 100 ml of water in it (check by looking at the bottom of the meniscus). • Dump 100 M&Ms into the glass of water. (If you want to eat soggy M&Ms later, use fresh M&Ms during this stage since the ones you used previously may have clay on them.) What is the new water level of the glass? Write this down. • Subtract 100 ml from the new water level. Divide this number by 100. This is the actual volume of a single M&M in ml (the same as cubic centimeters). What is the volume of one M&M? Write this down. • You will now be making some volume calculations using different formulas to see which one best calculates the volume of an M&M. Go to the Unit-Free Volume Calculators page at Mississippi State University. • Click on the "Full Sphere" link. For the "Radius" box, enter the long radius you measured for a single M&M and click "Calculate." What is the M&M volume using its long radius? Write this down. • Now for the "Radius" box, enter the short radius instead. What is the M&M volume using its short radius? Write this down. • Go back to the Unit-Free Volume Calculators page and click on the "Cylinder" link. For the "Outer Radius" box, enter the long radius; for the "Inner Radius" box, enter zero; and for the "Height" box, enter the short diameter. What is the M&M volume using the cylinder formula? Write this down. • Go back and click on the "Ellipsoid" link. For the "Major Axis" and "Minor Axis" boxes, enter the long diameter (for both). And for the "Vertical Axis" box, enter the short diameter. What is the M&M volume using the ellipsoid formula? • How does each of the different calculated volumes compare to the actual volume that you measured? Which ones were more and which ones were less? Which calculation came the closest? Which formula do you think is the best one to use for an M&M candy? • Extra: Another way to look at your data is to calculate the percent difference between each calculation and the actual volume measurement.
You can do this by dividing your answer using each formula by the actual volume. Which formulas give you an answer that is closest to, or most different from, the actual volume based on percent difference? • Extra: You can use this same experiment to find the best formula to calculate any other volume. Try using it for an egg, a football, an apple, a bar of soap, other types of candies or any other irregularly shaped object. Just make sure that you choose an object that can be safely submerged in water! Which formula is the best? • Extra: The shape of a candy can affect how well many of those candies pack together. Use the water displacement test on a couple of differently shaped candies to determine the actual volume of a single candy. Then fill a measuring glass with a certain amount of each type of candy, one type at a time (without water). Divide the volume the candy took up by the number of candies to determine how much space one candy took up, on average, when taking packing into account. How much space does each type of candy take up in the measuring glass (when packing is taken into account) compared with the actual volume of one candy? In other words, which types of candies pack together the best? How do you think their shape affects this? Observations and results Did you find the ellipsoid formula to give the closest answer to the actual volume you measured for one M&M candy? Using the water displacement test, you should have found the actual volume of a single M&M candy to be about 0.60 to 0.65 cubic centimeter (milliliter). (Adding 100 M&Ms to 100 ml of water should have caused the water level to rise to about 160 to 165 ml.) When using the sphere formula with the long radius, the calculation gives a volume (about 1.3 cubic centimeters) that is about twice the actual volume of the M&M candy. If you look closely, you will see that an M&M is not quite perfectly round but is shaped like a sphere that has been squished on one side.
If it were not "squished," the sphere formula with the long radius would fit. The cylinder formula also gives a volume (about one cubic centimeter) that is too big because it assumes that the M&M stays as wide as its long diameter through its entire thickness, but it actually tapers around the edges. The sphere formula with the short radius gives a volume (about 0.2 cubic centimeter) that is much smaller than the actual volume of the M&M. The ellipsoid formula should give a volume (about 0.6 cubic centimeter) that is very close to the actual volume of the candy. An M&M indeed has an ellipsoid shape, specifically, a type called an oblate spheroid. • If portions of the clay or Play-Doh have been dyed by the M&Ms' coating, you can try to pinch these parts out and throw them in the trash. • If you want to eat the soggy M&Ms when you are done with this activity, you may do so, but you should not eat the M&Ms that were in contact with the clay or Play-Doh. More to explore Agricultural and Biological Engineering: Tools: Unit-Free Volume Calculators from Mississippi State University Unique Shape of M&M's Interests Scientists from NPR's Talk of the Nation's Science Friday Math Tables: Areas, Volumes, Surface Areas from Math2.org Geometry from MathIsFun.com M&M Geometry from Science Buddies This activity brought to you in partnership with Science Buddies
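The four calculator formulas the activity uses can also be checked offline. A minimal sketch follows: the two measurements below are placeholders (hypothetical values, chosen to be roughly consistent with the volumes reported under "Observations and results"), so substitute your own ten-M&M line-up measurements in centimeters.

```python
import math

# Placeholder measurements (hypothetical; replace with your own), in cm.
long_diameter = 1.35
short_diameter = 0.70
a = long_diameter / 2   # long radius
c = short_diameter / 2  # short radius

sphere_long = (4 / 3) * math.pi * a ** 3      # ~1.3 cm^3: too big
sphere_short = (4 / 3) * math.pi * c ** 3     # ~0.2 cm^3: too small
cylinder = math.pi * a ** 2 * short_diameter  # ~1.0 cm^3: too big
ellipsoid = (4 / 3) * math.pi * a * a * c     # ~0.6-0.7 cm^3: closest to
                                              # the measured 0.60-0.65 cm^3
```

Whichever real measurements you plug in, the ellipsoid result should land closest to your water-displacement volume.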
Topic: bicubic spline 3D fitting algorithm Replies: 2 Last Post: Dec 11, 1996 12:38 PM Re: bicubic spline 3D fitting algorithm Posted: Dec 6, 1996 3:21 AM > I'm looking for a routine that can create a cubic spline fit from an > arbitrary set of points in 3 dimensions, represented by $(x,y,z,v)_i$ > (preferably in a weighted least squares sense). > For 2 dimensions (i.e. for surfaces with points $(x,y,v)_i$) these > routines already exist. For example such a routine is given by NAG's > E02DAF. Is it possible that you can call the 2-D function to calculate $(x,y,v)$ and call it again for $(z,0,v)$? Mathematically, it makes sense because you are dealing with linearly independent functions, i.e. the value of the spline for $(z,0,v)$ should have ZERO effect on the values for $(x,y,v)$. The only problem I can see with this idea is if the functions require that the known values of $(x,y,v)$ are stored in a two-dimensional array instead of three one-dimensional arrays. A better idea is if there is code to produce a one-dimensional spline $(x,v)$: call it three times, for $(x,v)$, $(y,v)$, $(z,v)$. This is also valid because x, y, and z are linearly independent, and it is a savings in work done by the computer, because calculating $(z,0,v)$ wastes work: the function will also calculate a spline for $(0, v)$. > Secondly, has anybody experience with this kind of representation? I've been toying with it, but I don't have any code that works completely, because I have been trying to code a spline function. thomas delbert wilkinson 038 henday lister hall university of alberta If god were perfect, why did He create discontinuous functions?
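In modern terms, the "three one-dimensional splines" idea in the reply can be sketched as follows. This is a hypothetical illustration, not the NAG routine the thread discusses, and it is only mathematically justified when the value function is additively separable, v(x, y, z) = f(x) + g(y) + h(z); the natural cubic spline is hand-rolled here with NumPy.

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Return an evaluator for the natural cubic spline through (x[i], y[i]).

    x must be strictly increasing; "natural" means the second derivative
    vanishes at both endpoints.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    h = np.diff(x)
    # Tridiagonal system for the knot second derivatives M[i].
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0  # natural ends: M[0] = M[n-1] = 0
    for i in range(1, n - 1):
        A[i, i - 1] = h[i - 1] / 6.0
        A[i, i] = (h[i - 1] + h[i]) / 3.0
        A[i, i + 1] = h[i] / 6.0
        b[i] = (y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1]
    M = np.linalg.solve(A, b)

    def s(t):
        i = int(np.clip(np.searchsorted(x, t) - 1, 0, n - 2))
        hi = h[i]
        return (M[i] * (x[i + 1] - t) ** 3 / (6 * hi)
                + M[i + 1] * (t - x[i]) ** 3 / (6 * hi)
                + (y[i] - M[i] * hi ** 2 / 6) * (x[i + 1] - t) / hi
                + (y[i + 1] - M[i + 1] * hi ** 2 / 6) * (t - x[i]) / hi)

    return s

# One spline per coordinate, exactly as the reply suggests.
knots = np.linspace(0.0, 1.0, 9)
fx = natural_cubic_spline(knots, knots ** 2)     # samples of f(x) = x^2
gy = natural_cubic_spline(knots, np.sin(knots))  # samples of g(y) = sin(y)
hz = natural_cubic_spline(knots, 3.0 * knots)    # samples of h(z) = 3z

def v(x, y, z):
    # Valid only under the separability assumption stated above.
    return float(fx(x) + gy(y) + hz(z))
```

For genuinely scattered, non-separable data (the original poster's case), a single sum of axis splines cannot represent cross terms, which is why full tensor-product or scattered-data methods are needed instead.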
Date Subject Author 12/6/96 Re: bicubic spline 3D fitting algorithm thomas delbert wilkinson 12/11/96 Re: bicubic spline 3D fitting algorithm brad@apl.washington.edu
The cost of Capital (part b) View all ACCA Paper F9 lectures >> This ACCA F9 lecture is based on OpenTuition course notes, view or download lecture notes here>> 1. Hi Mr Moffat, just wanted a small clarification: if in a certain scenario we were given both return on capital employed and return on equity, which one should we use in relation to Gordon's growth model? □ This is rather hypothetical – the examiner has never given both for these purposes! (In fact I can only remember two times – both of them a very long time ago – when Gordon's growth model was even relevant!) However, what we need is the rate of return that the company gets on reinvestment. In theory it would be the return on equity, but if he did give both then you would get credit for using either (provided you stated your assumption). ☆ Really appreciate it, you have cleared my doubts. 2. Hi, John. In example 6 question C, shouldn't the answer be 280(1.0675)^3? According to the share price with constant growth rate model, the numerator should be D0(1+g), and I think D0 is 20(1.0675)^2, so D1 is 20(1.0675)^3. Therefore the price in 2 years' time should be 280(1.0675)^3. Is my reasoning correct? Thanks □ No – the answer in the notes and the lecture is correct. The share price will be 280(1.0675)^2. Although the numerator will indeed be 20(1.0675)^3, you are forgetting that the numerator for the current share price will be 20(1.0675). So the numerator will be (1.0675)^2 times the current numerator. Always (in theory) the share price will increase at the same growth rate p.a. as the dividend growth rate. 3. Sir John, for part b, why is the cost of capital not the return of 18% in the question? Isn't required rate of return = cost of capital? □ I assume you are meaning part (b) of example 6. Here, the shareholders are requiring a return of 14.375% (as calculated in part (b) – this is why they are prepared to pay $2.80 per share on the stock exchange).
As a result the company needs to give them 14.375% and so the cost of equity is 14.375% (and here also the cost of capital is 14.375% because there is no debt borrowing – this is dealt with in the next lecture). The company therefore needs to make sure that they invest the money to get a return of at least 14.375% – if they can earn more then great; if they earn less from any investment then they should not invest. In this case they are earning 18% from investing the money, which is great. (In the long term it is the case that if the company is always managing to earn 18% on investments, then shareholders will eventually want a return of 18% themselves (this is due to risk and is covered in a later chapter), but from year to year this certainly need not be the case.) I hope that makes some sense. ☆ Oh thank you very much! This really helps! Your explanation is great, thanks Sir John. 4. Hi John: Could you kindly clarify how you apply “before and after taxes”? I'd appreciate it. □ You will have to be a bit more specific as to what you mean. For cost of capital we always want the cost to the company. Because debt interest is tax allowable, the cost to the company is the cost after tax relief (so we take the after-tax interest when calculating the cost of debt). ☆ Hi John: I have a much better understanding of “after-tax” cost of capital now. Can you clarify a few things for me: 1. How do you calculate the cost of preference shares in Dec 2010 question 4c? I'm not sure if I grasp the concept from the answer. 2. Dec 2012 question 2a: is it an error that current receivables days should be 60 and not 30? 3. Dec 2012 question 2b: I note in the answer the holding cost was not discounted. Why wasn't it? 4. Dec 2009 question 1a (ASOP Co): the tax benefit was applied to the licensing fee. Why so? The question asked for a “financing” cash flow; what is different about a financing cash flow? Thank you ☆ The cost of a preference share is simply the dividend/market value.
They pay a constant dividend, and there is no tax relief on the dividends. The dividend is quoted as a percentage of the nominal value. The answer does not mention the current receivables days. It is not necessary to calculate them because we know what current receivables are from the balance sheet. (Our current terms of sale are payment within 30 days, but that does not mean that everyone does pay within 30 days.) We never discount the holding cost in the exam. (Maybe in real life we should, but it would be a big problem because the cost is spread day-by-day over the year and we normally only discount for whole years.) It is reasonable to assume (from F6) that the licensing fee is tax allowable. The reason it is needed when we are considering the cost of financing is that although it is not relevant if we lease, it is relevant if we buy. So if we are comparing the cost of leasing with that of buying we need to take it into account. ☆ From the info provided the examiner said we should “assume” 60 days. Earlier, when I downloaded the examiner's reports, Dec 2012 was not accessible. Thanks again 5. Sir, I didn't get example 6 part c in chapter 17. □ The market value increases at the same rate the dividend increases (refer to the example on page 94). In part (a) you calculated it, and it's 6.75%. Hence, market value of the share in 2 years' time = 2.80(1+0.0675)^2 6. Both comments are right, but I see the driving factor as the interest rate, because this seems to be the determining factor: an increase in interest rates will automatically increase shareholders' required return, and a decrease will also decrease shareholders' required return, though a decrease is more demotivating to investors, since they expect their money to be worth investing; hence it will bring conflict here. 7. Thank you for your comments.
You are quite right – in practice the growth in the share price is unlikely to be the same as the dividend growth rate. The main reason for this is that the shareholders' required rate of return is likely to change – partly because general interest rates change (if interest rates in general go up then shareholders will require a higher rate of return, and vice versa), and also because the riskiness of the business might change (if the company becomes more risky then shareholders' required rate of return will increase, and vice versa). However, if we are trying to predict the future share price, then we cannot predict what will happen to the interest rates or to the riskiness, and therefore all we can do is assume that they remain the same – in which case the share price should increase at the same rate as the growth in dividends. 8. Absolutely awesome. By the way, to answer 6(c) I actually used the modified version of the dividend growth model: if P0 = D0(1+g)/(Ke – g), then P2 should be equal to D2(1+g)/(Ke – g), or, the same thing, P2 = D3/(Ke – g). As D3 is equal to D0(1+g)^3, we can get the answer for P2, which is the market value of the share in two years' time. But your explanation – that the market value is the present value of future dividends discounted at the shareholders' expected return, and that if dividends are growing at a certain growth rate then the market value should be growing at the same growth rate – says it all. But in practice, I believe market value and dividends do not grow at exactly the same rate. Can I please have your comments about this, Mr John?
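The figures quoted in these comments (a 20c current dividend, 6.75% growth, and a $2.80 market value from example 6) tie together in a few lines; a sketch of the dividend growth model as discussed above:

```python
# Figures quoted in the comments above (example 6): current dividend
# D0 = 20c, dividend growth g = 6.75%, current market value P0 = $2.80.
D0 = 0.20
g = 0.0675
P0 = 2.80

# Dividend growth model: ke = D0(1 + g)/P0 + g
ke = D0 * (1 + g) / P0 + g      # 0.14375, i.e. the 14.375% cost of equity

# The market value grows at the dividend growth rate, so in two years:
P2 = P0 * (1 + g) ** 2          # about $3.19, i.e. 2.80(1.0675)^2

# Cross-check via comment 8's form P2 = D3/(ke - g), with D3 = D0(1+g)^3:
D3 = D0 * (1 + g) ** 3
assert abs(P2 - D3 / (ke - g)) < 1e-9
```

Both routes give the same two-year price, which is the point of the reply to question 2: the whole valuation, numerator and denominator alike, scales by (1+g) each year.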
Positive (k,k)-form? A real $(p,p)$-form $\Omega$ on a complex manifold $X$ is called a strictly positive $(p,p)$-form if $$(-\sqrt{-1})^{p}\Omega(v_1,\bar{v}_1,...,v_p,\bar{v}_p)>0$$ for any nonzero $v_1,v_2,...,v_p\in T^{1,0}(X)$. If $\omega$ is its Hermitian form, is $\omega^p$ a strictly positive $(p,p)$-form? 2 Answers Actually, you don't have the definition quite right. You have to assume that $v_1,v_2,\ldots,v_p$ are linearly independent over $\mathbb{C}$, not just nonzero, otherwise the quantity you have displayed will certainly be zero. With this corrected definition, though, one clearly does have that $\omega^p$ is positive for any real, positive $(1,1)$-form $\omega$ on a complex manifold. This follows immediately, for example, from the fact that the symmetry group of a positive Hermitian inner product on a complex vector space $V$ acts transitively on the set of complex $p$-planes in $V$. I will try to clarify a little bit your question and at the same time give you an answer. All that I say, you can find in Demailly's book "Complex Analytic and Differential Geometry". GENERAL THEORY Let $V$ be a complex vector space of dimension $n$ and $(z_1,\dots,z_n)$ coordinates on $V$. We denote by $(\partial/\partial z_1,\dots,\partial/\partial z_n)$ the corresponding basis of $V$, by $(dz_1,\dots,dz_n)$ its dual basis in $V^*$ and consider the exterior algebra $$ \Lambda V^*_\mathbb C=\bigoplus\Lambda^{p,q}V^*,\quad \Lambda^{p,q}V^*=\Lambda^p V^*\otimes\Lambda^q\overline{V^*}. $$ Let us first observe that $V$ has a canonical orientation, given by the $(n,n)$-form $$ \tau(z)=idz_1\wedge d\bar z_1\wedge\cdots\wedge idz_n\wedge d\bar z_n=2^n dx_1\wedge dy_1\wedge\cdots\wedge dx_n\wedge dy_n, $$ where $z_j=x_j+iy_j$.
In fact, if $(w_1,\dots,w_n)$ are other coordinates, we find $$ dw_1\wedge\cdots\wedge dw_n=\det(\partial w_j/\partial z_k)\,dz_1\wedge\cdots\wedge dz_n, $$ $$ \tau(w)=|\det(\partial w_j/\partial z_k)|^2\tau(z). $$ Definition. A $(p,p)$-form $u\in\Lambda^{p,p}V^*$ is said to be positive if for all $\alpha_j\in V^*$, $1\le j\le q=n-p$, $$ u\wedge i\alpha_1\wedge\bar\alpha_1\wedge\cdots\wedge i\alpha_q\wedge\bar\alpha_q $$ is a positive $(n,n)$-form. A $(q,q)$-form $v\in\Lambda^{q,q}V^*$ is said to be strongly positive if $v$ is a convex combination $$ v=\sum\gamma_s i\alpha_{s,1}\wedge\bar\alpha_{s,1}\wedge\cdots\wedge i\alpha_{s,q}\wedge\bar\alpha_{s,q} $$ where $\alpha_{s,j}\in V^*$ and $\gamma_s\ge 0$. It is straightforward to see that strongly positive implies positive and, moreover, the concepts of positive and strongly positive coincide in bidegrees $(0,0)$, $(1,1)$, $(n-1,n-1)$ and $(n,n)$. Now, all positive forms $u$ are real, that is, they satisfy $u=\bar u$. In particular, in terms of coordinates, if $$ u=i^{p^2}\sum_{|I|=|J|=p}u_{I,J}dz_I\wedge d\bar z_J, $$ then the coefficients satisfy the hermitian symmetry relation $\overline u_{I,J}=u_{J,I}$. A form $u=i\sum_{j,k}u_{jk}dz_j\wedge d\bar z_k$ of bidegree $(1,1)$ is positive if and only if $\xi\mapsto\sum u_{jk}\xi_j\bar\xi_k$ is a semi-positive hermitian form on $\mathbb C^n$. Proposition. If $u_1,\dots,u_s$ are positive forms, all of them strongly positive (resp. all except perhaps one), then $u_1\wedge\cdots\wedge u_s$ is strongly positive (resp. positive). Proof. The proof is immediate from the very definition. What you are asking follows straightforwardly from what I have written (in particular the last proposition), taking all $u_j=\omega$. WITHOUT GENERAL THEORY If $\omega$ comes from a (positive definite) hermitian metric, then you can always choose coordinates such that your form $\omega$ is given by $\omega=i\sum_{j=1}^n dz_j\wedge d\bar z_j$.
Then $$ \omega^p= i^{p^2}p!\sum_{|I|=p}dz_I\wedge d\bar z_I. $$ I leave you the pleasure of computing $\omega^p(v_1,\dots,v_p,\bar v_1,\dots,\bar v_p)$ and discovering how this quantity is related to the determinants of the order-$p$ minors of the matrix whose columns are given by $v_1,\dots,v_p$. Thanks dear diverietti, Robert Bryant – Gran Murra Dec 26 '11 at 0:54
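The exercise left at the end can be sketched under the conventions fixed above (the general statement is given only up to those sign and ordering conventions). For $p=1$, with $v=\sum_j v_j\,\partial/\partial z_j$:

```latex
(-i)\,\omega(v,\bar v)
  = (-i)\cdot i\sum_{j=1}^{n}\bigl(dz_j(v)\,d\bar z_j(\bar v)-dz_j(\bar v)\,d\bar z_j(v)\bigr)
  = \sum_{j=1}^{n}\lvert v_j\rvert^{2} \;>\; 0,
```

and for general $p$, with the arguments interleaved as in the question's definition, one finds $(-i)^{p}\,\omega^{p}(v_1,\bar v_1,\dots,v_p,\bar v_p)=p!\sum_{|I|=p}\lvert\det V_I\rvert^{2}$, where $V_I$ is the $p\times p$ minor of the $n\times p$ matrix with columns $v_1,\dots,v_p$ formed by the rows in $I$. This is strictly positive precisely when $v_1,\dots,v_p$ are linearly independent, matching the corrected definition in the first answer.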
Wilmington, MA Algebra Tutor Find a Wilmington, MA Algebra Tutor ...I specialize in biology, but am also qualified to tutor other science courses. I also offer SAT prep and writing help. My content knowledge: Science - In undergraduate, I majored in biology and graduated as valedictorian of my class. 22 Subjects: including algebra 2, algebra 1, reading, writing ...Thanks.My fascination with math really began when I started studying calculus. For more then a decade I have been using calculus to solve a wide variety of complex problems. As a math phd student I have particularly excelled in the field of analysis which is largely just a more rigorous and abstract formulation of traditional calculus concepts. 14 Subjects: including algebra 1, algebra 2, calculus, geometry ...Because of this experience, I know multiple techniques for students to learn algebra skills, and I'm able to adapt our sessions to your student's strengths and weaknesses until we find the learning methods that work best for a particular session. Algebra is also the shared worldwide language of ... 23 Subjects: including algebra 1, algebra 2, chemistry, calculus ...I conduct exciting, cutting-edge research in a materials chemistry laboratory. Since entering graduate school, I have been a general chemistry lab teaching assistant, physical chemistry II (quantum) teaching assistant, and advanced physical chemistry (graduate level) teaching assistant. I have ... 10 Subjects: including algebra 1, algebra 2, chemistry, prealgebra ...Math is a challenging subject for many people out there! Even Einstein was a poor mathematician. 
My approach to teaching math includes hands-on lessons, strategies, math games and problem solving. 8 Subjects: including algebra 1, algebra 2, calculus, linear algebra
Math Help June 27th 2006, 06:12 PM #1 Hi All, taking a summer course on linear algebra. I'm generally good with math, but this prof is not making it easy. I've been watching the MIT videos online, and it has helped, but I need more. Does anyone know of any resources? I'm having a difficult time wrapping my head around spaces and the way they relate to one another. My final is a week and a half away. I HAVE to be ready. So what I'm asking for is links, book recommendations, etc. July 5th 2006, 05:39 AM #2 Junior Member Jul 2006 Darmstadt, Germany Well, you've done well to find the MIT videos online. If you are taking an undergraduate linear algebra course in the States, I think this should be comprehensive enough, particularly if you supplement it with such sources as PlanetMath, Wikipedia, and MathWorld. The only other resource you should really need is consistency in studying. If you are having a problem with specific topics, then you should post for resources, but I think for such a general request, you are on your way with what you have at hand. Best of luck! Last edited by TXGirl; July 8th 2006 at 08:10 AM.
Homework Help Post a New Question | Current Questions 1st lesson of the year... Hi, I am wondering if anyone has any ideas or suggestions for "getting to know you" games for the first day of classes that I can utilize. Also any relevant sites would be greatly appreciated. Thursday, August 28, 2008 at 6:12pm Can someone give me or tell me how I can go about writing a good attention getter 1st sentence for my 5-paragraph essay on the book The Crucible? Thanks Wednesday, August 27, 2008 at 8:37pm Animal Farm thanks... I'm not sure if that is in the 1st chapter though...I have one idea that maybe one is that because Old Major is a pig Tuesday, August 26, 2008 at 5:28pm That makes sense I need to leave out the 1st and the fifth number and use just the three numbers that are not calculated. Bob you are a life saver thank you. Tuesday, August 12, 2008 at 9:39pm reading 4th grade looking for someone to tell me where I can find anything on "Tales of a Fourth Grade Nothing" Monday, August 11, 2008 at 10:20pm 5th grade expectations my son is going to the fifth grade. What do I need to do to make sure he is ready? Friday, July 18, 2008 at 8:39am I just failed the 10th grade by 1 credit. And I feel like I could do much better in the 11th and 12 grade. Is it possible for me to still go to college? Wednesday, July 16, 2008 at 7:18pm thnx and if it says the sum of the squares of three consecutive positive integers is 194. Find the integers. is it x=1st x+3= 2nd x+5= 3rd? and would adding all these give me a result of 194? Sunday, July 13, 2008 at 10:07pm how would you solve this one? the sum of the squares of two consecutive even integers is 452. Find the integers. i think i know that.. x=1st # and x+2= 2nd #? Sunday, July 13, 2008 at 9:59pm Problem Solving I got the 0.925 by subtracting 0.075 from 1. By my calculations, the grade only goes up from 75 to 75.75. Rounded to the nearest percent = 75.75 = 76. 
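The two consecutive-integer questions above can be checked by brute force (note that three consecutive integers are x, x+1, x+2, not the x, x+3, x+5 the first post guesses):

```python
# Three consecutive positive integers (x, x+1, x+2) whose squares sum to 194:
sols3 = [(n, n + 1, n + 2) for n in range(1, 100)
         if n ** 2 + (n + 1) ** 2 + (n + 2) ** 2 == 194]

# Two consecutive even integers (x and x+2, with x even) whose squares sum to 452:
sols2 = [(n, n + 2) for n in range(-100, 100, 2)
         if n ** 2 + (n + 2) ** 2 == 452]

# sols3 -> [(7, 8, 9)]       since 49 + 64 + 81 = 194
# sols2 -> [(-16, -14), (14, 16)]   since 196 + 256 = 452
```

So the second poster's setup (x and x+2) is right, and solving x² + (x+2)² = 452 gives 14 and 16 (or the negative pair).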
A final grade of 76% makes sense because the exam is only a small part of the final grade. Saturday, June 14, 2008 at 5:23pm Physics **PLEASE HELP** plz ignore the 1st part just plz help with this: what's the relation between resistance and voltage? Thursday, May 22, 2008 at 8:49pm That depends on the design of the machine. Any design will violate at least one law of thermodynamics (1st, 2nd, or 3rd). Having said that, most people think that it is the 1st or 2nd law that would have to be violated in designing a perpetual motion machine. The first law ... Sunday, May 18, 2008 at 8:45pm parallelogram 1st. square next. then look at a pic of a trapezoid and you'll figure out where teh triangles go. it's fairly simple. You can do it. Tuesday, May 13, 2008 at 6:52pm 1st quarter: $8,000 * 0.06 = $480. $480 / 4 = $120 Interest for 1st quarter 2nd quarter: $8,000 + $120 = $8,120 $8,120 * 0.06 = $487.20 $487.20 / 4 = $121.80 Interest for 2nd quarter 3rd quarter: $8,120 + $121.80 = $8,241.80 I think you can finish the 3rd quarter and do the ... Saturday, May 10, 2008 at 8:42pm You can look up the 1st ionization potential of all the metals at www.webelements.com Saturday, April 19, 2008 at 2:36pm English Expression In the U.S., first grade means the first grade of only elementary school. Children in first grade are 6 or 7 years old. The last two are common responses for 7th graders. We usually use grade, rather than class for elementary and middle school students. Class is more common in... Friday, April 18, 2008 at 12:33pm English Expression What grade/year are you in? I'm in the first grade in middle school. I'm a first grader in middle school. I'm in the 7th grade. I'm a 7th grader. ----------------------- Are the answers all acceptable? Which one is commonly used? Can we use both 'grade'... 
Friday, April 18, 2008 at 12:25pm Im not sure where to find a website that can help me write a "hook" in my 1st paragraph in my essay please Thursday, April 17, 2008 at 8:45pm sorry drbob..i just noticed you answered the 1st question what about the balancing question.. Monday, April 14, 2008 at 11:20pm AED 201 8:15 -- Arrive in my classroom, greet students, and organize the day's materials 8:30 -- Begin 1st period class by taking role -- Correct homework (see below) -- Present the day's lesson, discuss. Often I showed 4 - 6 slides that pertain to the lesson -- Allow about 20... Monday, April 7, 2008 at 5:50pm How do I find the charge of the 1st listed metal? HgCl2 PbO2 Cu(OH)2 Cr2O3 Fe3(PO4)2 Tuesday, March 11, 2008 at 4:07pm linear algebra i'm not sure how to go about this one. any help would be appreciated! find all x's that would make the 2x2 matrix below = to a shear-dilation: 1st row: 1, x 2nd row: 3, 4 Saturday, March 8, 2008 at 10:24pm Use the Binomial Theorem to find the fifth term in the expansion (2x+3y)^5. 5!/(5-k)^x^5-kyk 5!/(5-4)!4!^x^5-4y4 (5*4*3*2)/(4*3*2*1) 120/24 5xy^4 (answer) You may not understand the 1st 2 lines I'm sorry just let me know how to make it better. Tuesday, March 4, 2008 at 3:41pm A King sends for 3 prisoners. HE has 3 black hats and 2 white hats. He blindfolds the prisoners & puts a hat on each. He then removes the blindfolds & allows them to look at each other. He tells them that if they can tell him the color of the hat they have on, he will set them... Tuesday, March 4, 2008 at 12:54pm new website design I think they're should be a section, for different grade levels. Like k-12. That way we would no what grade level people are in. :) Tuesday, March 4, 2008 at 11:58am Algebra II(can you explain) I don't understand what you did after the 1st line. Sunday, March 2, 2008 at 5:41pm working with data josephs final averages in science class are shown in the table below. 
homework % toward final grade-20% average-93 quizzes % toward final grade-20% average-92 tests % toward final grade-40% average-85 final exam % toward final grade-20% average-? what is the minimum score ... Sunday, February 24, 2008 at 4:45pm I really thought I had that one. My 1st thought was D but I figured since sqrt 5/3 was negative then so would my final answer. Friday, February 15, 2008 at 5:15pm 8th grade math You're welcome. I can't believe it either. I didn't learn the quadratic in the 8th grade for sure. Thursday, February 7, 2008 at 10:50pm 8th grade math Yes, this helped. It took me all the way back to high school. I can't believe they're doing the quadratic formula in the eighth grade. Thanks for your help. Thursday, February 7, 2008 at 10:40pm what's the future of 'se leve' with accent aigu over 1st 'e' of 'leve' Thursday, February 7, 2008 at 6:07pm Organic Chem I think that the order should be: -CH2CH3 = Priority #4 -CH2NH2 = 3rd priority -CH2Br = 1st priority -CH2OH = 2nd priority Does this look right??? Monday, February 4, 2008 at 1:45am African Amer. Study Who was the 1st American woman to address male audience in public? Tuesday, January 29, 2008 at 10:46pm African Amer. Study Where and When was the 1st female anti- slavery established? Tuesday, January 29, 2008 at 10:44pm A person's body is thronw outward as a car rounds a curve on the highway is this newton's 1st 2nd or 3rd law Thursday, January 24, 2008 at 9:47pm physics i suck could you explain simpler . i get condfused easily im in 10th grade taking 12 grade classes its tough. any help will do Saturday, January 12, 2008 at 6:56pm Movies & Science I am doing a paper that entails my knowledge of physics to prove errors in the movie Transformers. There is one scene in the movie where Shia (Sam, the hero) falls in Optimus Prime's hand, from around 20 stories high. Obviously Sam should ooze out of Prime's hand, but ... 
Monday, December 31, 2007 at 6:21pm Film and Physics I am doing a paper that entails my knowledge of physics to prove errors in the movie Transformers. There is one scene in the movie where Shia (Sam, the hero) falls in Optimus Prime's hand, from around 20 stories high. Obviously Sam should ooze out of Prime's hand, but ... Monday, December 31, 2007 at 4:41pm Grade 3 math Yes - that was what I was thinking too so I guess I am right - I just second guess myself - I mean some of this for a Grade 3 seems rather difficult. Sunday, December 30, 2007 at 8:42pm word unscramble i think you are missing some letters to me it looks like the 1st one says christmas wreath the 2nd looks like it would be holy water Friday, December 21, 2007 at 9:16pm social studies 1st Amendment; freedom of religion Why is this amendment important to America? What are 2 examples of how the amendment protects the citizens? Monday, December 17, 2007 at 11:59pm I've already been on google but none of these sites seem to tell me what the 1st Division Canadian Infantry did during the phony war. They seem to say they didn't do much and skip to 1942. Sunday, December 16, 2007 at 9:34pm Sunday, December 16, 2007 at 9:11pm The "Phony War" ended April 1940. However, many Canadians were not sent before 1942. What did the ones that were sent with the 1st Division Canadian Infantry do in Britain until 1942? Sunday, December 16, 2007 at 9:05pm Algebra II the 2nd equation equals the 1st so there are many solutions to the system so it's consistent and dependent. Sunday, December 16, 2007 at 2:58pm The 1st group of Canadians that were sent overseas to Britain were sent in December of 1939. Where were they sent in Canada to prepare for WWII? Saturday, December 15, 2007 at 6:13pm SCI 275 What are the challenges of managing Reducing solid waste? What human activities contribute to the problem?
I can answer the 2nd question but the 1st one i am having truble with Monday, December 10, 2007 at 1:29pm english for 1st grader The ou sound, if I understand the question. Wednesday, November 28, 2007 at 9:03pm english for 1st grader what is the special sound, for, round,shout and what. Wednesday, November 28, 2007 at 8:52pm another sentence i need to fix. It's all in 1st person, but see if you like this any better: At first I didn t believe that I could relate to my classmates in any way: Each of us is unique in terms of backgrounds and social Monday, November 26, 2007 at 4:43pm The "we" sounds wrong because there's a shift in the sentence from 3rd person to 1st. How can you rephrase it and get rid of "we"? What would you substitute? Monday, November 26, 2007 at 4:17pm HUM 130 Consider the interrelatedness of everything in the cosmos as it is expressed in many indigenous religions. this is the 1st question then the top part is part 2 Sunday, November 18, 2007 at 10:58pm SCI 275 ok I know the 1st part. but I can not find the answer to the 2nd part. Please help Thursday, November 15, 2007 at 10:25pm The US constitution it is the fourth! the 1st gives you the right to religion, speech, press, petition, and assembly the 4th is search and sezure which includes privacy Wednesday, November 14, 2007 at 4:52pm can u tell me of any laboratory acid with 6 letters 1st letter is n 4th letter is r and 6th letter is u, i dont no, it is for a wordsearch Tuesday, November 6, 2007 at 2:03pm D+T; (Control systems); mechanisms is aircraft wings 1st class? if not then what is the answer? Monday, October 29, 2007 at 11:01am Would the let statements go like this then: Let x= 1st C.I.= 6 Let x+1= 2nd C.I.= 7 Let x+1= 3rd C.I.= 7 b/c the second and 3rd aren't C.i.s, so...???????? Saturday, October 27, 2007 at 12:24pm ETH 125 where would I look things up at? 
I can not find anything to answer the 1st 4 questions Wednesday, October 24, 2007 at 9:19pm honestly i don't think i can help because i really don't understand but look up "e^cx derivative" on google and the 1st and 2nd links just might help Sunday, October 21, 2007 at 7:06pm Algebra II 1st#=x-2 2nd#=x 1/2(x-2)+x=41 x-2+2x=82 3x=84 x=28 Thursday, October 18, 2007 at 8:13pm Define your numbers first: 1st number = x 2nd number = x +1 Problem says "Product of 1", so multiply x(x+1)=1 x^2+x=1 x^2+x-1=0 From here use Quadratic Formula or complete the square to solve. Tuesday, October 2, 2007 at 12:50am just search their names on google... I found info on my 1st try Tuesday, September 11, 2007 at 7:56pm setup 1st object D=M/V so D=50g/25cm^3 2nd object D= 75g/25cm^3 Thursday, September 6, 2007 at 7:28pm Social Studies this website has one it's kinda weird though and i don't know how much it will help. All i did was google atlas and i picked the 1st one. Wednesday, September 5, 2007 at 4:45pm If your speaking of moles in the 1st step No you don't have 1mole... starting with grams you find moles: 10gCuSO4 (1mol/159.62g)= 30gNH3 (1mol/17.034g)= Thursday, August 23, 2007 at 11:05pm #13 should be the 1st name of a U.S. president #14 Martha Wednesday, August 22, 2007 at 7:22pm physics-energy conservation some people refer to heat as "low grade energy. a)how is low grade energy different from high grade energy? b)give two example of high grade energy When heating a cup of tea, which would be preferable...a stove with a burner on top, or a heating pad? High grade energy is ... Tuesday, May 8, 2007 at 4:41am I dunno. Why don't u wait 4 a teacher 2 answer that? I guess any grade that can type and ask Qs. :p any grade. Monday, May 7, 2007 at 6:27pm Example Essays Mkay, I have a five paragraph essay due by tommorow. Its to be an example essay, but I don't have a idea OR a plan on what to write. I am confused {confuzzled} and am looking for a fairly easy example essay to write. 
I will need to use the Seven Sentence Skeleton which is ... Thursday, March 1, 2007 at 5:55pm what is the difference between translation, rotation, reflections, and dilation in 8th grade work In 8th grade talk, I recommend you check the examples in your book. OR, if you put your definitions here, I can critique them. Sunday, January 28, 2007 at 10:59am slope (-3,3),(1,3) well 1st here is the formula y2-y1/x2-x1 then 3-(-3)=-6 then -3-1=-4 slope=-6/-4 Monday, January 8, 2007 at 7:47pm math / linear programming Two factories manufacture 3 different grades of paper. The company that owns the factories has contracts to supply at least 16 tons of low grade, 5 tons of medium grade, and at least 20 tons of high grade paper. It costs $1000 per day to operate the first factory and $2000 per... Tuesday, January 2, 2007 at 6:44pm Data management. In Lotto 6/49, you must pick 6 numbers and pay $2.00 for each ticket. Six (6) numbers are drawn randomly. The following table summarizes the winning prizes if you match: 6 numbers (1st prize) Tuesday, November 7, 2006 at 12:06am music theory how do you write out the 1st three sharp scales plus their relative minors please help e-mail me at blondie5ive thanks a bunch!!!!!!!!!!! what's an andante? and unison? and Accent? and Mezzo Forte? and A Soli? Tuesday, August 29, 2006 at 9:18pm red from 1st ---> 8/18 = 4/9 red from 2nd ---> 10/13 prob(2 red) = (4/9)(10/13) = 40/117 Thursday, April 17, 2014 at 9:36pm The account balance on April 1st is $50.51. On April 15th a payment of $15.00 is made. On April 25th a purchase of $19.27 is made. The annual rate is 18%. What is the finance charge using the previous balance method? What is the new balance? Monday, April 14, 2014 at 11:39pm HELP! 5th grade math Trying to help my child in 5th grade. Shannon pours 4 different liquid ingredients into a bowl The sum of the liquid ingredients is 8.53 liters.
Two of her measurements are in milliliters and two of her measurements are in liters. Give an example of possible measurements for ... Sunday, April 13, 2014 at 10:43pm The account balance on April 1st is $50.51. On April 15th a payment of $15.00 is made. On April 25th a purchase of $19.27 is made. What is the finance charge if the annual rate is 18%? Saturday, April 12, 2014 at 7:48pm (1500(1+.04/4)^(4*5) + 500)(1+.05/4)^(4*5) = 2987.51 2987.51 - (1500+500) = 987.51 330.29 on the 1st deposit 657.22 on the 2nd deposit Friday, April 11, 2014 at 5:21pm algebra 2 sigma upper limit 28 algebraic expression (8i-13) lower limit 1 index=i 28 ∑ 8i-13 i=1 You can see from the formula that the sum increases by 8 with each additional term (8i) The first term is 8*1-13 = -5 Now recall that the sum of the 1st n terms of an arithmetic ... Wednesday, April 9, 2014 at 11:54am not - trigonometry Why are you calling this trig ? assuming his first deposit is NOW end of 1st year: 1000(1.06) + 1000 = 2060 end of 2nd year: 2060(1.06) + 1000 = 3183.60 ... end of 5th year: .......... = 6637.09 fill in the rest using your calculator Tuesday, April 1, 2014 at 9:23pm first one: add them, 2y = 6 y = 3 sub into the 2nd: (could have used the 1st, makes no difference) 2x - 3 = 1 2x = 4 x = 2 so x=2, y=3 do the 2nd one by adding them, follow my example Tuesday, April 1, 2014 at 8:41pm Patterns and rules unit test 7th grade I go to connection acdeamy and I am in the 7th grade regular math Mrs. Janus I just need some help with this test Find the next three terms of the sequence -2,-12,-72,-432... A. -1,728,-6,912,-27,648 Monday, March 31, 2014 at 8:36am Calculus Help Please!!! looking at a diagram, if A is a away from Q and B is b away from Q, then √(a^2+144) + √(b^2+144) = 39 a/√(a^2+144) da/dt + b/√(b^2+144) db/dt = 0 Now just plug in da/dt = 3.5 a = 5 b = 23.065 (from 1st equation when a=5) and solve for db/dt Monday, March 31, 2014 at 12:27am MATH HELP!! 
from the 2nd: 2c + 6d = -34 c + 3d = -17 c = -3d-17 plug into the 1st: 3(-3d-17) - 7d = -3 -9d - 51 - 7d = -3 -16d = 48 d = -3 then c = -3(-3) - 17 = -8 c = -8 , d = -3 Sunday, March 23, 2014 at 2:42pm Algebra II double the 2nd c - 6d = 16 add to the first 3c = 30 c = 10 back into the 1st 20 + 6d = 14 6d = -6 d = -1 c = 10 , d = -1 Sunday, March 23, 2014 at 2:40pm I'm writing about hearing a certain song can take you back to a moment in your past. I know that the 1st paragraph is thesis and the 2nd is explain what song reflects on past and why. I don't know what i should in include in the 3rd paragraph Friday, March 21, 2014 at 11:08pm 6n + 2b = 23.52 3n + 4b = 25.53 multiply the 1st equation by 2 and you have 12n + 4b = 47.04 3n + 4b = 25.53 now subtract to get 9n = 21.51 n = 2.39 Now you can get b Thursday, March 20, 2014 at 11:34pm Home work:- Write a program to read 5 records consists of first name, last name, id and grade. Assign the grade according to the following specifications: 0<g<= 50 F 50<g<=60 D 60< g<= 70 C 70<g<=90 B 90<g <=100 A Use Stringtokenizer and buffering... Thursday, March 20, 2014 at 12:49pm algebra 1 1st number ---> x 2nd number -- > x+1 3rd number --> x+2 x+x+1+x+2 = -204 3x = -207 x = -69 largest is -67 or let the 3 numbers be (x-1) , x , (x+1) x-1 + x + x+1 = -204 3x = -204 x = -68 largest number is -68+1 = -67 notice that in my second solution, the equation ... Monday, March 17, 2014 at 10:01pm Science 6th grade I wish I could help you I'm in 6th grade and when I get my answer correct or incorrect I just write the correct answers down :D and I tell other 6th graders the correct answers :D Wednesday, March 12, 2014 at 4:24pm Math ~ Check Answers ~ So... is the 1st one is correct? A rectangular prism has a width of 92 ft and a volume of 240 ft^3. Find the volume of a similar prism with a width of 23 ft. Round to the nearest tenth, if necessary. 3.8 ft^3 60 ft^3 <<<<<< 15 ft^3 10.... Wednesday, March 12, 2014 at 2:46pm This is Algebra. 
Haha.. You can take algebra between 8th and 11th grade. Something like that.. Not really a grade thing.. Just a subject. Why do you ask? Tuesday, March 11, 2014 at 8:49pm Determine the equation of a line that is perpendicular to the line 2x+3y=7 and passes through the point (0,6) What is the volume if the prism in terms of y? 1st dimension: 2y+1 2nd dimension: y 3rd dimension: y+3 Monday, March 3, 2014 at 6:14pm just start fitting in the facts. 1st 2 divisible by 5 means they end in 0 or 5 last 2 divisible by 9 and all different means not 09,45,54 or 90 units = twice tens means they are 36 so, the number is xx36. That means the first two digits also add to 9, which means the number is... Monday, March 3, 2014 at 12:45pm I m a 4-digit number. My 1st 2 digits from the left are divisible by 5. My 3rd and 4th digits from the left are divisible by 9. The sum of my digits is 18. Each of my digit is different. I m divisible by 4. I m less than 6000. My units digit is twice my tens ... Monday, March 3, 2014 at 7:21am The slevator starts at 1st floor and accelerated 1m/s^2 for 6 seconds and continues at a constant velocity for 12 seconds more and is then stopped in 4 seconds with constant deceleration. If the floors are 3 meters apart at what floor does the elevator stops? Sunday, March 2, 2014 at 5:51am 4TH GRADE MATH MS T BOUGHT CUPCAKES FOR HER 3RD GRADE CLASS. MS. CASA BOUGHT TWICE AS MANY CUPCAKES AS MS T, AND 4 TIMES AS MANY CUPCAKES AS MS. G. MS. G BOUGHT 241 CUP CAKES. A. HOW MANY CUPCAKES DID MS T BUY? B. MS.. B CAME LATE TO THE PARTY AND HALF OF THE CUPCAKES HAD ALREADY BEEN EATEN... Wednesday, February 26, 2014 at 5:03am Did you notice one parabola opens up, the other one down To have only one point of intersection, they must have a common tangent, that is the slope of one equals the slope of the other for the 1st: dy/dx = 2x+6 for the 2nd dy/dx = -2x - 6 2x + 16 = -2x -6 4x = 12 x = 3 when x... 
Sunday, February 23, 2014 at 10:00pm Algebra 1 #1 ok #2 ok #3 8 (x^2 is the highest power, so goes first) Although since you have 5x-2x I suspect a typo. If you meant -2x^3, then your answer is ok. #4 ok #5 ok, if x is replaced by w #6 ok #7 no: -6-(-8) = +2 #8 ok #9 no. (3x-4)(3x-4) = 9x^2-24x+16 square the 1st and last ... Friday, February 21, 2014 at 4:16pm Pages: <<Prev | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | Next>>
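Several of the grade-average threads on this page reduce to the same weighted-average arithmetic. A quick sketch of the "working with data" problem about Joseph's final grade; the category weights and averages come from the post, but the target overall grade of 90 is an assumption, since the post is truncated before the target is stated.

```python
# Category -> (weight toward final grade, average so far), from the post.
categories = {"homework": (0.20, 93), "quizzes": (0.20, 92), "tests": (0.40, 85)}
final_exam_weight = 0.20  # the unknown category

def minimum_final_exam_score(target):
    """Lowest final-exam score that still reaches the target overall grade."""
    earned = sum(weight * avg for weight, avg in categories.values())
    return (target - earned) / final_exam_weight

# With a hypothetical target overall grade of 90, Joseph needs about 95:
print(round(minimum_final_exam_score(90), 2))  # 95.0
```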
{"url":"http://www.jiskha.com/1st_grade/?page=16","timestamp":"2014-04-23T20:56:07Z","content_type":null,"content_length":"35934","record_id":"<urn:uuid:230273f6-c1ce-41ef-b683-0c93f0b3e183>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00013-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving Trig Identity: Sine Difference
August 30th 2009, 04:17 PM #1 Mar 2007
Proving Trig Identity: Sine Difference
In this figure, the triangle has hypotenuse 1. I need to use it to show that:
$\sin(A - B) = \sin A\cos B - \sin B\cos A$
The hint is:
$\sin(A - B) = \frac{X}{h}$
Find other expressions for X and h and substitute them into this equation. I know this is easy. But I'm just stuck. I've tried working out other side lengths and looking at other trig identities but I'm just at a brick wall without a clue why. Thanks for any help.
Greetings. You don't need any other trig identities. Everything is on the diagram.
Step 1. Fill in the angles, and you'll see that the 3rd from left triangle and the large triangle are similar.
Step 2. Find the scale factor. In this case, the large triangle has a hypotenuse of 1 and the small triangle has a hypotenuse of $\sin A - h\sin B$. Hence the scale factor (Q): $Q=\frac{1}{\sin A-h\sin B}$
Step 3. Apply this to the other side of the triangle, opposite the angle of $(90-A)$. Hence: $X=\cos A\sin A-h\cos A\sin B$
Put into the hint equation and you get: $\sin(A-B)=\frac{\cos A\sin A-h\cos A\sin B}{h}$
Simplify, then look at your triangle and realize that $h = \frac{\cos A}{\cos B}$. Substitute and you arrive at the formula.
Excellent solution I-think! Cleared that right up. Thank you so much.
August 30th 2009, 09:19 PM #2
September 4th 2009, 04:02 AM #3 Mar 2007
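Whatever route the geometric proof takes, the identity itself is easy to sanity-check numerically; a small sketch:

```python
import math

def sine_difference(a, b):
    """Right-hand side of the identity: sin A cos B - sin B cos A."""
    return math.sin(a) * math.cos(b) - math.sin(b) * math.cos(a)

# Spot-check sin(A - B) = sin A cos B - sin B cos A at a few angle pairs
# (angles in radians).
for a, b in [(1.0, 0.3), (0.7, 1.2), (2.5, -0.4)]:
    assert math.isclose(math.sin(a - b), sine_difference(a, b))
```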
{"url":"http://mathhelpforum.com/trigonometry/99846-proving-trig-identity-sine-difference.html","timestamp":"2014-04-16T07:36:39Z","content_type":null,"content_length":"37037","record_id":"<urn:uuid:2c0e5d53-d2d0-471c-a2b4-627e7b4e7242>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
II.A. Monte Carlo simulation system II.A.1. X-ray sources and energy II.A.2. Compensators II.A.3. Phantoms II.A.4. Imaging geometry II.B. Scatter spatial frequency II.C. Scatter distribution estimation from limited photon and projection simulations II.C.1. Scatter corrected CBCT reconstruction III.A. Scatter spatial frequency spectrum III.A.1. Cylinder III.A.2. Anthropomorphic phantoms III.B. Scatter distribution estimation from limited photon simulations III.B.1. Scatter corrected CBCT reconstruction
{"url":"http://scitation.aip.org/content/aapm/journal/medphys/40/11/10.1118/1.4822484","timestamp":"2014-04-19T13:33:18Z","content_type":null,"content_length":"118719","record_id":"<urn:uuid:1aa92441-e339-4cd5-a6da-ed5530206dc9>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - The wrong turn of string theory: our world is SUSY at low energies mitchell porter There is also a secondary "rule" that you can add two ordered pairs which both have nonzero weak isospin, only if one has isospin +1/2 and the other has isospin -1/2. It seems reasonable, as then we can look for some symmetrization argument to justify the idea. But is is also peculiar. It means that the uu and dd combinations only happen for R type quarks. Looking at the reference of Huerta, I note that in the previous section he takes some pains to discuss the adjoint representation of U(1) and its role in the hypercharge. A subltle point here is that U(1)-hypercharge is still chiral (as Distler likes to stress) and then it needs complex representations, while U(1) electromagnetism is not.
{"url":"http://www.physicsforums.com/showpost.php?p=3468592&postcount=93","timestamp":"2014-04-18T13:47:20Z","content_type":null,"content_length":"8737","record_id":"<urn:uuid:092021b0-c5e4-44cf-8293-9304700ac530>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
What are Retail Formulas?
Retail formulas are mostly used by retail employees, retail managers, retail buyers, and other retail business professionals. They are widely used to evaluate current inventory status, inventory purchasing plans, sales analysis, markup and markdown pricing, and so on. Performing these calculations requires some familiarity with the formulas, so we've included all the required retail formulas on this website, where you can use the equations to track merchandise, measure sales performance, and help create pricing strategies.
Acid-Test measures the ability of a company to use its quick assets to immediately extinguish its current liabilities.
Average Inventory is used to calculate the inventory turn rate and Gross Margin Return on Investment.
Cost of Goods Sold (COGS) includes the direct costs attributable to the production of the goods sold by a company.
Break-Even Analysis is the process of determining the number of units that must be sold at a given price to recover cost and make a profit.
Contribution Margin is the marginal profit per unit sale.
Margin Percentage (%) is calculated from the retail price and the cost of goods.
Initial Markup is the average markup required on all products to cover the cost of all items, incidental expenses, and to obtain a reasonable profit.
Gross Margin Return on Investment shows the relationship between total operating profits and the average inventory investment (at cost) by combining profitability and sales-to-stock measures.
Gross Margin is the difference between sales and production costs, including overhead.
Inventory Turnover is an equation that measures the number of times inventory is sold or used over a period.
Markup is the difference between the cost of a good or service and its selling price.
Open to Buy is the dollar amount budgeted by a business for inventory purchases for a specific time period.
Reductions is the amount or rate by which a product's price is reduced.
Sales per Square Foot (Sales Per Unit Area) is a standard and usually the primary measurement of store success This page uses content from the English Wikipedia. The content of Wikipedia is available under the GNU Free Documentation License. Contact Us RetailCare Pty Ltd Level 1, 240 Chapel Street Prahran,VIC 3181 Web: www.retailcare.com.au
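Most of the formulas listed above come down to one-line arithmetic once the inputs are named. A minimal sketch; the cost, price, COGS, and inventory figures are invented for illustration:

```python
def markup_pct(cost, price):
    """Markup: the price-cost difference, expressed as a percentage of cost."""
    return (price - cost) * 100.0 / cost

def margin_pct(cost, price):
    """Gross margin: the same difference, expressed as a percentage of price."""
    return (price - cost) * 100.0 / price

def inventory_turnover(cogs, average_inventory):
    """Number of times average inventory is sold or used over the period."""
    return cogs / average_inventory

# Illustrative numbers only: an item bought for $50 and sold for $80.
print(markup_pct(50, 80))                   # 60.0 (markup on cost)
print(margin_pct(50, 80))                   # 37.5 (margin on price)
print(inventory_turnover(240_000, 60_000))  # 4.0 turns for the period
```

Note that markup and margin use the same dollar difference but different denominators, which is exactly why the two percentages disagree.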
{"url":"http://www.retailformulas.com.au/","timestamp":"2014-04-18T23:31:26Z","content_type":null,"content_length":"14859","record_id":"<urn:uuid:b0ad7681-e92f-4275-8e8d-71dcec82fe9c>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
Hightstown SAT Math Tutor Find a Hightstown SAT Math Tutor ...My students usually feel much better about the algebra and then they have a solid foundation for future math courses/proficiency tests. "Pre-algebra" skills are the foundation for all high school mathematics. This grouping includes numerical operations, basic geometry, identifying patterns, dat... 10 Subjects: including SAT math, geometry, algebra 2, algebra 1 ...I look forward to working with you/your child! Other Hobbies: I am an avid traveler, and enjoy spending time with my family and friends. I am currently a tutor for a student struggling in Algebra I. The student's grade significantly improved upon beginning tutoring. 26 Subjects: including SAT math, chemistry, physics, calculus I am a graduate of The College of New Jersey with BA in Mathematics and Secondary Education, and am a certified K-12 teacher of mathematics. I have experience with test prep including: SSAT, SAT, ACT, and Praxis Mathematics Content Knowledge. I have tutoring experience with students as young as 5 years old ... 11 Subjects: including SAT math, calculus, geometry, algebra 1 ...He has been a full-time teacher for 2 years, including 2 years as a substitute classroom teacher in Middlesex County for grades K-12. Uri also tutored Spanish to children and adults of all levels, as well as other subjects at Middlesex County College. He loves to cook, watch documentaries, listen to all kinds of music, and traveling. 19 Subjects: including SAT math, Spanish, algebra 2, statistics ...Though I majored in biology and economics, my specialty for years has been tutoring Math -- for high school classes, SAT prep, and GMAT prep. With my help, my students have increased their scores on standardized tests, not only because they better understand the material but also because they be... 14 Subjects: including SAT math, geometry, statistics, Microsoft Excel
{"url":"http://www.purplemath.com/hightstown_sat_math_tutors.php","timestamp":"2014-04-20T21:07:29Z","content_type":null,"content_length":"24043","record_id":"<urn:uuid:9f28e66c-bdea-4fd4-aa31-75d2174aa38b>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Normal Distribution...
April 7th 2009, 10:19 AM #1
Given that the random variable X follows a normal distribution with mean μ and variance σ^2, show that the random variable Z = (X - μ)/σ, that is, a linear transformation of X, has a mean of 0 and variance equal to 1. Thanks for the help!
If you just have to prove that the mean is 0 and the variance is 1:
Then use the fact that $\mathbb{E}(aX+b)=a\mathbb{E}(X)+b$
Use the fact that $\text{Var}(aX+b)=a^2 \text{Var}(X)$
If you want to prove that Z has a normal distribution with mean 0 and variance 1:
Have you ever dealt with characteristic functions? They characterise a distribution. And for a normal distribution, $\psi_X(t)=e^{it\mu-\frac{\sigma^2t^2}{2}}$
$$\begin{aligned}\psi_{\frac{X-\mu}{\sigma}}(t)&=\mathbb{E}\left(\exp\left(it \cdot \frac{X-\mu}{\sigma}\right)\right) \\ &=\mathbb{E}\left(\exp\left(i \cdot \frac{t}{\sigma} \cdot X\right) \exp\left(\frac{-it\mu}{\sigma}\right)\right) \\ &=\exp\left(\frac{-it\mu}{\sigma}\right) \mathbb{E}\left(\exp\left(i \cdot \frac{t}{\sigma} \cdot X\right)\right)\end{aligned}$$
But $\mathbb{E}\left(\exp\left(i \cdot \frac{t}{\sigma} \cdot X\right)\right)=\psi_X\left(\frac{t}{\sigma}\right)=\exp\left(\frac{it\mu}{\sigma}-\frac{\sigma^2t^2}{2\sigma^2}\right)=\exp\left(\frac{it\mu}{\sigma}-\frac{t^2}{2}\right)$
Thus $\psi_{\frac{X-\mu}{\sigma}}(t)=\exp\left(\frac{-it\mu}{\sigma}\right) \exp\left(\frac{it\mu}{\sigma}-\frac{t^2}{2}\right)=\exp\left(\frac{-t^2}{2}\right)$
and this is the characteristic function for a normal distribution with μ=0 and σ=1. That was a little extra.
April 7th 2009, 10:32 AM #2
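The mean-0/variance-1 part of the claim can also be checked numerically: standardizing any data set drives its sample mean to 0 and its population variance to 1, normality aside. A quick sketch in plain Python:

```python
def standardize(xs):
    """Return z-scores z = (x - mean) / sigma, using the population st. dev."""
    n = len(xs)
    mu = sum(xs) / n
    sigma = (sum((x - mu) ** 2 for x in xs) / n) ** 0.5
    return [(x - mu) / sigma for x in xs]

zs = standardize([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
mean_z = sum(zs) / len(zs)
var_z = sum(z * z for z in zs) / len(zs)  # equals the variance since mean_z is 0
print(mean_z, var_z)  # 0.0 1.0
```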
{"url":"http://mathhelpforum.com/advanced-statistics/82724-normal-distribution.html","timestamp":"2014-04-17T12:52:36Z","content_type":null,"content_length":"38666","record_id":"<urn:uuid:9317b13a-0a1b-4c7e-9f48-3c7fc92691c8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
Leland Teschler's Editorial: The Statistics of Dumb Mistakes Back in the 1940s, economist Milton Friedman was part of the war effort. He performed statistical studies on high-temperature alloys for jet engines. Friedman, who eventually won a Nobel Prize in economics, used regression analysis on data about alloy strength versus temperature. His statistics predicted that a couple of as-yet-untried alloys would last about 200 hours, noteworthy because those tried thus far had failed after only about 20 hours. Surprise: When metallurgists cooked up the new alloys, they went to pieces in less than 3 hours. The lesson in Friedman’s experience is that you can’t derive engineering facts from statistics alone. That is a point amplified by Steve Ziliak, an economics professor at Roosevelt University who coauthored a book called The Cult of Statistical Significance. Ziliak is among a number of researchers who warn that statistical significance — given by the student T test and p values — is sometimes misused as a proxy for important scientific results. And as Milton Friedman discovered early in his career, reliance on statistics alone can often lead to astoundingly bad conclusions. Ziliak says confusion about statistical significance is widespread even among researchers who should know better. He reached this conclusion by combing through papers published in a number of prestigious economics, operations research, and medical journals. He found numerous instances of researchers who used statistical significance as if it was the same as correlation. “They confuse the probability measure with a measure of correlation of effect size. But they are two very different things. It is almost embarrassing because it is such an elementary point,” he says. Ziliak’s discovery is much more than just pedantic statistical minutia. In medical research, for example, confusion about significance levels can lead to rejecting good drugs in favor of others that have less oomph. 
“Suppose you have two diet pills which differ only in the size of the effect they have on dieters,” he says. “One pill takes off 20 pounds, plus or minus 10. The other takes off 5 pounds, plus or minus a half pound. Ninety percent of scientists in medicine would choose the second pill because they think its effects are more significant, though the first pill takes off more weight. That’s because the first pill has a signal-to-noise ratio of just two (20/10) while the second pill’s ratio is 10 (5/0.5).” The irony is that researchers interested in losing weight would likely have no trouble picking the pill that was most effective, low signal-to-noise or not. People can effortlessly solve a problem in a social setting but struggle when it is presented as an abstract dilemma. Interestingly enough, engineering research tends to be free of such misconceptions. “One reason engineers didn’t go down this path is that they use Monte Carlo and other types of simulations as well as different quantitative methods that don’t require inferential statistics,” says Ziliak. “And even when engineers do use inferential statistics, their practices have been shaped by people like W. Edwards Deming and other engineers who were around at the birth of modern statistics. Deming in particular saw that significance testing was not going to be relevant for most engineering purposes.” All in all, if you find yourself wondering why the latest economic theories seem to work no better than Milton Friedman’s high-temperature-alloy predictions, consider the possibility they were hatched by someone unable to recognize an effective diet pill from its statistics. — Leland Teschler, Editor
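Ziliak's diet-pill comparison is worth making explicit, because the trap is pure arithmetic. A sketch using the numbers quoted above, treating the plus-or-minus figure as the "noise" for each pill:

```python
# (mean weight loss in pounds, plus-or-minus "noise"), as quoted in the column.
pill_a = (20.0, 10.0)  # large effect, noisy measurement
pill_b = (5.0, 0.5)    # small effect, precise measurement

def signal_to_noise(effect, noise):
    return effect / noise

# Pill B looks far more "significant" (ratio 10 vs. 2)...
assert signal_to_noise(*pill_b) > signal_to_noise(*pill_a)
# ...yet Pill A removes four times as much weight.
assert pill_a[0] == 4 * pill_b[0]
```

The point of the example survives the code: a high signal-to-noise ratio says you measured precisely, not that the effect is worth having.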
{"url":"http://machinedesign.com/print/engineering-education/leland-teschlers-editorial-statistics-dumb-mistakes","timestamp":"2014-04-21T00:51:57Z","content_type":null,"content_length":"17821","record_id":"<urn:uuid:8d3abace-785f-44ea-a1ed-ee5cb58f68ee>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Newton's Three Laws
Newton's Third Law
All forces result from the interaction of two bodies. One body exerts a force on another. Yet we haven't discussed what force, if any, is felt by the body giving the original force. Experience tells us that there is in fact a force. When we push a crate across the floor, our hands and arms certainly feel a force in the opposite direction. In fact, Newton's Third Law tells us that this force is exactly equal in magnitude and opposite in direction to the force we exert on the crate. If body A exerts a force on body B, let us denote this force by F_AB. Newton's Third Law, then, states: F_AB = -F_BA. Stated in words, Newton's third law proclaims: to every action there is an equal and opposite reaction. This law is quite simple and generally more intuitive than the other two. It also gives us a reason for many observed physical facts. If I am in a sailboat, I cannot move the boat simply by pushing on the front. Though I do exert a force on the boat, I also feel a force in the opposite direction. Thus the net force on the system (me and the boat) is zero, and the boat doesn't move. We need some force, like wind, to move the boat. Though this law seems obvious and unnecessary, we will see its importance when we apply Newton's laws. Newton's third law also gives us a more complete definition of a force. Instead of merely a push or a pull, we can now understand a force as the mutual interaction between two bodies. Whenever two bodies interact in the physical world, a force results. Whether it be two balls bouncing off each other or the electrical attraction between a proton and an electron, the interaction of two bodies results in two equal and opposite forces, one acting on each body involved in the interaction. Amazingly enough, Newton's Three Laws provide all the necessary information to describe the motion involved in any given situation. We will soon study the applications of Newton's Laws, but we first need to take care of the units of force.
Units of Force
The unit of force is defined, quite appropriately, as a Newton. What is a Newton in terms of fundamental units? Given an acceleration a = 1 m/s^2 and a mass m = 1 kg, we can find out from Newton's Second Law: F = ma implies that a Newton is N = kg·(m/s^2) = (kg·m)/s^2. Therefore, one Newton causes a one kilogram body to accelerate at a rate of one meter per second per second. Our definition of units becomes important when we get into practical applications of Newton's Laws.
Summary of Newton's Laws
We can now give an equation summary of Newton's Three Laws:
First Law: If F = 0 then a = 0 and v = constant
Second Law: F = ma
Third Law: F_AB = -F_BA
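The unit bookkeeping and the third-law sign convention can be captured in a few lines; a sketch with arbitrary illustrative values:

```python
def force_newtons(mass_kg, accel_m_per_s2):
    """Newton's second law, F = m * a, with F in newtons (kg*m/s^2)."""
    return mass_kg * accel_m_per_s2

# One newton accelerates a one-kilogram body at one meter per second squared.
assert force_newtons(1.0, 1.0) == 1.0

# Third law: the force A exerts on B is the negative of B's force on A.
f_ab = force_newtons(2.0, 3.0)  # A pushes B with 6 N
f_ba = -f_ab                    # B pushes back on A with -6 N
assert f_ab + f_ba == 0.0       # the action-reaction pair sums to zero
```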
{"url":"http://www.sparknotes.com/physics/dynamics/newtonsthreelaws/section3.rhtml","timestamp":"2014-04-17T07:31:19Z","content_type":null,"content_length":"55258","record_id":"<urn:uuid:b38f6c7b-5c10-4c6f-be7d-3b670e06665c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
Even Strength Data for Games 1-500

As a continuation of a previous post, I figured that I'd throw up EV data for the first 500 games of this season. I've included both overall EV data and data on even strength play with the score tied. Empty net goals have been excluded. The first sheet shows data for games 1-500, the second for games 1-252, and the third for games 253-500. I've also included a fourth worksheet that compares each team's corsi ratio with the score tied from the 1st half of the year (games 1-252) to the 2nd half (253-500).

Some comments:

It pains me to admit it as a Habs fan, but Gabe Desjardins is absolutely correct -- the Canadiens look like a terrible team by the numbers. Over the last 250-some games, the Habs have had the worst corsi ratio in the league at EV with the score tied, and by a large margin at that. The scoring chance numbers don't look any better. To make matters worse, the Habs have also been getting bombed on special teams. In the same 250-game period, they've given up almost twice as many shots on the PK as they've accumulated on the powerplay (55 vs. 98). In fairness to Montreal, that has more to do with their league-worst penalty differential than it does with special teams performance per se. In any event, the numbers aren't good. Curiously, the team's underlying numbers were actually quite respectable over the first 250-some games. It'll be interesting to see where they end up.

On the other side of things, the Ducks appear to have improved considerably at EV relative to the first 250 games. While their current record may not be impressive, they're definitely trending in the right direction. If they can find a way to take fewer penalties, they should be able to at least compete for a playoff spot.

Phoenix continues to perform well at EV. The season is almost halfway over at this point and the Coyotes currently have the 2nd best corsi ratio in the league when the score is tied.
This is surprising considering that they were 28th last year in this regard. I'm not entirely sure how to account for their turnaround, although I suspect that it boils down to two things. For one, they've gotten rid of and/or sent to the minors a lot of guys that were really hurting them last year (Turris, Lindstrom, Hale, Fedoruk, Lisin, Porter, and Carcillo). The replacements -- Lombardi, Prucha, Fiddler, Lang, Vandermeer, Aucoin, and Vrbata -- are demonstrably better hockey players. Secondly, I think that the coaching change has likely had an effect as well. Tippett's teams in Dallas were consistently able to outshoot the other guys, both at EV and overall. While it's hard, if not impossible, to quantify his contribution, I think it's safe to say that he's an upgrade over a relatively inexperienced Gretzky.

11 comments:

The annoying part for the Habs is the complete disappearance of their ability to shoot the puck at the opposition's net (and thus drive possession). They actually are in the top ten for shooting %, so you can't say they were unlucky there. I'm surprised at how low a .917 ESsv% ranks them, though.

Fantastic stuff, jlikens. The correlation between EV tied corsi and EV corsi is r = .91. Here are some correlations between the 1st 250 EV tied stats and the 2nd 250 EV tied stats:

Corsi ratio: r = .58 (not bad for such small samples)
Sh%: r = .26
Corsi Sh%: r = .32 (I'm kind of surprised it's that high, but there's too much noise here to conclude anything definitively)
Sv%: r = 0
Corsi Sv%: r = .17

(I'm going to reiterate my stance that all Sh% and Sv% numbers should use total shot attempts as the denominator.) With all this stuff, the samples are just way too small right now, but if you keep posting these updates every 250 games, I think we're gonna learn some cool stuff by season's end.
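For reference, split-half correlations like the r values quoted above are ordinary Pearson correlations computed over the per-team figures. A minimal sketch (the team values below are invented placeholders, not the actual data):

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical corsi ratios for a handful of teams,
# first half of the season vs. second half.
first_half = [0.54, 0.48, 0.51, 0.45, 0.57, 0.50]
second_half = [0.52, 0.47, 0.53, 0.46, 0.55, 0.49]
r = pearson_r(first_half, second_half)
```

Run over all 30 teams' first-half and second-half values, this is the kind of number being quoted above.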
it's interesting because using the 08-09 EV tied data, i get the following correlations between first half of the season to second half of the season

corsi ratio: r= .75
corsi Sh%: r= .1
corsi Sv%: r= .19

Vic found similar numbers iirc, and his study was more in-depth.

also, if we consider special teams ratio to be (PP corsi for)/(PK corsi against), correlation between 1st 250 games STR and 2nd 250 games STR is r = .52

Yeah, I too noticed that there was a correlation between the 1st and 2nd halves of the sample in terms of EV SH% with the score tied. I also noticed that the standard deviation for the entire sample was 0.0195, which seemed a little high to my eye. I decided to do some simulations (50 simulations, to be exact) in order to determine what one would expect to see on the basis of chance alone.

The results:

Average STDEV: 0.01586
# of Seasons where STDEV was greater than 0.0195: 4/50 (8%)

So there's roughly an 8% chance that the standard deviation would be that wide or wider by chance alone. This, of course, differs from what was observed in 0708 and 0809, where the STDEV in some 55-60% of the simulated seasons was broader than the actual STDEV. Of course, we're not even halfway through the season yet, so I'll refrain from concluding anything at this point.

In terms of using total shot attempts in calculating SH and SV%, I agree that that's probably an improvement over the current method, what with the home recording biases and what not. I plan to look at the subject of shot recording bias in greater detail in the near future in order to determine which rinks are truly undercounting or overcounting shots.

Yeah, it's definitely frustrating to watch. The Habs haven't iced a team that outshot the opposition with the score tied at EV since 2003-04. And before that, you'd likely have to go back as far as 1997-98. I was hoping that would change this year, but it's just not going to happen.

what happens if you run that sim for ev tied corsi Sh%?
i'm seeing an observed stdev of .0108

is that what you see too?

Yeah, I get the same observed value of 0.0108.

Simulation Results (50 simulations):
Average Expected STDEV: 0.0083

In none of the 50 seasons did the expected STDEV exceed the observed value of 0.0108. The greatest expected STDEV in any of the simulated seasons was 0.01075.

Phoenix's improvement is surprising to me. But that's because I think I (we?) discount defensive skill too much, or just do a terrible job of evaluating it. I knew Fiddler and Aucoin would help, but losing Sauer after one game and replacing him with Jovanovski should have hurt. But generally, I think I agree with you - Phoenix added a bunch of players who actually play defense, plus Vrbata, who has played well, and everything has turned out way better.

You're right in that defensive ability at the player level is poorly understood and difficult to quantify. Vic had a great post from a while back that examined the effect of individual players on EV shooting and save percentage. He found that while individual players were able to affect their on-ice EV shooting percentage to some degree, the same was not true of save percentage. More specifically, there was no difference between 1st, 2nd, and 3rd pairing defensemen in terms of on-ice EV save percentage. Of course, the fact that 1st pairing D-men play against tougher competition (i.e. forwards with better on-ice shooting percentages) than lower-pairing D-men implies that there is some effect, just not much. That being the case, it seems that the best way for individual players to prevent even strength goals against their own team -- at least over the long run -- is through keeping the puck out of their own zone. The Coyotes have definitely excelled in that regard thus far.
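The chance-only simulation discussed in the comments can be sketched roughly like this (a minimal Python sketch; the team count, shot totals, and league shooting percentage below are placeholder assumptions, not the actual data):

```python
import random

def simulated_sh_stdev(n_teams=30, shots_per_team=500, league_pct=0.08):
    """One luck-only 'season': every team shoots at the league-average
    percentage, so any spread in team Sh% is chance alone."""
    pcts = []
    for _ in range(n_teams):
        goals = sum(1 for _ in range(shots_per_team) if random.random() < league_pct)
        pcts.append(goals / shots_per_team)
    mean = sum(pcts) / n_teams
    return (sum((p - mean) ** 2 for p in pcts) / n_teams) ** 0.5

# Compare the observed spread against 50 luck-only seasons, mirroring the
# 50-simulation comparison described above.
observed = 0.0195
sims = [simulated_sh_stdev() for _ in range(50)]
wider = sum(s > observed for s in sims)
print(f"{wider}/50 luck-only seasons had a wider spread than observed")
```

The fraction of simulated seasons with a wider spread than the observed one is the rough "chance alone" probability quoted in the thread.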
A Sudoku challenge generator

When on holiday, one can kill time solving Sudoku puzzles. When one has done a dozen of those puzzles and one happens to have a wandering mind like mine, one starts wondering how those Sudoku challenges are created, and if it would be possible to describe an algorithm that can make such a scarcely filled-in 9-by-9 grid. Some sunny hours later one has a system that might work (I haven't implemented it fully yet). For my future reference: here's how I would do it.

REMARK: this algorithm is quite logical and as such, I seriously doubt I would be the first one to think of it. I can imagine that Sudoku puzzles are already made by the hundreds with a program that uses this or a quite similar system. I'm not claiming it's an original 'invention', just a fun problem to tackle.

Step 1: take a good root grid

Let's start with a completely valid, fully filled-in Sudoku grid. Any one would do; I take the one that has 1-2-…-9 in the top row, and in the top left 3×3 square.

If you're not familiar with these puzzles: for a 9×9 grid to be a valid Sudoku grid, the following 3 requirements should be fulfilled:
1. for each row: every number from 1 to 9 should occur exactly once
2. for each column: every number from 1 to 9 should occur exactly once
3. for each 3×3 square with a thicker border (there are 9 of them): every number from 1 to 9 should occur exactly once

Step 2: shuffle the grid around

There are a couple of transformations that we can apply on a Sudoku grid while still keeping it in a valid state. They are:
• swapping two rows in the same group: when you swap 2 rows within the same 'group' (within the 3×3 borders), the Sudoku requirements remain fulfilled (I won't include a formal proof, but trust me on this one).
• swapping two columns in the same group: the vertical version of the previous one.
• swapping two groups of rows: the same idea applied to whole groups of three rows.
• swapping two groups of columns: the vertical version of the previous one.
There are maybe other, more complex, transformations, but these will take you a long way.
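To make steps 1 and 2 concrete, here is a minimal Python sketch (the function names are mine; the `level1_solvable` helper previews the check described under step 3 below):

```python
import random

# Step 1: a root grid with 1..9 in the top row and in the top-left square.
root = [[(3 * (r % 3) + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]

def swap_rows_in_band(grid, band, r1, r2):
    """Swap two rows inside the same group of three; validity is preserved."""
    a, b = band * 3 + r1, band * 3 + r2
    grid[a], grid[b] = grid[b], grid[a]

def swap_column_stacks(grid, s1, s2):
    """Swap two whole groups of three columns; validity is preserved."""
    for row in grid:
        for i in range(3):
            row[s1 * 3 + i], row[s2 * 3 + i] = row[s2 * 3 + i], row[s1 * 3 + i]

def shuffle(grid, steps=200):
    """Step 2: apply a random sequence of validity-preserving swaps."""
    for _ in range(steps):
        if random.random() < 0.5:
            swap_rows_in_band(grid, random.randrange(3), *random.sample(range(3), 2))
        else:
            swap_column_stacks(grid, *random.sample(range(3), 2))

def level1_solvable(grid, r, c):
    """Step-3 'level-1' check: would grid[r][c] be forced back by the numbers
    already present in its row, column and 3x3 square alone? (0 = erased)"""
    seen = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = r - r % 3, c - c % 3
    seen |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    seen -= {grid[r][c], 0}
    return len(set(range(1, 10)) - seen) == 1
```

Erasing would then repeatedly pick a random filled cell, test `level1_solvable`, and set the cell to 0 only if the test passes.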
Maybe someone could prove that with these base operations all possible Sudoku grids can be constructed, or maybe on the contrary, that some combinations can never be reached. We don't really care, as long as we can apply a random sequence of the above transformations to get a grid that seems kind of random but is still valid. You can compare this 'shuffling' with the Rubik cube: you get it in the virgin state and then you shuffle it around to get a pseudo-random start situation.

Step 3: erase a number of cells

This was the more tricky part. How many cells do I erase, and how do I make sure that the remaining challenge is still solvable, with only 1 solution?

1. number of cells to erase: this is one of the most important factors that define the difficulty level of a Sudoku puzzle. There are 81 cells in a full grid; the 'easy' puzzles have around 30-35 remaining cells, the intermediate 25-30, the difficult ones 20-25. These are indications based on real-life examples, not some kind of official law. One could probably make a 30-cell challenge that is unsolvable, or a 22-cell challenge that is considered 'easy'. But let's take these numbers as a guideline.

2. random approach: just erase 50 (easy) to 60 (difficult) cells at random. Very easy, but it is possible to make grids that are unsolvable. Consider e.g. a challenge with only the bottom 3 rows filled in (i.e. 27 remaining cells). There's no single solution to that.

3. random with simple limits: e.g. take as a limit that only N rows or columns may be empty. Taking N = 0 for easy challenges and N = 1 for difficult challenges could be a safe strategy. I'm not sure this is safe enough, so I made something even more refined.

4. random with level-1 check: this would only erase the cell if it could be solved by a level-1 strategy. What do I call "level-1"? That is: if applying the sudoku rules (rows/columns/squares) only leaves 1 possible solution for that cell. To calculate this, you just take the nine possible values for that cell and strike out all numbers that already appear in that row, that column and that square. If you're left with only one possible number, that's a level-1 solution. One remark on this: the first couple of cells you check will always be level-1 solvable (if there are already eight occurrences of e.g. '5' in the grid, the ninth is always level-1 solvable). As the grid becomes emptier and emptier, some cells will not meet the level-1 requirement and will not be erased. There is a point somewhere where no more cells can be erased, but I'm not really sure where that point will be, and how variable it is (is it e.g. always between 20 and 30? Is it always 27?) I haven't built a prototype yet so I can't say. If this point would be too high (we want to make a 'difficult' challenge and we can't erase any cell anymore to get below 25), we might need a 2nd round of erasing that does not use the level-1 check.

5. random with level-2 check: level 2 is where you need to make suppositions about a second cell in order to find the solution for the given cell. I'm not going into details (nor do I have all the details :-) ).

6. row-per-row / column-per-column / square-per-square erasing: instead of jumping at random in the grid, check a (random) cell for each row sequentially, going from 1 to 9 and start over. This can help to make the distribution of remaining cells more even (e.g. every square has 2 - 4 cells left). The 'simple' or 'level-1' checks can be used like described above.

7. number-per-number erasing: instead of jumping at random in the grid, check a (random) occurrence for each number sequentially, going from 1 to 9 and start over. This can help to make the distribution of remaining numbers more even.
The 'simple' or 'level-1' checks can be used like described above.

The only way to make sure that the resulting algorithm actually works would be to prove it mathematically (I don't feel that's something I would want to do - certainly not step 3) or to build a proof of concept and let it run a solid number of test challenges (I don't have that much free time now). If anyone has better suggestions, don't hesitate to leave a comment.

15 Responses to A Sudoku challenge generator

1. I enjoyed the article, please post again if you attempt to put this into code. I don't think you could generate every puzzle from step 2, but I would love to see a proof of concept if you could make it work.

2. I enjoyed it too. If you would like another sort of challenge try SHENDOKU. I loved the idea of playing with 2 or more friends. http://www.shendoku.com. Let me know what you think…

3. Thanks for your ideas. I'm currently trying to implement this in Python. So far I have only completed the various shuffling techniques you described, plus reflection. I think erasing squares will probably be hardest to implement. Anyway I just wanted to thank you again for doing all the thinking for me :)

4. Can I use the algorithm in my project?

5. I have implemented an Excel macro for this algorithm, although my removal method is a little different. The problem with step 2 is that it allows for a repeating pattern with respect to column for each sub grid, as can be seen from your example: '2 5 8', '3 6 9', '1 4 7' will always be in the same column for each sub grid. I am still looking for an algorithm to break that.

6. I recently read an article about sudoku in the newspaper. From a mathematical point of view it seemed a lot more complicated than I had expected. It is e.g. still unknown to mathematicians how many sudoku challenges there are. Remember here that a true sudoku has one and only one solution, as mentioned above!
The article mentioned that there was one mathematician who thought that you could have a sudoku with 16 remaining cells and a unique solution. It remains to be proven though. I believe the best any mathematician had done so far was discover a sudoku with 17 remaining cells and only two solutions.

7. Enjoyed reading your article, but I have a different algorithm to generate valid filled sudokus, not restricting the shuffling to rows and columns. And one more thing: what's that symmetry in sudoku all about?

8. Cool. Just wondering if step 2 can produce all possible puzzles? If not, what's the percentage? Actually, I think it should be able to produce all possible puzzles.

9. Actually, I think swapping rows and columns can implement transposing already.

10. In addition to methods of swapping rows/columns to generate puzzles, you should also be able to swap individual numbers globally. For example, replace every 7 with a 5, and replace every 5 with a 7.
If anyone fancies a look, I’ve used an entirely different approach in my Javascript sudoku generator: In essence, it fills in as much as it can, and then solves the puzzle to get the remaining numbers. Best Wishes, 13. Hi Simon, your article is really good. Its so simple and crisp, yet so complete. Thank you so much for sharing this article. 14. I am sorry. I mistook Simon as the author of this blog article. My first note was to the author. Simon, you idea is also very good and innovative. This entry was posted in science. Bookmark the permalink.
GSP Lessons . . . a work in progress . . .

GSP Lesson 1. Given two lines and a point "between" them. Construct all circles through the point and tangent to each of the two lines. The case of intersecting lines is shown here.

GSP Lesson 2. Find the shortest path between two points on opposite sides of the river when crossing the river must be done on a path perpendicular to the banks.

GSP Lesson 3. Given two circles of different radius that intersect. If E is one point of intersection, construct a line through E that cuts off chords of equal length in the two circles.

GSP Lesson 4a. What is the locus of the midpoint of a line segment of varying length where one end is fixed and the other end moves around a circle?

GSP Lesson 4b. What is the locus of the midpoint of a line segment of varying length where one end is fixed and the other end moves around a triangle? Generalize to movement around any closed path.

GSP Lesson 4c. What is the locus of the midpoint of a line segment of varying length where each end of the segment moves around a circle?

GSP Lesson 5. Given two points A and B on the same side of a line k. If C is a point on k, construct the location of C so that AC + CB is a minimum.

GSP Lesson 6. If the base and area of a triangle are fixed, find the triangle with minimal perimeter.

GSP Lesson 7a. Take any parallelogram and construct squares externally on each side. Prove that the centers of the four squares are the vertices of a square. Show that the area of this square is always greater than or equal to twice the area of the parallelogram. When is it twice the area?

GSP Lesson 7b. Take any parallelogram and construct squares toward the inside of the parallelogram on each side. Prove that the centers of the four squares are the vertices of a square. Is there a relationship of the area of this square to the area of the parallelogram?

GSP Lesson 8. Given three line segments that are the lengths of the distances of a point E from the vertices A, B, and C of an equilateral triangle.
Construct triangle ABC. What if E was a point outside the triangle?

GSP Lesson 9. Construct a triangle of minimal perimeter inscribed in a given acute triangle.

GSP Lesson 10. In an equilateral triangle ABC, let D be the mid-point of AB and E be the mid-point of AC. Extend DE to intersect the circumcircle at point P. Determine the ratio PC/PA. Determine the ratio DE/

GSP Lesson 11. Construct a circle with center O having perpendicular diameters AB and DC. Take the midpoint M of OC and construct an arc with center at M through A. The arc intersects OD at N. Investigate ON/DN. Show that AN is the length of the side of a regular pentagon inscribed in the circle with radius OA (i.e., construct the inscribed pentagon, . . . and investigate).

GSP Lesson 12. (Fixed Angle) What is (construct) the locus of the vertex of a fixed angle that is moved such that its sides always subtend a fixed segment AB? That is, given an angle of a specific measure, place the angle so that its two sides always touch points A and B of the segment.

GSP Lesson 13. Rectangle circumscribed about an ellipse. Open first GSP file. Open second GSP file. Run the animations in these files and explore. They should suggest at least the following theorem: Prove that the vertices of a rectangle circumscribed about an ellipse will lie on a circle. Determine its center and radius with respect to the ellipse.

GSP Lesson 14. Problem: For a circle with diameter FB construct a tangent at G. Select points A and B on the circle and construct the tangents to the circle at A and B. Let P be the intersection of these latter two tangents. Construct rays FA, FP, and FB with respective intersections C, D, and E with the tangent at G. Show that CD = DE. What restrictions must be placed on the locations of A and B?

GSP Lesson 15. Given a triangle ABC with its circumcircle. Construct a circle tangent to AB, AC and the circumcircle. (There may be two of them.)
how to repeated evaluation of the recurrence relation

May 2nd 2011, 12:07 PM

Test the answer by finding the value of $u_{4}$ in 2 different ways: by substitution in your solution and by repeated evaluation of the equation.

by substitution I have found u2=27, u3=111 and u4=447

I'm wondering how to find these values by repeated evaluation of the equation?

Moderator edit: Please use tex tags NOT math tags. See Questions etc. subforum.

May 2nd 2011, 12:18 PM

Find the solution to the equation Ur+1 = 4Ur + 3, with U1 = 6.

Test your answer by finding the value of u4 in 2 different ways: by substitution in your solution and by repeated evaluation of the equation.

sorry about the equation problem

May 2nd 2011, 12:37 PM

I think you are trying to solve the difference equation $u_{r+1}-4u_{r}=3$. (I think there is a typo and you are missing the +1.)

So to solve we do it in two parts. First we solve the homogeneous equation: we use the "guess" $u_{r}=n^{r} \implies u_{r+1}=n^{r+1}$. Plugging this in gives

$n^{r+1}-4n^{r}=0 \iff n^{r}(n-4)=0$

This gives $n=4$, so the solution has the form

$u_{r}=C\cdot 4^{r}$

Since the right hand side is a constant, we guess that the particular solution is also a constant $A$. Plugging $u_{r}=C4^{r}+A$ in gives

$C4^{r+1}+A-4(C4^{r}+A)=3 \iff A-4A=3 \iff A=-1$

So the solution is

$u_{r}=C\cdot 4^{r}-1$

Now using the initial condition gives $u_{1}=4C-1=6 \iff C=\frac{7}{4}$. So the solution is

$u_{r}=\frac{7}{4}\cdot 4^{r}-1=7\cdot 4^{r-1}-1$

Now if we plug in we get $u_{4}=7\cdot 4^{3}-1=447$

May 2nd 2011, 12:42 PM

Look at this.

May 2nd 2011, 01:51 PM

thanks so much for your super clear explanation!!!
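As a quick numerical check of both methods discussed above (a small Python sketch; the function names are mine):

```python
def u_closed(r):
    """Closed-form solution: u_r = (7/4) * 4**r - 1 = 7 * 4**(r-1) - 1."""
    return 7 * 4 ** (r - 1) - 1

def u_iter(r):
    """Repeated evaluation of u_{r+1} = 4*u_r + 3, starting from u_1 = 6."""
    u = 6
    for _ in range(r - 1):
        u = 4 * u + 3
    return u

# Both methods agree for the first four terms.
for r in range(1, 5):
    assert u_closed(r) == u_iter(r)
print([u_iter(r) for r in range(1, 5)])  # [6, 27, 111, 447]
```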
Acceleration of Early-Photon Fluorescence Molecular Tomography with Graphics Processing Units

Computational and Mathematical Methods in Medicine, Volume 2013 (2013), Article ID 297291, 9 pages

Research Article

^1Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China
^2Center for Biomedical Imaging Research, Tsinghua University, Beijing 100084, China

Received 20 December 2012; Accepted 2 March 2013

Academic Editor: Wenxiang Cong

Copyright © 2013 Xin Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Fluorescence molecular tomography (FMT) with early photons can improve the spatial resolution and fidelity of the reconstructed results. However, its computing scale is always large, which limits its applications. In this paper, we introduce an acceleration strategy for early-photon FMT with graphics processing units (GPUs). According to the procedure, the whole solution of FMT is divided into several modules and the time consumption of each module is studied. In this strategy, the two most time-consuming modules (the G_d and W modules) are accelerated with the GPU, respectively, while the other modules remain coded in Matlab. Several simulation studies with a heterogeneous digital mouse atlas were performed to confirm the performance of the acceleration strategy. The results confirmed the feasibility of the strategy and showed that the processing speed was improved significantly.

1. Introduction

Fluorescence molecular tomography (FMT) is a promising imaging technique for small animals that allows visualization of 3D distributions of fluorescent biomarkers in vivo [1, 2].
However, significant challenges remain in FMT because the high degree of light scatter in biological tissues results in an ill-posed image reconstruction problem and consequently reduces the spatial resolution [3]. Considering this point, the time-gated technique has been proposed, which utilizes only "early-arriving" photons that have experienced few scattering events, so as to exclude the large number of diffuse photons. To date, a number of groups have validated that, with the time-gated detection technique, the spatial resolution and fidelity of the reconstructed results can be improved [3-5].

For the reconstruction of FMT using early photons, there are several feasible algorithms, such as the filtered back-projection method and schemes based on the time-resolved diffusion equation (DE), the time-resolved telegraph equation (TE), and the second-order cumulant approximation of the radiative transport equation (RTE) [3-7]. Among them, the method based on the time-resolved DE is the most widely used, for its simplicity. However, compared with continuous-wave FMT (CW-FMT), time-domain FMT (TD-FMT) costs more computation time because of the added time dimension. Generally, solving TD-FMT takes tens of minutes to hours, and there are no efficient schemes for its acceleration at present.

Fortunately, the high-speed development of graphics processing unit (GPU) technology points a direction for the acceleration of the TD-FMT solution. The highly parallel structure of the GPU makes it more effective than the central processing unit (CPU) for a range of algorithms with parallelizable floating-point operations. However, programming on GPUs had been difficult until the compute unified device architecture (CUDA) was proposed in 2006 [8]. CUDA comes with a software environment that allows developers to use C as a high-level programming language. Utilizing CUDA-enabled GPUs, parallel acceleration algorithms have been studied in the field of fluorescence tomography.
Fang and Boas reported a parallel Monte Carlo algorithm accelerated by GPU for modeling time-resolved photon migration in arbitrary 3D turbid media [9]. Zhang et al. implemented acceleration of an adaptive finite element framework for bioluminescence tomography with the CUBLAS and CULA libraries [10]. However, to date, CUDA-enabled GPU technology has not been utilized to solve TD-FMT. In this paper, we introduce an acceleration strategy for early-photon FMT. The time consumption of each module is studied to confirm the necessity of GPU acceleration. In the strategy, the two most time-consuming modules (the G_d and W modules) are accelerated with the CUDA language, respectively, and the other modules are coded in Matlab. Several simulations with a heterogeneous digital mouse atlas were performed to evaluate the performance of the acceleration strategy.

The paper is organized as follows. In Section 2, the forward and inverse models based on TD-FMT are illustrated in detail. Numerical simulations with fluorescence targets embedded in a 3D mouse model are carried out. In Section 3, simulation results are shown and analyzed. Finally, we discuss the results and conclusion in Section 4.

2. Materials and Methods

2.1. Time-Domain Diffusion Equation and Finite Element Method

The radiative transfer equation (RTE) is considered the most accurate model for describing the process of photon propagation in biological tissues. However, because the RTE is computationally expensive, the diffusion approximation of the RTE is commonly used. Thus, photon propagation for FMT can be modeled with the coupled time-domain DEs as follows [7]:

$$\nabla\cdot\big(D(\mathbf{r})\nabla\Phi_{x,m}(\mathbf{r},t)\big)-\mu_a(\mathbf{r})\,\Phi_{x,m}(\mathbf{r},t)-\frac{1}{c}\frac{\partial\Phi_{x,m}(\mathbf{r},t)}{\partial t}=-S_{x,m}(\mathbf{r},t),$$

with the excitation source $S_x(\mathbf{r},t)=S(\mathbf{r},t)$ and the emission source $S_m(\mathbf{r},t)=\Phi_x(\mathbf{r},t)\otimes\big[\eta(\mathbf{r})\,e^{-t/\tau}/\tau\big]$, where $\Phi_{x,m}(\mathbf{r},t)$ denotes the photon density for excitation and fluorescence light, respectively. $S(\mathbf{r},t)$ provides the impulse light source. $\mu_a(\mathbf{r})$ is the absorption coefficient and $\mu_s'(\mathbf{r})$ is the reduced scattering coefficient. $D(\mathbf{r})$ is the diffusion coefficient defined by $D=1/[3(\mu_a+\mu_s')]$.
As the excitation and emission wavelengths are close to each other, the optical properties are assumed to be identical at both wavelengths for simplification. The fluorescent targets are described by the fluorescence yield distribution $\eta(\mathbf{r})$, and $e^{-t/\tau}/\tau$ is the lifetime function with lifetime $\tau$. $\otimes$ is the temporal convolution operator. $c$ is the speed of light in the medium.

To solve these equations, Robin boundary conditions are implemented on the boundary $\partial\Omega$ of the region [7]:

$$\Phi_{x,m}(\mathbf{r},t)+2qD(\mathbf{r})\,\frac{\partial\Phi_{x,m}(\mathbf{r},t)}{\partial\mathbf{n}}=0,\quad\mathbf{r}\in\partial\Omega,$$

where $\mathbf{n}$ denotes the outward normal vector of the boundary. The coefficient $q$ takes into account the refractive index mismatch between both media.

Based on the first-order Born approximation, the fluorescence signal measured at a detector point $\mathbf{r}_d$ for an impulsive excitation at source position $\mathbf{r}_s$ at time $t$ can be written as

$$\Gamma_m(\mathbf{r}_s,\mathbf{r}_d,t)=\int_\Omega W(\mathbf{r}_s,\mathbf{r}_d,\mathbf{r},t)\,\eta(\mathbf{r})\,d\mathbf{r}.\quad(3)$$

The weight matrix is described as

$$W(\mathbf{r}_s,\mathbf{r}_d,\mathbf{r},t)=G_x(\mathbf{r}_s,\mathbf{r},t)\otimes\frac{e^{-t/\tau}}{\tau}\otimes G_m(\mathbf{r},\mathbf{r}_d,t),\quad(4)$$

where $G_x$ and $G_m$ are Green's functions of excitation and emission ($G_s$ and $G_d$ in short). In addition, for an isotropic impulse source, $G_x$ is equal to $G_m$. In order to reduce the influence of heterogeneity, the normalized Born approximation [11] is employed as follows:

$$\Gamma_{nB}=\frac{\Gamma_m}{\Gamma_x},$$

where $\Gamma_{nB}$ is the normalized Born approximation of the measurement.

By utilizing the standard Galerkin FEM, the object is discretized into $N$ mesh nodes, and the time is approximated with a sequence of time points with a time interval $\Delta t$. Then, Green's functions can be derived by time stepping:

$$A\,G^{(k+1)}=B\,G^{(k)}+S^{(k)},$$

where $A$ and $B$ are $N\times N$ matrices with the same expression as given in [7, 12, 13], but the source term differs in form. At last, (3) is converted into the following matrix-form equation:

$$W\,\boldsymbol{\eta}=\Gamma_{nB}.\quad(9)$$

Then the unknown fluorescence distribution at different time gates is obtained by solving the linear equation (9) using the algebraic reconstruction technique (ART) with nonnegative constraints.

2.2. GPU Acceleration Strategy

2.2.1. The Flow Chart of the Acceleration Strategy

For the whole procedure, there is a large amount of matrix operations, which are suitable for parallel acceleration by GPU.
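The reconstruction step of Eq. (9), ART with a nonnegativity constraint, can be sketched as follows (a minimal NumPy sketch; the relaxation value and problem sizes are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def art_nonneg(W, b, n_iter=100, relax=0.1):
    """Solve W x = b with a Kaczmarz-style ART sweep over the rows,
    projecting onto x >= 0 after each row update."""
    x = np.zeros(W.shape[1])
    row_norms = np.sum(W * W, axis=1)
    for _ in range(n_iter):
        for i in range(W.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = b[i] - W[i] @ x
            x += relax * residual / row_norms[i] * W[i]
            np.maximum(x, 0, out=x)   # nonnegative constraint
    return x
```

For a consistent system, repeated sweeps drive the residual of every row toward zero, while the projection keeps the reconstructed fluorescence distribution nonnegative.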
However, besides the matrix operations, there are still some other operations, such as parameter configuration and mesh discretization, which are not suitable for GPU acceleration. Therefore, these parts are implemented in Matlab for programming flexibility. The execution flow chart of the whole algorithm is shown in Figure 1. The main program, which contains the parts unnecessary to accelerate, is executed in Matlab. The G_d and weight matrix accelerations, which need to be performed by the GPU, are coded into subroutines so as to be called by the Matlab program. For the G_d acceleration, because the CUBLAS library is used in the subroutine, which can be recognized by the C compiler, "Matlab executable" (MEX) technology is available for the interface between the Matlab program and the acceleration. As to the weight matrix acceleration, the CUDA language is used in the subroutine and thus NVMEX technology is utilized as the interface. Details about the acceleration algorithms and the NVMEX technology are illustrated in the next subsections.

2.2.2. G_d Acceleration

In the calculation procedure, the module to solve G_d is time consuming because a matrix inversion should be performed for each detector at each time node. Although the method to solve G_d is similar to that of G_s, the number of light sources is much smaller than the number of detectors. As a result, the time consumption of G_s is so little that it is unnecessary to accelerate it. Matrix inversion at large scale is computationally complex and there are no effective methods for this problem. Fortunately, the matrices that need to be inverted for each detection point and each time node are the same. Therefore, the inversion of the matrix can be calculated in advance, and thus the inversion operations can be converted into multiplication operations, which can be accelerated by GPU more efficiently.
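This inversion-to-multiplication idea can be sketched as follows (a NumPy stand-in for the CUBLAS version; the matrix names and sizes are illustrative assumptions, not the paper's):

```python
import numpy as np

# The system matrix is identical for every detector and every time node,
# so its inverse can be computed once, up front.
n_nodes, n_detectors, n_time = 200, 64, 10
rng = np.random.default_rng(0)
A = np.eye(n_nodes) + 0.01 * rng.standard_normal((n_nodes, n_nodes))
A_inv = np.linalg.inv(A)       # one-time cost, reused everywhere below

# Time stepping then reduces to dense matrix multiplications, which map
# well onto the GPU: one multiply advances all detectors by one time node.
G = rng.standard_normal((n_nodes, n_detectors))  # one column per detector
for _ in range(n_time):        # time nodes must stay sequential
    G = A_inv @ G              # replaces one solve per detector per node
```

Note that the columns (detectors) are independent within a time step, while the time loop itself remains sequential, matching the dependence of node k+1 on node k described below.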
CUBLAS is an implementation of basic linear algebra subprograms (BLAS) and the “CU” stands for CUDA [10]. The multiplication operations during solving can be implemented by using the CUBLAS library. Furthermore, it can be found that for each detector is irrelevant and can be parallel computing. However, for different time nodes, cannot be calculated simultaneously because the calculation of the ()th time node of depends on the th time node of . Therefore, we can calculate for all of the detectors for one time node at a time. At last, the structure of the whole should be changed in order to solve the weight matrix conveniently. 2.2.3. Weight Matrix Acceleration As mentioned in (4), to solve the weight matrix, time convolution of several matrices should be calculated. Because the number of source-detector (sd for short) pairs is large and the size of or for each point and each time node is large, the whole procedure of solving the weight matrix is time consuming. CUDA language is adopted for the acceleration algorithm of solving the weight matrix. Figure 2 shows the principle of the acceleration algorithm. and is Green’s function for one source or detector for all the time nodes. The row stands for different mesh nodes and the column stands for different time nodes. It can be found that data of each row is irrelevant and only time convolution is calculated. Thus, data of each row can be distributed into different threads; therefore they can be implemented simultaneously. In this paper, the number of threads contained in each block is configured 256. The total block number is configured according to the row number of the matrix. Texture memory is used to load the matrix of , , and E because it can accelerate the data visiting speed with its cache. 2.2.4. 
2.2.4. NVMEX Technology

Because Matlab executes more slowly than C or Fortran, time-consuming subroutines are usually written in C or Fortran and compiled into binary MEX-files, which the Matlab interpreter can load and execute. Subroutines written in the CUDA language, however, cannot be compiled into MEX-files directly, because conventional C or Fortran compilers do not recognize CUDA. This problem is solved by the NVMEX technology ("NV" stands for NVIDIA), which connects Matlab and the CUDA language conveniently and efficiently: the CUDA code is compiled into MEX-files by the "nvcc" compiler and then called by the Matlab interpreter.

2.3. Experimental Setup

Numerical experiments are performed to validate the performance of the acceleration strategy. The synthetic measurements are generated based on a free-space, time-gated FMT system, depicted schematically in Figure 3(a). The excitation source is an ultrafast laser emitting pulses of approximately 1 ps. The imaged mouse is suspended on a rotation stage, and the laser beam is steered onto the surface of the mouse by a pair of galvanometer-controlled mirrors. The transmitted light is detected by a high-speed intensified CCD (ICCD) on the side opposite the excitation [7]. In this simulation study, a 3D mouse atlas is employed that provides both the complex surface and the anatomical information [14]. The numerical simulations are based on the mouse chest region, so only the torso from the neck to the bottom of the liver, as shown in Figure 3(b), is selected, with a height of 3 cm. In the simulations, the mouse is suspended on the rotation stage and the rotation axis is taken as the vertical axis. The mouse is rotated over 360° in 60° increments, so the collected data consist of 6 projections.
The projection number is limited to 6 because the computational size of the whole program grows as the projection number increases, and the memory consumption would otherwise exceed the limit of the computer. As shown in Figure 3(c), the field of view (FOV) of the detection with respect to each excitation source is 120°. A cylindrical fluorescent target with a height of 0.2 cm and a radius of 0.1 cm is located at (−0.31, −0.02, 1.93), indicated by the blue circle in Figure 3(c). The simulations are performed in a heterogeneous mouse model: the absorption coefficients and reduced scattering coefficients shown in Table 1, calculated based on [15], are assigned to the heart, lungs, and liver to simulate photon propagation in biological tissues.

In order to evaluate the acceleration performance of the strategy, 6 simulated cases were performed. The configurations of these cases were identical except for the numbers of discretized mesh nodes and detectors. The excitation and emission intensities on the surface of the mouse model were calculated in advance by a forward simulation program. For each case, the reconstructed fluorescent distribution at the time node of 300 ps was taken as the early-photon result. For the reconstruction, a fixed ART relaxation parameter was used and the number of iteration steps was 100.

The programs are run on an Intel(R) Core(TM) i7-2600 CPU (3.4 GHz) platform with 16 GB of memory. An NVIDIA GeForce GTX 460 graphics card with 336 cores is used for the acceleration strategy, with CUDA version 4.0. The reference programs are run in Matlab 2008 and COMSOL Multiphysics 3.5 (COMSOL Inc., Stockholm, Sweden).

3. Results

3.1. The Necessity of the GPU Acceleration

For the simulated cases, the time consumption of each module in Matlab is shown in Table 2. The whole program is divided into 6 modules, among which the T4 and T5 modules are suitable for GPU-enabled acceleration.
(In fact, the T3 module also consists of matrix operations. However, the time consumed by T3 is so short compared with the whole program that it does not need to be accelerated.) T4 and T5 are clearly time consuming compared with the other modules. To study the time occupancy quantitatively, we define the time percentage of a module as the ratio of its runtime to the total runtime of each case. The time percentages of the Green's-function (T4) module, the weight-matrix (T5) module, and the two combined are shown in Figure 4. For every case, the combined percentage is more than 95%, so we conclude that GPU acceleration is necessary.

3.2. Speedup Performance of the Acceleration Algorithms

For the 6 simulated cases, the fluorescent target is reconstructed by the Matlab program and by the GPU acceleration strategy, respectively, and the time consumption of each module is recorded for both methods. The speedup ratios of the Green's-function acceleration algorithm, the weight-matrix acceleration algorithm, and the whole acceleration strategy are then studied in turn.

The time consumption of the Green's-function module by both methods, together with the resulting speedup ratios, is shown in Table 3. The speedup ratios decrease as the number of mesh nodes increases. The main reason is that the acceleration algorithm computes the Green's function for all the detection points one time node at a time, so the resulting data must be rearranged into another form to suit the subsequent weight-matrix calculation. The memory needed for this step is large, and as the scale increases it can exceed the physical memory of the computer, which leads to extra time consumption.

The time consumption of the weight matrix by both methods and the corresponding speedup ratios are shown in Table 4. The speedup ratio of the weight-matrix acceleration is more than 25, which is higher than that of the Green's-function acceleration algorithm.
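Amdahl's law, though not invoked explicitly in the text, quantifies why the >95% time share of the accelerable modules matters (the function name and numbers below are illustrative):

```python
def overall_speedup(parallel_fraction, module_speedup):
    """Amdahl's law: overall speedup when only a fraction of the runtime
    is accelerated by the given factor."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / module_speedup)

# Even an arbitrarily large module speedup is capped at 1 / (1 - p):
assert overall_speedup(0.95, 1e9) < 20.0

# A 25x module speedup applied to 95% of the runtime gives roughly 11x
# overall, the same order as the ~10x whole-strategy ratio reported.
assert 11.3 < overall_speedup(0.95, 25.0) < 11.4
```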
The reason is that the convolution operation is highly parallel, which makes it easier for the GPU to achieve a significant acceleration. The speedup effect of the whole strategy is shown in Table 5; the final result is a compromise between the acceleration of the Green's-function computation and that of the weight-matrix computation.

3.3. Accuracy of the Acceleration Strategy

In the GPU acceleration strategy, arithmetic operations are performed in single precision, because double-precision operations would increase the memory requirements and reduce the speedup performance. However, single-precision operations may introduce errors compared with the double-precision operations in Matlab. To study the error introduced by the single-precision GPU operations, one simulated case with 4697 mesh nodes and 1409 detectors is selected (in fact, all the cases lead to the same conclusion, so only one is shown). In addition, 10% zero-mean Gaussian noise was added to the synthetic data to simulate a realistic measurement. Figure 5 shows the results reconstructed by Matlab and by the GPU acceleration strategy. The maximum relative error between the reconstructed fluorescent signals at each node obtained by Matlab and by the GPU acceleration strategy is then calculated; it is 0.15%, which is negligible.

4. Discussion

In this paper, we introduced a GPU acceleration strategy for early-photon fluorescence molecular tomography. The results of several numerical simulation cases validate the feasibility of this strategy: the speedup ratio is about 10 across the different cases. Compared with other GPU-enabled acceleration algorithms [9, 10], this speedup ratio is not especially high, for two main reasons.
First, the step that solves the Green's function consists mainly of matrix inversion operations, which are less suitable for parallel acceleration than matrix multiplication and matrix convolution; besides, the time consumed by the structural conversion of the Green's-function data cannot be neglected when the computational scale is large. Second, the reference program is executed in Matlab, whose matrix-operation routines are already highly optimized.

The efficiency of the whole acceleration strategy is determined by two factors: the time percentage of the parallel modules relative to the whole program, and the speedup efficiency of each acceleration algorithm. The speedup ratio of the weight-matrix algorithm is larger than that of the Green's-function algorithm. The cases studied in this paper focus on different computational sizes, and the projection number of each case is 6 for simplicity. If the projection number increases while the numbers of mesh nodes and detectors remain the same, the time percentage of the weight-matrix module will increase, and for such cases the final speedup ratio will be higher.

For the Green's-function acceleration algorithm, the speedup ratio is not very remarkable, and future work will focus on improving its performance. In fact, the stiffness matrix produced by the FEM is sparse, and this sparsity is exploited when the matrix inversion operations are performed in Matlab, but it has not yet been exploited in the GPU acceleration algorithm. We believe that utilizing the sparsity of the matrix will further improve the speedup ratio of the Green's-function acceleration algorithm.

We performed several cases with different parameters to test the acceleration strategy. The imaging quality improves as the numbers of mesh nodes and detectors increase: more detectors yield better spatial resolution, and finer meshes provide more detail in the reconstructed results [16]. However, this paper focuses on the performance of the acceleration strategy for the different simulation cases.
The relationship between the experimental parameters and the reconstructed results is not the key point here and is therefore not considered further.

In conclusion, we accelerated early-photon fluorescence molecular tomography with a GPU. The feasibility of this acceleration strategy was confirmed by several simulations. The accelerated results showed negligible errors while the time consumption was significantly reduced.

Acknowledgments

This work is supported by the National Basic Research Program of China (973) under Grant no. 2011CB707701, the National Natural Science Foundation of China under Grants nos. 81071191 and 81271617, the National Major Scientific Instrument and Equipment Development Project under Grant no. 2011YQ030114, and the National Science and Technology Support Program under Grant no. 2012BAI23B00.

References

1. R. Weissleder and V. Ntziachristos, "Shedding light onto live molecular targets," Nature Medicine, vol. 9, no. 1, pp. 123–128, 2003.
2. V. Ntziachristos, J. Ripoll, L. V. Wang, and R. Weissleder, "Looking and listening to light: the evolution of whole-body photonic imaging," Nature Biotechnology, vol. 23, no. 3, pp. 313–320, 2005.
3. G. M. Turner, G. Zacharakis, A. Soubret, J. Ripoll, and V. Ntziachristos, "Complete-angle projection diffuse optical tomography by use of early photons," Optics Letters, vol. 30, no. 4, pp. 409–411, 2005.
4. F. Leblond, H. Dehghani, D. Kepshire, and B. W. Pogue, "Early-photon fluorescence tomography: spatial resolution improvements and noise stability considerations," Journal of the Optical Society of America A, vol. 26, no. 6, pp. 1444–1457, 2009.
5. M. Niedre and V. Ntziachristos, "Comparison of fluorescence tomographic imaging in mice with early-arriving and quasi-continuous-wave photons," Optics Letters, vol. 35, no. 3, pp. 369–371, 2010.
6. F. Gao, W. Liu, and H. Zhao, "Linear scheme for time-domain fluorescence molecular tomography," Chinese Optics Letters, vol. 4, no. 10, pp. 595–597, 2006.
7. B. Zhang, X. Cao, F. Liu, et al., "Early-photon fluorescence tomography of a heterogeneous mouse model with the telegraphy equation," Applied Optics, vol. 50, no. 28, pp. 5397–5407, 2011.
8. Q. Fang and D. A. Boas, "Monte Carlo simulation of photon migration in 3D turbid media accelerated by graphics processing units," Optics Express, vol. 17, no. 22, pp. 20178–20190, 2009.
9. B. Zhang, X. Yang, F. Yang, et al., "The CUBLAS and CULA based GPU acceleration of adaptive finite element framework for bioluminescence tomography," Optics Express, vol. 18, no. 19, pp. 20201–20213, 2010.
10. A. Soubret, J. Ripoll, and V. Ntziachristos, "Accuracy of fluorescent tomography in the presence of heterogeneities: study of the normalized Born ratio," IEEE Transactions on Medical Imaging, vol. 24, no. 10, pp. 1377–1386, 2005.
11. M. Schweiger, S. R. Arridge, M. Hiraoka, and D. T. Delpy, "The finite element method for the propagation of light in scattering media: boundary and source conditions," Medical Physics, vol. 22, no. 11, pp. 1779–1792, 1995.
12. F. Gao, H. Zhao, L. Zhang, et al., "A self-normalized, full time-resolved method for fluorescence diffuse optical tomography," Optics Express, vol. 16, pp. 13104–13121, 2008.
13. B. Dogdas, D. Stout, A. F. Chatziioannou, and R. M. Leahy, "Digimouse: a 3D whole body mouse atlas from CT and cryosection data," Physics in Medicine and Biology, vol. 52, no. 3, pp. 577–587, 2007.
14. G. Alexandrakis, F. R. Rannou, and A. F. Chatziioannou, "Tomographic bioluminescence imaging by use of a combined optical-PET (OPET) system: a computer simulation feasibility study," Physics in Medicine and Biology, vol. 50, no. 17, pp. 4225–4241, 2005.
15. T. Lasser and V. Ntziachristos, "Optimization of 360° projection fluorescence molecular tomography," Medical Image Analysis, vol. 11, no. 4, pp. 389–399, 2007.
Integrals for Results 1 - 10 of 163

1. Adv. Math, 1996. Cited by 50 (9 self).
...this paper have a finite number of elements, a minimum element 0 and a maximum element 1. For two elements x and y in a poset P such that x ≤ y, define the interval [x, y] = {z ∈ P : x ≤ z ≤ y}. We will consider [x, y] as a poset, which inherits its order relation from P. Observe that [x, y] is a poset with minimum element x and maximum element y. For x, y ∈ P such that x ≤ y, we may define the Möbius function μ(x, y) recursively by μ(x, y) = −∑_{x ≤ z < y} μ(x, z) if x < y, and μ(x, y) = 1 if x = y.

2. 1998. Cited by 36 (28 self).
The linear span of isomorphism classes of posets has a Newtonian coalgebra structure. We observe that the ab-index is a Newtonian coalgebra map from the vector space to the algebra of polynomials in the non-commutative variables a and b. This enables us to obtain explicit formulas showing how the cd-index of the face lattice of a convex polytope changes when taking the pyramid and the prism of the polytope and the corresponding operations on posets. As a corollary, we have new recursion formulas for the cd-index of the Boolean algebra and the cubical lattice. Moreover, these operations also have interpretations for certain classes of permutations, including simsun and signed simsun permutations.
We prove an identity for the shelling components of the simplex. Lastly, we show how to compute the ab-index of the Cartesian product of two posets given the ab-indexes of each poset.

3. 1997. Cited by 33 (24 self).
...this paper we will consider oriented matroids. The lattice of regions of an oriented matroid is an Eulerian poset, thus it is natural to ask how to compute its cd-index. We provide here an answer to this question.

4. Comm. Math. Phys, 1995. Cited by 22 (1 self).
Abstract. An example of a finite dimensional factorizable ribbon Hopf C-algebra is given by a quotient H = u_q(g) of the quantized universal enveloping algebra U_q(g) at a root of unity q of odd degree. The mapping class group M_{g,1} of a surface of genus g with one hole projectively acts by automorphisms in the H-module H*^{⊗g}, if H* is endowed with the coadjoint H-module structure. There exists a projective representation of the mapping class group M_{g,n} of a surface of genus g with n holes labelled by finite dimensional H-modules X_1, ..., X_n in the vector space Hom_H(X_1 ⊗ ... ⊗ X_n, H*^{⊗g}). An invariant of closed oriented 3-manifolds is constructed. Modifications of these constructions for a class of ribbon Hopf algebras satisfying weaker conditions than factorizability (including most of u_q(g) at roots of unity q of even degree) are described.
After works of Moore and Seiberg [44], Witten [62], Reshetikhin and Turaev [51], Walker [61], Kohno [22, 23] and Turaev [59] it became clear that any semisimple abelian ribbon category with a finite number of simple objects satisfying some nondegeneracy condition gives rise to projective representations of mapping class groups.

5. Advances in Math. 146, 1998. Cited by 20 (5 self).
Crane and Frenkel proposed a state sum invariant for triangulated 4-manifolds. They defined and used new algebraic structures called Hopf categories for their construction. Crane and Yetter studied Hopf categories and gave some examples using group cocycles that are associated to the Drinfeld double of a finite group. In this paper we define a state sum invariant of triangulated 4-manifolds using Crane-Yetter cocycles as Boltzmann weights. Our invariant generalizes the 3-dimensional invariants defined by Dijkgraaf and Witten and the invariants that are defined via Hopf algebras. We present diagrammatic methods for the study of such invariants that illustrate connections between Hopf categories and moves to triangulations.

6. Annals Pure Appl. Logic, 1996.

7. 1998.
The Martin polynomial of a graph, introduced by Martin in his 1977 thesis, encodes information about the families of closed paths in Eulerian graph ..." Cited by 18 (8 self) Add to MetaCart Algebraic techniques are used to find several new combinatorial interpretations for valuations of the Martin polynomial, M(G; s), for unoriented graphs. The Martin polynomial of a graph, introduced by Martin in his 1977 thesis, encodes information about the families of closed paths in Eulerian graphs. The new results here are found by showing that the Martin polynomial is a translation of a universal skein-type graph polynomial P(G) which is a Hopf map, and then using the recursion and induction which naturally arise from the Hopf algebra structure to extend known properties. Specifically, when P(G) is evaluated by substituting s for all cycles and 0 for all tails, then P(G) equals sM(G; s+2) for all Eulerian graphs G. The Hopf-algebraic properties of P(G) are then used to extract new properties of the Martin polynomial, including an immediate proof for the formula for M(G; s) on disjoint unions of graphs, combinatorial interpretations for M(G; 2+2 k) and M(G; 2&2 k) with k # Z 0, and a new formula for the number of Eulerian orientations of a graph in terms of the vertex degrees of its Eulerian subgraphs. - Mathematical Research Letters "... Abstract. A fundamental problem in the theory of Hopf algebras is the classification and construction of finite-dimensional quasitriangular Hopf algebras over C. Quasitriangular Hopf algebras constitute a very important class of Hopf algebras, introduced by Drinfeld. They are the Hopf algebras whose ..." Cited by 18 (9 self) Add to MetaCart Abstract. A fundamental problem in the theory of Hopf algebras is the classification and construction of finite-dimensional quasitriangular Hopf algebras over C. Quasitriangular Hopf algebras constitute a very important class of Hopf algebras, introduced by Drinfeld. 
They are the Hopf algebras whose representations form a braided tensor category. However, this intriguing problem is extremely hard and is still widely open. Triangular Hopf algebras are the quasitriangular Hopf algebras whose representations form a symmetric tensor category. In that sense they are the closest to group algebras. The structure of triangular Hopf algebras is far from trivial, and yet is more tractable than that of general Hopf algebras, due to their proximity to groups. This makes triangular Hopf algebras an excellent testing ground for general Hopf algebraic ideas, methods and conjectures. A general classification of triangular Hopf algebras is not known yet. However, the problem was solved in the semisimple case, in the minimal triangular pointed case, and more generally for triangular Hopf algebras with the Chevalley property. In this paper we report on all of this, and explain in full detail the mathematics and ideas involved in this theory. The classification in the semisimple case relies on Deligne's theorem on Tannakian categories and on Movshev's theory in an essential way. We explain Movshev's theory in detail, and refer to [G5] for a detailed discussion of the first aspect. We also discuss the existence of grouplike elements in quasitriangular semisimple Hopf algebras, and the representation theory of cotriangular semisimple Hopf algebras. We conclude the paper with a list of open problems; in particular with the question whether any finite-dimensional triangular Hopf algebra over C has the Chevalley property.

9. J. Pure Appl. Algebra, 2000.
Cited by 17 (3 self).
Let H be a Hopf algebra in a rigid braided monoidal category with split idempotents. We prove the existence of integrals on (in) H characterized by the universal property, employing results about Hopf modules, and show that their common target (source) object Int H is invertible. The fully braided version of Radford's formula for the fourth power of the antipode is obtained. Connections of integration with cross-product and transmutation are studied. 1991 Mathematics Subject Classification. Primary 16W30, 18D15, 17B37; Secondary 18D35.
The n-Category Café

Homotopy Type Theory, II

Posted by Mike Shulman

First, an announcement: the homotopy type theory project now has its own web site! Follow the blog there for announcements of current developments.

Now, let’s pick up where we left off. The discussion in the comments at the last post got somewhat advanced, which is fine, but in the main posts I’m going to try to continue developing things sequentially, assuming you haven’t read anything other than the previous main posts. (I hope that after I’m done, you’ll be able to go back and read and understand all the comments.) Last time we talked about the correspondence between the syntax of intensional type theory, and in particular of identity types, and the semantics of homotopy theory, and in particular that of a nicely-behaved weak factorization system. This time we’re going to start developing mathematics in homotopy type theory, mostly following Vladimir Voevodsky’s recent work.

From a foundational point of view, what we’re doing today is analogous to developing mathematics in set theory. When you learn ZFC, you learn to define (for instance) Kuratowski ordered pairs, cartesian products, functions as sets of ordered pairs, and so on, eventually building up all the familiar structures of mathematics. Similarly, starting from homotopy type theory, we need to do some building up of familiar concepts, although of course the process will be different since we have different starting notions.

I’m mostly going to describe this informally, avoiding the formal syntax of type theory with its turnstiles, judgements, and derivations. And I’m not going to make any use of the correspondence we talked about last time, but there is one advantage to having described that correspondence first: namely, if it makes you more comfortable, you can think of everything I say today as a description of constructions performed in a category with a nicely-behaved WFS.
You can even (and, in fact, should) think of topological spaces or simplicial sets with their usual (acyclic cofibration, fibration) WFS (although there are technical issues involved in making that precise, which are discussed in the references I linked to last time).

Now, where do we start developing mathematics? What we have is a basic notion of homotopy type (a.k.a. $\infty$-groupoid), including dependent types, identity (or “path”) types, dependent sums and products, and perhaps some other constructors (like inductive and coinductive types). This is great for doing homotopy theory, but in a lot of mathematics there is not (yet) visible any homotopical behavior; thus we really need also a notion of “set” to be a good foundational system.

Classically, sets can be identified with homotopy types that are discrete: they contain no nonidentity morphisms/paths/homotopies (or higher such). That notion isn’t invariant under equivalence, but there is a corresponding notion which is: we require instead that the space of paths/morphisms between any two points is either empty or contractible. (If it is contractible, then the two points represent the “same element” of the corresponding set.) I’m going to follow the homotopy theorists and call types of this form homotopy 0-types, or 0-types for short.

Of course, we should also have a notion of homotopy $n$-type (a.k.a. $n$-groupoid) for all natural numbers $n$. We can define these inductively by saying that the space of paths between any two points in an $n$-type should be an $(n-1)$-type. And, as regular $n$-Café readers will know, we can continue downwards a couple of steps: the inductive definition gives the right answer for $n=0$ if a “(-1)-type” is one which is either empty or contractible, which is equivalent to saying that the space of paths between any two points in it is contractible. And the inductive definition then gives the right answer for $n=-1$ if a “(-2)-type” is one which is contractible. Cf.
negative thinking.

(By the way, Voevodsky says a type is of h-level $(n+2)$ where I would say it is an $n$-type. This has the benefit of starting the numbering at $0$ rather than $-2$, but it has the disadvantage of not matching the well-known numbering of homotopy types and groupoids. Use whichever you prefer.)

Thus, in order to define $n$-types inductively for all $n$, we just need to get things started at $n=-2$ by defining when a type is “contractible.” However, before we do that, we need to address a different issue, which may be one of the trickiest aspects of homotopy type theory for a non-type-theorist to get used to.

We’ve said that we expect to recover set theory from 0-types, groupoid theory from 1-types, and so on. But that means we should also expect to recover logic from $(-1)$-types. (The empty type represents “false,” while the contractible one represents “true”—and in intuitionistic logic, there can be more $(-1)$-types than that.) In particular, we should not include an external “logic” in our foundational theory. Statements such as “$X$ is a 0-type” or “$Y$ is contractible” should not be things we say about the theory which can be true or false; rather they should be (-1)-types themselves. The “truth” or “falsity” of such a proposition then corresponds to whether or not we can exhibit a point of the corresponding (-1)-type.

It may seem like we’ve painted ourselves into a bit of a corner here: we need to define what it means to be a (-1)-type, as part of our inductive definition of $n$-types for all $n$, but that “definition” must itself be a (-1)-type. However, if we sit back calmly and think about it, we can see there isn’t really a problem. All we need to do is define, for any type $X$, a type $IsContr(X)$ representing the proposition “$X$ is contractible,” such that when we then go on to define “$X$ is a (-1)-type” in terms of “$X$ is contractible,” the type $IsContr(X)$ turns out to in fact be a (-1)-type.
It’s a bit bootstrappy, but not circular or paradoxical.

So how do we define $IsContr(X)$, in such a way that it is always either empty or contractible? A natural idea, if we should happen to think of it, is that we should define $IsContr(X)$ to be the type of contractions of $X$. This is logical because if a space $X$ is contractible, then its space of contractions is itself contractible, while if $X$ is not contractible, then its space of contractions is of course empty. We formalize this as follows:

$IsContr(X) \coloneqq \Sigma_{x\in X} \Pi_{y\in X} Paths_X(x,y)$

I’m writing $Paths_X(x,y)$ for the identity type of $X$ (what was previously written $Id_X(x,y)$), since in homotopy type theory we interpret it as a type of paths, or equivalences, rather than equalities. If we unpack the above definition, we see that a point of $IsContr(X)$ consists of a point $x\in X$ together with, for every $y\in X$, a path from $x$ to $y$. The point $x$ is the “basepoint” or “center” to which we are contracting, and the paths supply the “contraction.” Remember that all constructions on types are “natural” or “continuous,” so that the selection of paths is necessarily natural/continuous in $y$, as we should require.

The above definition of $IsContr(X)$, due to Voevodsky (like pretty much everything else we’re doing today), is a bedrock on which the rest of the edifice rests. I’ve tried to make it seem inevitable in hindsight, but I certainly don’t think I could have come up with it myself! Voevodsky then proved the following theorem:

$IsContr(X) \to IsContr(IsContr(X))$

Now actually, like everything else in type theory, $IsContr(X) \to IsContr(IsContr(X))$ is a type: the type of functions from $IsContr(X)$ to $IsContr(IsContr(X))$. So when I say he “proved” it, I mean that he exhibited, by formal type-theoretic constructions, a specified point of that type: a function from $IsContr(X)$ to $IsContr(IsContr(X))$.
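To make the Σ/Π structure concrete, here is how the definition might be transcribed in Lean 4 (a sketch: the names are mine, Lean's built-in identity type stands in for $Paths_X$, and Lean's ambient logic is not homotopy type theory, so only the shape of the definition is captured):

```lean
universe u

-- Σ_{x ∈ X} Π_{y ∈ X} Paths_X(x, y), with Lean's identity type playing
-- the role of Paths.  Note the use of Σ' (a dependent sum that carries
-- data), not ∃: a term of IsContr X contains the center point and the
-- contraction, not a mere assertion that they exist.
def IsContr (X : Sort u) :=
  Σ' x : X, ∀ y : X, x = y

-- The one-point type is contractible, with () as the center.
example : IsContr Unit :=
  ⟨(), fun _ => rfl⟩
```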
Once we define (-1)-types, we can show that $IsContr(X) \to IsContr(IsContr(X))$ is actually a (-1)-type, so that it contains at most one point; hence exhibiting a point of it is sufficient to show that it is “true,” which is what we mean in general by “proving a theorem.” So what does this theorem mean? The type of functions between two propositions is just an implication, so this theorem means that if $X$ is contractible, then so is $IsContr(X)$: the space of contractions of a contractible space is contractible. On the other hand, if $X$ is not contractible, then $IsContr(X)$ is empty, since a point of $IsContr(X)$ would be a contraction of $X$. Thus $IsContr(X)$ is, at least intuitively, a (-1)-type, as desired. There’s a slight subtlety here, though: we almost certainly can’t prove this theorem in plain unmodified intensional type theory. This is because $IsContr(X)$ contains, among other things, a (dependent) function type, and so $IsContr(IsContr(X))$ contains, among other things, the path (identity) type of a function type—but fully intensional type theory does not specify what the identity types of function types are like. In particular, it can be the case there that two functions are pointwise equal everywhere—i.e. the type $\Pi_{x\in X} Id_Y(f(x), g(x))$ is inhabited—but not themselves equal—i.e. the type $Id_{X\to Y}(f,g)$ is not inhabited. However, for homotopy type theory, we expect a path from $f$ to $g$ to consist exactly of a natural/continuous family of paths from $f(x)$ to $g(x)$, so that these two types should actually be equivalent. Thus we need to augment intensional type theory by an axiom called functional extensionality which asserts this—or else a stronger axiom which implies functional extensionality. I’ll come back to this in the next post. Before we go on, I want to address a point that’s confused several people, including myself. (If this paragraph confuses you rather than clarifying anything, just skip it.) 
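To make the gap concrete, here is the statement of functional extensionality in hypothetical Lean 4 notation (a sketch: the Type-valued `Paths` and all names are my own assumptions, and this is not Lean's actual `funext`, which concerns its proof-irrelevant equality):

```lean
universe u

inductive Paths {X : Type u} : X → X → Type u where
  | refl (x : X) : Paths x x

-- The provable direction: a path between functions yields pointwise
-- paths, by path induction.
def happly {X Y : Type u} {f g : X → Y}
    (p : Paths f g) (x : X) : Paths (f x) (g x) :=
  match p with
  | Paths.refl _ => Paths.refl (f x)

-- The direction intensional type theory does NOT provide: function
-- extensionality, stated as a type one may postulate as an axiom.
def FunExt : Type (u + 1) :=
  ∀ {X Y : Type u} (f g : X → Y),
    (∀ x : X, Paths (f x) (g x)) → Paths f g
```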
I described above the homotopical interpretation of $IsContr(X)$: a point of $IsContr(X)$ is a point of $X$ together with a continuous deformation retraction to that point. On the other hand, we can interpret the same definition from a propositions-as-types point of view, in which the identity type represents equality, $\Sigma$ represents “there exists,” and $\Pi$ represents “for all.” In this case, $IsContr(X)$ translates to “there exists a point $x\in X$ such that all other points $y\in X$ are equal to $x$.” This is a perfectly good notion of what it means for a set to be “contractible”. The mistake is to start thinking of identity types as representing paths, but to keep trying to interpret $\Sigma$ and $\Pi$ as logical quantifiers, forgetting that for consistency with $Paths_X$, they must also be interpreted as continuous or natural operations. This mismatched interpretation leads to the conclusion that $IsContr(X)$ means “there exists a point $x\in X$ such that every point $y\in X$ is connected to $x$ by a path”, which sounds like a definition of connectedness, not contractibility. Okay, let’s go back to our in-progress inductive definition of $n$-types. We’ve defined what it means to be contractible, i.e. to be a $(-2)$-type. Now we can define “$X$ is a $(-1)$-type”, a.k.a. “$X$ is a proposition”, as follows: $IsProp(X) \coloneqq \Pi_{x,y\in X} IsContr(Paths_X(x,y))$ This is almost a translation of our proposed definition from above: we wanted to say that $X$ is a $(-1)$-type if the space of paths between any two points of $X$ is contractible. However, instead of merely asserting that each path space is contractible, giving a point of $IsProp(X)$ specifies a contraction of each path space. In fact, we don’t have any tools yet to construct types which assert that things exist without specifying them—but since the theorem above shows that contractions are unique when they exist, we can hope that the extra data in $IsProp(X)$ will be redundant.
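In Lean 4-style notation (a sketch with assumed names and a hand-rolled Type-valued path type, none of it from Lean's actual library), the definition reads:

```lean
universe u

inductive Paths {X : Type u} : X → X → Type u where
  | refl (x : X) : Paths x x

def IsContr (X : Type u) : Type u :=
  Σ x : X, ∀ y : X, Paths x y

-- IsProp(X) := Π (x y : X), IsContr(Paths_X(x, y)):
-- a specified contraction of every path space.
def IsProp (X : Type u) : Type u :=
  ∀ x y : X, IsContr (Paths x y)

-- The empty type is a proposition, vacuously: there are no path spaces
-- to contract.
def isPropEmpty : IsProp Empty :=
  fun x _ => nomatch x
```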
And indeed, Vladimir goes on to prove the following theorems: $IsProp(IsContr(X))$ and $IsProp(IsProp(X))$. So our bootstrap process is working: we’ve defined the notions of a type “being contractible” and “being a proposition” as types, which are themselves in fact propositions ($(-1)$-types). Now we can go on to define $IsSet(X) \coloneqq \Pi_{x,y\in X} IsProp(Paths_X(x,y))$ and, inductively: $IsNType(X,n+1) \coloneqq \Pi_{x,y\in X} IsNType(Paths_X(x,y),n)$ Now that we have these definitions, we can hope to prove that sets behave the way we expect sets to, and so on. For instance, the category of sets ought to be an elementary topos. However, all we can prove so far is that it is locally cartesian closed. We can remedy this by assuming, as an additional axiom, that we have a type of all propositions. This will give us a subobject classifier in the category of sets, which will therefore be an elementary topos. (Vladimir calls this a resizing axiom, for reasons that I’ll explain in the next post.) With a type of all propositions, we can also construct familiar logical operations on $(-1)$-types, just as we can for subobjects in any topos. I’m talking about connectives like “and” and “or” and quantifiers like “there exists” and “for all”. Of course, since our propositions are (particular) types, we already have the type-theoretic operations like $\times$, $\Sigma$, and $\Pi$, and these sometimes do what we want, but not always—the issue is that they don’t necessarily preserve the property of being a $(-1)$-type. For instance, one can prove that if $X$ and $Y$ are $(-1)$-types, then so are $X\times Y$ (which we therefore call “$X$ and $Y$”) and the function type $X\to Y$ (which we call “$X$ implies $Y$”). Similarly, if $X$ is any type at all and $Y(x)$ is a $(-1)$-type dependent on $X$, then $\Pi_{x\in X} Y(x)$ is a $(-1)$-type (which we call “for all $x\in X$, $Y(x)$”).
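The inductive clause can be packaged as a single recursive definition. Here is a sketch in Lean 4 (names and the Type-valued path type are my own; note that it uses Voevodsky's h-level numbering from the parenthetical at the top of the post, starting at 0 = contractible, so that the recursion can run over the natural numbers):

```lean
universe u

inductive Paths {X : Type u} : X → X → Type u where
  | refl (x : X) : Paths x x

def IsContr (X : Type u) : Type u :=
  Σ x : X, ∀ y : X, Paths x y

-- h-level 0 is "contractible" (the post's n = -2); each successor level
-- imposes the previous condition on all path spaces.
def HLevel : Nat → Type u → Type u
  | 0,     X => IsContr X
  | n + 1, X => ∀ x y : X, HLevel n (Paths x y)

-- In the post's indexing: propositions are (-1)-types, sets are 0-types.
def IsProp (X : Type u) : Type u := HLevel 1 X
def IsSet  (X : Type u) : Type u := HLevel 2 X
```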
On the other hand, under the same hypotheses $\Sigma_{x\in X} Y(x)$ need not be a $(-1)$-type; rather than the proposition “there exists an $x\in X$ such that $Y(x)$” it is the type “$\{ x\in X | Y(x) \}$”. What we need is a way of “squashing” any type $Z$ down to a $(-1)$-type which is inhabited just when $Z$ is. From a homotopy perspective, it is natural to call this “squashed” type $\pi_{-1}(Z)$. There are several ways to obtain this squashing operation. We can assert it as part of the structure of the type theory, as in this paper by Steve Awodey and Andrej Bauer; there $\pi_{-1}(X)$ is written as $[X]$. We can hope to obtain it as a consequence of a general “quotienting” or “exactness” axiom, whose structure is currently unclear, but which I may talk a bit about later on. But we can also derive it from a subobject classifier, in the same way that we prove that any elementary topos is a regular category: $\pi_{-1}(Z) \coloneqq \Pi_{P\in \Omega} ((Z\to P)\to P)$ where $\Omega$ is the subobject classifier (the type of all propositions). Thus, intensional type theory with functional extensionality and a subobject classifier seems to be a pretty good foundational system: at least, we can derive familiar-looking theories of logic and sets. Let me finish today by describing another one of Voevodsky’s insights: the definition of when a map is an equivalence. Homotopically, it’s natural to define $f\colon X\to Y$ to be an equivalence if we have a map $g\colon Y\to X$ and paths $p\in Paths_{X\to X}(g f, id_X)$ and $q\in Paths_{Y\to Y}(f g, id_Y)$. This might lead us to the definition $?? \qquad IsEquiv(f) \coloneqq \Sigma_{g\colon Y\to X} \; (Paths_{X\to X}(g f, id_X) \; \times \; Paths_{Y\to Y}(f g, id_Y)) \qquad ??$ but unfortunately this definition does not give us a $(-1)$-type. We could squash it down to a $(-1)$-type with $\pi_{-1}$, but it’d be nice to be able to talk about equivalences without needing the axioms that give us $\pi_{-1}$.
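Lacking an internal type $\Omega$ of all propositions, a sketch of this last construction has to quantify over a whole universe of candidate propositions, at the cost of bumping the universe level by one; this size problem is exactly what a genuine subobject classifier (or Voevodsky's resizing) is meant to solve. In hypothetical Lean 4 notation (assumed names throughout):

```lean
universe u

inductive Paths {X : Type u} : X → X → Type u where
  | refl (x : X) : Paths x x

def IsContr (X : Type u) : Type u :=
  Σ x : X, ∀ y : X, Paths x y

def IsProp (X : Type u) : Type u :=
  ∀ x y : X, IsContr (Paths x y)

-- An impredicative-style (-1)-truncation: π₋₁(Z) := Π (P ∈ Ω), (Z → P) → P,
-- except that Ω is approximated by "all types in Type u carrying an IsProp
-- witness," which is why the result lands one universe higher.
def Squash (Z : Type u) : Type (u + 1) :=
  ∀ (P : Type u), IsProp P → (Z → P) → P

-- Any point of Z yields a point of its squash.
def Squash.mk {Z : Type u} (z : Z) : Squash Z :=
  fun _ _ f => f z
```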
Voevodsky’s idea was to make use of the fact that $IsContr$ is a $(-1)$-type, and define $f\colon X\to Y$ to be a weak equivalence if all its (homotopy) fibers are contractible: $IsEquiv(f) \coloneqq \Pi_{y\in Y} IsContr( \Sigma_{x\in X} Paths_Y(f(x),y))$ Here $\Sigma_{x\in X} Paths_Y(f(x),y)$ is the (homotopy) fiber of $f$ over $y$: a point of it is a point $x\in X$ equipped with a path from $f(x)$ to $y$. He was then able to prove: $IsProp(IsEquiv(f))$. There’s also another way to define $IsEquiv$ that also gives a $(-1)$-type. Recall that any equivalence of categories can be improved to an adjoint equivalence, but we need to change one of the natural isomorphisms. In fact, given the two functors and one of the natural isomorphisms, there is a unique choice of the second one such that the triangle identities hold. However, given just the two functors (assuming they are known to be inverse equivalences), the choice of the two natural isomorphisms is not unique. This suggests that we need to add more “adjointness” data to make the first definition $IsEquiv$ into a $(-1)$-type. The first thing we may think of is to add all the higher data going all the way up, to make it a “fully coherent” $\infty$-equivalence. This “would work,” but it is not expressible in the type theory (at least, not easily). However, it turns out that if we cut off at any finite level with “half” of the data at that level, then we still get a $(-1)$-type. So, for instance, it suffices to specify, in addition to $f$, $g$, $p$, and $q$, a secondary homotopy asserting that $p$ and $q$ satisfy one of the triangle identities. There will then be a contractible space of choices of all the higher data. (It would also work to specify only $f$, $g$, and $p$, except for the fact that given only that data we can’t necessarily conclude that $f$ is an equivalence; $q$ might not exist at all.) To get some homotopical intuition for why this works, think of constructing $S^\infty$ (which is contractible) as follows. First take two points, which is $S^0$.
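Here is how Voevodsky's definition can be sketched in Lean 4 (again with a hand-rolled Type-valued path type and names of my own choosing, not Voevodsky's Coq code):

```lean
universe u

inductive Paths {X : Type u} : X → X → Type u where
  | refl (x : X) : Paths x x

def IsContr (X : Type u) : Type u :=
  Σ x : X, ∀ y : X, Paths x y

-- The homotopy fiber of f over y: a point x together with a path
-- from f x to y.
def HFiber {X Y : Type u} (f : X → Y) (y : Y) : Type u :=
  Σ x : X, Paths (f x) y

-- f is an equivalence when every homotopy fiber is contractible.
def IsEquiv {X Y : Type u} (f : X → Y) : Type u :=
  ∀ y : Y, IsContr (HFiber f y)

-- Sanity check: the identity map is an equivalence; the fiber over y
-- contracts onto ⟨y, refl⟩ by path induction.
def idIsEquiv (X : Type u) : IsEquiv (fun x : X => x) :=
  fun y =>
    ⟨⟨y, Paths.refl y⟩,
     fun ⟨x, p⟩ => match p with
       | Paths.refl _ => Paths.refl _⟩
```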
Then glue on two paths (1-discs) connecting them, to make $S^1$. Then glue on two 2-discs to form the two hemispheres of $S^2$. And so on. None of the $S^n$’s are contractible, although $S^\infty$ is, but if we stop after gluing on one of the $k$-discs for any $k$, then we end up with $D^k$ which is contractible. This other way to define $IsEquiv$ should be attributed to a handful of people who came up with it a year ago at an informal gathering at CMU, but I don’t know the full list of names; maybe someone else can supply it. Next time: the univalence axiom! Posted at March 18, 2011 8:00 PM UTC Re: Homotopy Type Theory, II I would like to know if such new foundations would change the view on consistency issues etc. Posted by: Thomas on March 18, 2011 10:07 PM | Permalink | Reply to this Re: Homotopy Type Theory, II I would like to know if such new foundations would change the view on consistency issues etc. I’m not sure what you mean; can you clarify? Homotopy type theory as a foundation is at least as consistent as ZFC, since you can construct a model of it out of spaces or simplicial sets in ZFC. And the axioms I’ve presented so far are certainly much weaker than ZFC. In fact, so far, I haven’t mentioned any axioms which prevent all homotopy types from being sets! So any elementary topos would satisfy all the axioms I’ve mentioned so far. But even once we add quotients and/or the univalence axiom, which ensure the existence of higher homotopy types, I think we would still need some additional axioms of the replacement/collection/separation type to get up to anything comparable to ZFC. (It’s also less clear how to phrase such axioms in a type-theoretic framework.) 
Posted by: Mike Shulman on March 21, 2011 4:31 AM | Permalink | Reply to this Re: Homotopy Type Theory, II we require instead that the space of paths/morphisms between any two points is either empty or contractible That you keep writing things like ‘either empty or contractible’, when you could just as easily write ‘empty if not contractible’, means that I really must still exclude you from the class of constructive mathematicians. (One could also say ‘either contractible or, if not, then empty’, which allows one to be technically correct in constructive mathematics without sounding like a Posted by: Toby Bartels on March 19, 2011 5:20 AM | Permalink | Reply to this Re: Homotopy Type Theory, II Toby, are you joking? Posted by: Tom Leinster on March 19, 2011 6:47 AM | Permalink | Reply to this Re: Homotopy Type Theory, II Posted by: Toby Bartels on March 20, 2011 6:03 AM | Permalink | Reply to this Re: Homotopy Type Theory, II Posted by: Toby Bartels on March 20, 2011 10:16 PM | Permalink | Reply to this Re: Homotopy Type Theory, II I’m perfectly happy with the label “pluralist.” (Mathematicians having debates nowadays about whether constructive or classical mathematics is “true” or “correct” seems to me kind of like physicists having debates about whether the aether is an inertial reference frame.) But I’m curious about the definition of “constructive mathematician” that your comment implies: is a constructive mathematician not allowed to write in classical mathematics when writing to an audience including many classical mathematicians he wants to avoid alienating, about a subject which can be done classically just as well as constructively? Or would a “real” constructive mathematician not place himself in such a position?
(-: Posted by: Mike Shulman on March 21, 2011 4:06 AM | Permalink | Reply to this Re: Homotopy Type Theory, II If you felt a twinge of guilt when writing ‘either empty or contractible’, then you could (if you cared to) reapply for admission to the club. I assumed that you didn't feel such a twinge, since you could have avoided it with an alternative phrasing that classical audiences can still comprehend. As a pluralist, I want the things that I write to make sense in the broadest possible context, which ‘either empty or contractible’ does not. It's one thing if such a treatment is more complicated; the digressions necessary to make some things constructively rigorous would probably alienate ordinary mathematicians, and in any case they would take time. But when making the treatment constructive is a trivial edit, why not just do it? Don't take any of this too seriously. I only brought it up to correct a parenthetical remark in the previous discussion. Posted by: Toby Bartels on March 21, 2011 6:11 AM | Permalink | Reply to this Re: Homotopy Type Theory, II I maintain that “empty if not contractible” sounds more awkward than “either empty or contractible,” especially to a classical ear. And in fact, I don’t even know offhand why “empty if not contractible” is correct constructively. I know that in intuitionistic logic, everything is “false if not true” (in fact being “not true” is essentially the definition of being “false”), though it need not be “either true or false”—but why does that extend to the characterization of (-1)-types? E.g. if I know that a type X is empty if not contractible, and I have two points $x,y\in X$, then how can I conclude that they are connected by a path? Obviously if they weren’t connected by a path, then X would not be contractible, hence it would be empty, a contradiction, so I know that $x$ and $y$ are not not connected by a path… but how do I get to the positive assertion that they are? 
I can see that “contractible if inhabited” would do the job, though. Right? I did feel a slight twinge of guilt writing “either empty or contractible”, but I don’t think it is necessary, even as a pluralist, to always write things that make sense in the broadest possible context, especially when being expository and informal. I feel that anyone who cares about making things constructive should be perfectly capable of translating the one to the other in their head. (Don’t worry, I’m not taking this too seriously; but I’m enjoying the discussion.) Posted by: Mike Shulman on March 21, 2011 6:19 PM | Permalink | Reply to this Re: Homotopy Type Theory, II I did feel a slight twinge of guilt Then maybe you are a practising constructivist mathematician after all! I maintain that “empty if not contractible” sounds more awkward than “either empty or contractible,” especially to a classical ear. That’s why I suggested ‘contractible or, if not, then empty’. I don’t even know offhand why“empty if not contractible” is correct constructively. You're right! This is all a mathematical error on my part. You’ve effectively given a weak counterexample already. (Making it more explicit, let $P$ be a truth value that is not false; we’ll show that $P$ is true if my definition is correct. Consider the subspace $A$ of the unit interval such that each point $x$ belongs to $A$ iff $x$ is an endpoint or $P$ is true. Then $A$ is not not contractible, so (by hypothesis) a $(-1)$-type, so (since inhabited) contractible, so $P$ is true.) I can see that “contractible if inhabited” would do the job, though. Yes, that’s what I should have said. That’s even harder to make look normal to a classicist. Perhaps ‘either empty or, if nonempty, then contractible’ (although classical mathematicians really should learn the word ‘inhabited’). 
Posted by: Toby Bartels on March 21, 2011 11:54 PM | Permalink | Reply to this Re: Homotopy Type Theory, II I don’t think “either empty or, if nonempty, then contractible” really looks normal to a classical mathematician either. Posted by: Mike Shulman on March 22, 2011 12:46 AM | Permalink | Reply to this Re: Homotopy Type Theory, II However, it’s worth noting that “contractible if inhabited” is precisely the correct interpretation of the theorem $IsContr(X) \to IsContr(IsContr(X))$! If I had thought about this, then perhaps I would have written “contractible if inhabited”. Posted by: Mike Shulman on March 23, 2011 5:11 AM | Permalink | Reply to this Re: Homotopy Type Theory, II I must admit, I had the same impulse as Toby when reading the post, to take you to task for this! I’d agree it’s good to try to keep to classically familiar phrasings as far as possible. But this is a case where it introduces a pretty serious inaccuracy — the distinction between a proposition and a decidable proposition is a big one. (And especially when developing maths in a framework that’s actively inconsistent with classical logic!) One of the things I love in these posts is that you’re writing in normal mathematical prose, not formal syntax. There’s a somewhat widely held misconception — which confused me for some time — that to do logic in weaker foundations, one needs to work absolutely formally, not just semi-formally as usual. And it’s important to dispel this by example… but in the process, we do have to be extra-careful about some subtleties of phrasing that don’t matter classically. Posted by: Peter LeFanu Lumsdaine on March 22, 2011 11:34 PM | Permalink | Reply to this Re: Homotopy Type Theory, II I’m not sure what you’re implying by “actively inconsistent with classical logic.” Homotopy type theory, as a foundation, is perfectly consistent with classical logic, as long as by “logic” you refer only to propositions (i.e. (-1)-types). 
For instance, the logic in Voevodsky’s “univalent model” is classical. The propositions-as-types logic is not classical, but I don’t intend “logic” to refer to Posted by: Mike Shulman on March 22, 2011 11:49 PM | Permalink | Reply to this Re: Homotopy Type Theory, II I’ll let Peter speak for himself, but one thing one *could* mean is that the Univalence Axiom is inconsistent with the standard, set theoretic interpretation of type theory, under which $\Sigma$ and $\Pi$ are sums and products and $\mathrm{Id}_A \to A\times A$ is the diagonal. The reason is simply that it would force every isomorphism to be the identity. Looking instead at Mike’s proposal to have only the “logic” of subobjects (or “propositions”) be classical, within a not-necessarily classical system of type theory, we know this makes sense at least in the extensional case, where e.g. in any topos $\mathcal{E}$ we take the [-]-operation to be the left adjoint “image” reflection from the slice category $\mathcal{E}/X$ of types over $X$ into subobjects of $X$. Then we can simply compose with double-negation to get a further left adjoint $\mathrm{Sub}(X) \to \mathrm{Sub}_{\neg\neg}(X)$ into the $\neg\neg$-stable propositions. The whole thing even has the form of a “big double-negation” on the slice category: for any $A\to X$ we take $A\mapsto \neg\neg A = 0^{0^A}$, since $\neg\neg A$ is always a proposition (i.e. $\neg\neg A \rightarrowtail X$), even when $A$ is arbitrary. What about the intensional case? As Vladimir mentions somewhere, at least in simplicial sets one wants to use $2$ rather than the subobject classifier as Prop – taking *complemented* subobjects rather than arbitrary ones as propositions – since the sub-*types* of a homotopy type are complemented. Of course, Vladimir’s model of the Univalence Axiom in SSets shows that it is compatible with this combination of non-classical “type theory” and classical “logic” (if that’s your preferred point of view), respectively constructive logic over a classical Prop.
Posted by: Steve Awodey on March 23, 2011 4:45 AM | Permalink | Reply to this Re: Homotopy Type Theory, II some kind of LaTeX problem there: $\neg\neg A = 0^{0^A}$ should read as iterated exponents: 0^{0^A}. It comes out OK in the preview, but not in the posting for some reason. Posted by: Steve Awodey on March 23, 2011 4:54 AM | Permalink | Reply to this Re: Homotopy Type Theory, II I think that for purposes of homotopy models, the fact that SSets is itself a 1-topos (and hence has a subobject classifier) is basically irrelevant. What one wants is a classifier of “homotopy-subobjects” in the sense of a monomorphism in an (∞,1)-category. In the 1-category SSet, regarded as a presentation of the $(\infty,1)$-category $HoTypes = \infty Gpd$, it so happens that the classifier of complemented subobjects in the 1-categorical sense (namely 2) also classifies all homotopy-subobjects in the homotopy sense, and that the logic of the latter is classical. But in general, I think this coincidence is unlikely to hold true, even when considering $(\infty,1)$-categories which happen to be presented by 1-categories that are themselves 1-topoi. Rather, the classifier of homotopy-subobjects should coincide with the ordinary subobject classifier in the 1-category of “internal sets” (= categorically-discrete objects = 0-types = objects of h-level 2). In SSet, the latter is just Set, so of course (assuming a classical metatheory) its logic is classical. But in general, the 1-topos of discrete objects need not be Boolean. (The expression $0^{0^A}$ looks fine to me—maybe your browser doesn’t fully support MathML?)
Because his model is based on $Set$ and one must assume that $Set$ is boolean to conclude that his $(-1)$-types have a classical logic? Is that right? PS: I also view 0^{0^A} just fine, on Windows XP running Firefox 3.6. Posted by: Toby Bartels on March 23, 2011 6:55 AM | Permalink | Reply to this Re: Homotopy Type Theory, II Well, his construction is certainly based on a Boolean version of Set. I think I can say that without committing myself to a personal belief in the Booleanness of Set. (-: And I think it’s not entirely clear whether it would work starting from a non-Boolean Set. At least, there’s some work to do there, and I’m not aware of anyone who’s done it. Classical homotopy theory uses the axiom of choice in constructing lifting properties, for instance. Perhaps constructively we should work with algebraic Kan complexes instead of Kan complexes; does that create any problems? Maybe not… but we’d have to go through and check all the little bits of classical homotopy theory that get used to make sure that they still work. (I think Voevodsky’s model is not really morally different from the models in categories with weak factorization systems that I described in Part I. He works with Kan complexes and Kan fibrations, and deals with the coherence issues in a somewhat different way, I think, but his main focus is on ensuring that the univalence axiom holds.) Posted by: Mike Shulman on March 23, 2011 7:49 AM | Permalink | Reply to this Re: Homotopy Type Theory, II Ah, so it's Voevodsky who's not a constructivist. Fair enough. And I think it’s not entirely clear whether it would work starting from a non-Boolean Set. At least, there’s some work to do there, and I’m not aware of anyone who’s done it. Then maybe I need to learn his theory and do this! Posted by: Toby Bartels on March 23, 2011 6:51 PM | Permalink | Reply to this Re: Homotopy Type Theory, II Ah, so it’s Voevodsky who’s not a constructivist. 
I’m not sure I would even say that; he certainly seems to display some interest in the fact that homotopy type theory enables one to do homotopy theory constructively. It’s just that the construction of that particular model starts from a Boolean Set; his goal with it was to show the consistency of the univalence axiom relative to ZFC. I think one can be a constructivist and still prove theorems that contain PEM (and even AC) as a hypothesis; Boolean toposes exist even if Set isn’t one. (-: Posted by: Mike Shulman on March 23, 2011 7:59 PM | Permalink | Reply to this Re: Homotopy Type Theory, II his goal with it was to show the consistency of the univalence axiom relative to ZFC Well, if that's all that he's doing, then never mind. I think one can be a constructivist and still prove theorems that contain PEM (and even AC) as a hypothesis Some constructivists would disagree with this, but they must be wrong, because as you say Boolean toposes [with NNO] exist even if Set isn’t one. Much as constructive mathematics tells the classicist something about arbitrary toposes, so classical mathematics tells the constructivist something about the boolean ones. Posted by: Toby Bartels on March 24, 2011 6:29 PM | Permalink | Reply to this Re: Homotopy Type Theory, II (The expression $0^{0^A}$ looks fine to me—maybe your browser doesn’t fully support MathML?) I use a browser with no MathML support and see it fine. However, the script that processes all the math on the page has a tendency to not work a fair amount of the time, which may be what happened to him (the script now seems to run for me even in firefox). I often have to refresh to see math correctly, and in my comments, I usually have difficulty getting it to format more than half my post in a Not to be a complainer, but I kind of preferred the old system that forced me to use firefox (for the MathML), but at least always worked. 
:) Posted by: Dan Doel on March 23, 2011 8:28 AM | Permalink | Reply to this Re: Homotopy Type Theory, II I recently got a new phone (Android), with free Internet, and I've started using it to check some websites. But I've given up on using it for the Café (and the Lab), because the math rendering always hangs. (Sometimes I can see a little, more often none at all.) Posted by: Toby Bartels on March 23, 2011 6:54 PM | Permalink | Reply to this Re: Homotopy Type Theory, II Can someone explain briefly why ‘empty if not contractible’ is acceptable in constructive mathematics, whereas ‘either empty or contractible’ is not? Posted by: Tom Ellis on March 21, 2011 10:52 AM | Permalink | Reply to this Re: Homotopy Type Theory, II I can’t tell you exactly what Toby is thinking. However… Here is some Agda code:

lemma1 : {A : Set} -> Proposition A -> (¬ Contractible A -> ¬ A)
lemma1 pa nca x = nca (x , \y -> proj1 (pa x y))

It demonstrates that the univalent statement “A is a proposition” implies “A is empty if not contractible.” The converse is not, as far as I can tell (given a few minutes thought), provable, so they are not equivalent statements. However, it is not the case that “A is a proposition” constructively implies “A is either contractible or empty.” That essentially implies the existence of a decision procedure for which of the two an arbitrary A is, and there’s no way to write that. If I postulate excluded middle, it’s easy to prove both the converse and the ‘either or’ statement:

lemma2 : {A : Set} -> Proposition A -> Contractible A \/ ¬ A
lemma2 {A} pa with lem A
... | inl x = inl (x , \y -> proj1 (pa x y))
... | inr ¬x = inr ¬x

lemma3 : {A : Set} -> (¬ Contractible A -> ¬ A) -> Proposition A
lemma3 {A} pf x y with lem (Contractible A)
... | inl cna = h-lift cna x y
... | inr ¬cna = false-elim (pf ¬cna x)

(h-lift demonstrates that an n-type is also an (n+1)-type). Maybe there’s a constructive way to prove the converse I’m missing, though.
Anyhow, if Mike were to say, “empty if not contractible,” he would at least be making a constructively weaker statement than, “is a proposition.” When he says, “either empty or contractible,” he is actually saying something constructively stronger than, “is a proposition.” Posted by: Dan Doel on March 21, 2011 10:03 PM | Permalink | Reply to this Re: Homotopy Type Theory, II Actually, it seems the best ‘informal’ characterization of, “is a proposition,” would be, “contractible if inhabited.”

lemma1 : {A : Set} -> Proposition A -> (A -> Contractible A)
lemma1 pa x = (x , \y -> proj1 (pa x y))

lemma2 : {A : Set} -> (A -> Contractible A) -> Proposition A
lemma2 {A} ci x y = h-lift (ci x) x y

The first lemma gets you from proposition to “empty if not contractible” by contraposition. But, “empty if not contractible,” is constructively weaker than, “contractible if inhabited.” Posted by: Dan Doel on March 21, 2011 11:01 PM | Permalink | Reply to this Re: Homotopy Type Theory, II So this is saying that from a construction of a point in A we can always construct a contraction of A? Posted by: Tom Ellis on March 23, 2011 11:05 AM | Permalink | Reply to this Re: Homotopy Type Theory, II I believe it says that A is a proposition if and only if we can construct a contraction of A given a point of A. For true propositions this is due to them being contractible. For false propositions it is vacuously true because it is impossible to construct a point. If I tried with sets and above, though, lemma2 would hold, but lemma1 would not. Posted by: Dan Doel on March 23, 2011 12:18 PM | Permalink | Reply to this Re: Homotopy Type Theory, II For true propositions this is due to them being contractible. For false propositions it is vacuously true because it is impossible to construct a point. And for arbitrary propositions this is due to them being contractible if they have a point.
Posted by: Toby Bartels on March 23, 2011 6:55 PM | Permalink | Reply to this Re: Homotopy Type Theory, II This other way to define IsEquiv should be attributed to a handful of people who came up with it a year ago at an informal gathering at CMU, but I don’t know the full list of names; maybe someone else can supply it. Let’s see: Mike Shulman, Peter Lumsdaine, Michael Warren, Dan Licata – right? I think it’s called “gradlemma” in VV’s coq files (although only 2 of the 4 were actually still grad students at the time). Great post, BTW. Posted by: Steve Awodey on March 19, 2011 5:51 AM | Permalink | Reply to this Re: Homotopy Type Theory, II This mismatched interpretation leads to the conclusion that $IsContr(X)$ means “there exists a point $x \in X$ such that every point $y \in X$ is connected to $x$ by a path”, which sounds like a definition of connectedness, not contractibility. One should read it as ‘there exists a point $x$ in $X$ such that every point $y$ in $X$ is connected to $x$ by a path which varies continuously with $y$’. Then it's a definition of contractibility. Posted by: Toby Bartels on March 19, 2011 5:55 AM | Permalink | Reply to this Re: Homotopy Type Theory, II exactly – the quantifiers have to be read the right way. Mike has put his finger on a very important point here (the one he says is sometimes confusing) about the way to read the quantifiers constructively (propositions as types, etc.). He says: “The mistake is to start thinking of identity types as representing paths, but to keep trying to interpret Σ and Π as logical quantifiers, forgetting that for consistency with PathX, they must also be interpreted as continuous or natural operations.
This mismatched interpretation leads to the conclusion that IsContr(X) means “there exists a point x∈X such that every point y∈X is connected to x by a path”, which sounds like a definition of connectedness, not contractibility.” But I wouldn’t say it’s a “mistake” or a “mismatch” to keep reading these operations as quantifiers – one just needs to read them as *constructive* quantifiers. It was Dana Scott who introduced the idea that constructively definable operations are always continuous. It’s like the fact that the logic of a topos of sheaves is intuitionistic, but here we have an even more extreme form of it. My point is that we don’t want to deny or ignore the fact that these quantifiers (or the type of Paths) are logical operations – only differently construed – rather we want to make use of that to develop an internal logic that is (not just intuitionistic but) constructive. Posted by: Steve Awodey on March 19, 2011 3:32 PM | Permalink | Reply to this Re: Homotopy Type Theory, II Can you say more about this? But I wouldn’t say it’s a “mistake” or a “mismatch” to keep reading these operations as quantifiers – one just needs to read them as constructive quantifiers. It was Dana Scott who introduced the idea that constructively definable operations are always continuous. It’s like the fact that the logic of a topos of sheaves is intuitionistic, but here we have an even more extreme form of it. Posted by: Emily Riehl on March 19, 2011 6:21 PM | Permalink | Reply to this Re: Homotopy Type Theory, II … well, it got a bit long, so I put it on HoTT over here. Posted by: Steve Awodey on March 20, 2011 5:06 AM | Permalink | Reply to this Re: Homotopy Type Theory, II Technically, you are of course right, in that that is a valid (at least, self-consistent) way to read the type. But I’ve never been able to understand why one would want to artificially restrict the expressive power of language by forcing “such that there exists” to mean the same thing as “equipped with”.
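Concretely, the type being read in these two ways can be written down directly; here is a self-contained sketch in Agda notation (the names are mine), meant as a derivation sketch rather than anyone's actual file.

```agda
data _≡_ {A : Set} (x : A) : A → Set where
  refl : x ≡ x

record Σ (A : Set) (B : A → Set) : Set where
  constructor _,_
  field
    centre      : A
    contraction : B centre

-- IsContr X = Σ (x : X) Π (y : X) Paths_X(x,y).
-- Read homotopically, an inhabitant is a point together with a path
-- to every point, varying continuously/naturally in that point:
-- exactly a contraction of X.
IsContr : Set → Set
IsContr X = Σ X (λ x → (y : X) → x ≡ y)

-- The unit type is contractible (record eta makes refl type-check):
record ⊤ : Set where
  constructor tt

⊤-contr : IsContr ⊤
⊤-contr = tt , (λ y → refl)
```

The “equipped with” reading is visible in the fact that the second component is a single function of y, not a separate existence claim for each y.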
If in some context that’s the only meaning of “such that there exists” you have available, it makes sense, but in homotopy theory we have a perfectly good, and more natural, candidate for that. If you want to read $\Sigma_{x\in X} \Pi_{y\in X} Paths_X(x,y)$ as “there exists an $x\in X$ such that for every $y\in X$, there exists a path from $x$ to $y$,” then how would you read $\pi_{-1} \Sigma_{x\in X} \Pi_{y\in X} \pi_{-1} Paths_X(x,y)$ (which is what I would read as “there exists an $x\in X$ such that for every $y\in X$, there exists a path from $x$ to $y$”)? Posted by: Mike Shulman on March 21, 2011 3:48 AM | Permalink | PGP Sig | Reply to this Re: Homotopy Type Theory, II One way of thinking about IsContr($X$) is as the homotopy pullback of the identity function along the canonical map $X^\delta \to X$ where $(-)^\delta$ sends $X$ to the space with the discrete topology on the same underlying set. For the purposes of what we are considering here, the actual set underlying $X$ doesn’t make much sense, because we can’t point to a set underlying a homotopy type, so this makes me think that at least in this interpretation using spaces, we are implicitly using the cohesiveness of $Top \to Set$. I could imagine other settings where we might want to interpret IsContr, using other cohesive categories, not least some sort of algebraic geometric setting. Perhaps this is a way to build up to an $\infty$-cohesive $(\infty,1)$-category?? (Complete speculation on my part) Another way to think about IsContr if we want to think about simplicial sets as modelling homotopy types is via the decalage functor $Dec:sSet \to sSet$. Another way of thinking about it with higher groupoids (say Trimble groupoids, for choice) is the ‘tangent category’ functor that Urs and I use in our paper The inner automorphism 3-group of a strict 2-group.
One can consider the tangent category of the fundamental $n$- or $\infty$-groupoid and it looks very much like the construction from the first paragraph, using the path space and then the path space again and so on… Posted by: David Roberts on March 20, 2011 2:31 AM | Permalink | Reply to this Re: Homotopy Type Theory, II One way of thinking about $IsContr(X)$ is as the homotopy pullback of the identity function along the canonical map $X^\delta\to X$ where $(-)^\delta$ sends $X$ to the space with the discrete topology on the same underlying set. I don’t see that. Homotopy pullbacks preserve homotopy equivalences, which identities are, so it seems to me that the homotopy pullback you describe should be homotopy equivalent to $X^\delta$. Posted by: Mike Shulman on March 21, 2011 3:51 AM | Permalink | Reply to this Re: Homotopy Type Theory, II Yeah, I realised that a bit later. What I (think) I really meant was to consider $X^\delta\times_X X^I$ as being a space over $X$ via the endpoint evaluation and then consider the space of sections $X\to X^\delta\times_X X^I$. But hmmm this doesn’t capture what I want. Something more like sections of $\Pi_0(X)\times_X X^I \to X$ for some section $\Pi_0(X) \to X$ of the canonical map, but we don’t necessarily have $\Pi_0$ available. This is where bringing in cohesiveness would come in handy. Back to the drawing board then… Posted by: David Roberts on March 21, 2011 7:32 AM | Permalink | Reply to this Re: Homotopy Type Theory, II David is trying to see the analog of $IsContr(-)$ as a familiar construction on path spaces of a topological space. I don’t think this has to do with cohesion, but I agree that it would be helpful for following the discussion to have this spelled out. If I understand correctly, the claim is that there is a canonical and natural construction on any topological space that sends it to a contractible space if it is contractible and sends it to an empty space if not.
Let me see, we start with the path fibration $\array{ X^I \\ \downarrow^{\mathrlap{{\delta_0, \delta_1}}} \\ X \times X }$ and are supposed to first take the dependent product with respect to $\delta_1$ and then the dependent sum with respect to $\delta_0$, both along the terminal morphism $X \to *$. For this morphism the left adjoint to pullback is simply forgetting the map to $X$. The right adjoint is more interesting. Posted by: Urs Schreiber on March 21, 2011 10:16 PM | Permalink | Reply to this Re: Homotopy Type Theory, II Ah, I’ve realised where I went wrong. I hadn’t read what the type IsContr is defined as properly. Really I want something like the space $\Gamma(X)$ of sections of $\prod_{x\in X} P_x X \xrightarrow {ev_1} X$. If $X$ is not path-connected, $\Gamma(X)$ is obviously empty. If $X$ is connected but not contractible there are also no sections, but if $X$ is contractible, there is at least one section, and $\Gamma$ seems to be contractible, but one should show this by some topological analogue of the theorem $\Gamma(X) \to \Gamma(\Gamma(X))$. This would (should?) be that $\Gamma(X)^{\Gamma(\Gamma (X))}$ is (canonically?) contractible. Posted by: David Roberts on March 22, 2011 5:22 AM | Permalink | Reply to this Re: Homotopy Type Theory, II David says: Really I want something like the space $\Gamma(X)$ of sections of $\prod_{x \in X} P_x \stackrel{ev_1}{\to} X$ […] Yes, this sounds good now! (Don’t have time for more, right now.) Posted by: Urs Schreiber on March 22, 2011 8:51 AM | Permalink | Reply to this Re: Homotopy Type Theory, II Ok, so here’s the general set-up. Take a category $C$ with a functor $u:C \to Set$, such that $u$ has a left adjoint $d$, and $C$ has a path object $(-)^I$ (I’m not assuming $C$ has weak equivalences, but that will b). Also assume that $C$ has pullbacks and is cartesian closed.
Then for any $a\in C$ we can define $IsContr(a)\in C$ by $IsContr(a)\colon = C(d u a\times a,(d u a\times a)\times_{ a\times a}a^I).$ When $C$ has small products, then the dependent product along the projection $d u a\times a \to a$ exists and is isomorphic to $IsContr(a)$ as defined here. How much more structure is this than is present in the axiomatic HoTT version? (obviously apart from being a category over $Set$, which isn’t a priori defined) It should cover the general structure when looking at models in other categories. Actually, since $C$ is cartesian closed, we have $IsContr(a) = C(a,\left((d u a\times a)\times_{ a\times a}a^I\right)^{d u a})$ and the object $\left((d u a\times a)\times_{ a\times a}a^I\right)^{d u a}$ is almost $\prod_{x\in ua} P_x a$ (where $P_x a = d\{ x\} \times_a a^I$), which is almost the dependent product. It is in some cases. Obviously one can play around a bit with the adjoints here, but I won’t for the time being. I also think this covers the three cases I considered in my wrong post above, namely the topological setting, the simplicial setting and the $n$-groupoid setting. Here is a naive question along a different line: is $IsContr$ some sort of comonad? Obviously one can take $C$ relative to other categories, but $Set$ is the one which fits my examples above. Posted by: David Roberts on March 23, 2011 12:56 AM | Permalink | Reply to this Re: Homotopy Type Theory, II I can’t figure out how what you are describing is supposed to be $IsContr$. Let’s consider a topological space A. Then a point of the space you want to call $IsContr(A)$ is a continuous map $f\colon A^\delta \times A \to (A^\delta\times A)\times_{A\times A} A^I$. Since $A^\delta$ is discrete, $A^\delta \times A$ is a copower of that many copies of $A$; thus for each point $x\in A$ we have a continuous map $f_x \colon A \to (A^\delta\times A)\times_{A\times A} A^I$. Suppose for simplicity that $A$ is connected.
Then each map $pr_1 f_x \colon A \to A^\delta\times A$ must factor through one copy of $A$, say the one indexed by $g(x)\in A^\delta$. Thus so far, we have a discontinuous map $g\colon A\to A$, and for each $x\in A$ a continuous map $h_x \coloneqq pr_1 f_x \colon A\to A$. Now the composite $A \xrightarrow{f_x} A^\delta \times A \to A\times A$ sends $y\in A$ to $(g(x),h_x(y))$. The rest of the data says that for each $x$, we have paths from $g(x)$ to $h_x(y)$ which depend continuously on $y$. It seems to me that if $A$ is any connected space (or probably any space at all), then I can pick a basepoint $z\in A$ and define $g(x) = z$ and $h_x(y)=z$ for all x and y, and let the paths be constant, obtaining a point of your space. Which is of course not what we want. Am I misunderstanding something? I’m also not sure why you want to try to drag in a forgetful functor to Set. Topological spaces and simplicial sets happen to have such functors, but I certainly wouldn’t expect a general (presentation of a) $(\infty,1)$-topos to have a well-behaved such functor. I think one of the nice things about $IsContr$ is that it’s defined purely intrinsically to the category in question. I think Urs had the right way to think about it categorically: take the path space $A^I \to A\times A$ as an object of $C/(A\times A)$, apply the right adjoint to pullback along one projection $A\times A \to A$, then forget the remaining morphism to $A$. Posted by: Mike Shulman on March 23, 2011 3:35 AM | Permalink | PGP Sig | Reply to this Re: Homotopy Type Theory, II Arg, it should be sections, not maps. Thus we need $C$ locally cartesian closed (as you’ve pointed out that $Set$ is in this theory), and it should be maps over $d u a\times a$, or if the dependent product exists, maps over $a$.
If the dependent product (in $Top$, say, which is where my intuition is) along the projection $pr:X\times X \to X$ is the same as along the projection $X^\delta \times X \to X$, then I agree with most of what you say in the last paragraph. I would prefer to drop reference to an extrinsic category $Set$. But I still maintain that we need the space of sections of $\Pi_{pr}X^I \to X$ Posted by: David Roberts on March 23, 2011 6:14 AM | Permalink | Reply to this Re: Homotopy Type Theory, II If I understand you correctly, now you want to talk about the space of maps from $d u a\times a$ to $(d u a\times a)\times_{a\times a} a^I$ over $d u a\times a$, which is equivalently the space of maps from $d u a \times a$ to $a^I$ over $a\times a$. So a point of that space consists of a choice, for every $x,y\in a$, of a path from $x$ to $y$, which depends continuously on $y$. Which is to say, it consists of, for every point $x\in a$, a contraction to $x$. I think now I agree that this will give you something equivalent to $IsContr(a)$, since a contractible space can be contracted (essentially uniquely) to any of its points, but I still don’t understand why you want to bring in $d u a$. The construction of $IsContr(a)$ I described above gives you the right answer without it, and is with less superfluity (only one contraction rather than a whole bunch). Posted by: Mike Shulman on March 23, 2011 8:16 AM | Permalink | PGP Sig | Reply to this Re: Homotopy Type Theory, II I think Urs had the right way to think about it categorically: take the path space $A^I \to A \times A$ as an object of $C/A \times A$, apply the right adjoint to pullback along one projection $A \times A \to A$, then forget the remaining morphism to $A$. What’s the right adjoint $\Pi_{x \in X}$ on topological spaces? 
$Top/X \stackrel{\overset{\sum_{x \in X}}{\to}}{\stackrel{\overset{(-)\times X}{\leftarrow}}{\underset{\prod_{x \in X}}{\to}}} Top$ we compute \begin{aligned} Hom_{Top/X}(A \times X , B \to X) & = Hom_{Top}( A\times X, B) \times_{Hom_{Top}(A \times X, X)} \{p_2\} \\ & = Hom_{Top}( A, [X,B]) \times_{Hom_{Top}(A , [X,X])} \{\tilde p_2\} \\ &= Hom_{Top}(A, [X,B] \times_{[X,X]} \{Id\}) \\ &= Hom_{Top}(A, \Gamma(B)) \end{aligned} to find that $\prod_{x \in X} B = \Gamma(B)$ indeed is the space of sections. But for the case at hand we need $Top/(X \times X) \stackrel{\overset{\sum_{x \in X}}{\to}}{\stackrel{\overset{(-)\times X}{\leftarrow}}{\underset{\prod_{x \in X}}{\to}}} Top/X \,.$ Posted by: Urs Schreiber on March 23, 2011 9:18 PM | Permalink | Reply to this Re: Homotopy Type Theory, II For any map $f\colon Y\to X$, the right adjoint $\Pi_f\colon Top/Y \to Top/X$ takes a space $p\colon E\to Y$ to the space $\Pi_f E \to X$ whose fiber over $x\in X$ is the space of sections of $p|_{f^ {-1}(x)} \colon p^{-1}(f^{-1}(x)) \to f^{-1}(x)$, with a suitable topology. (This is not quite true, of course, since $Top$ is not locally cartesian closed, but there are various fixes.) Posted by: Mike Shulman on March 23, 2011 9:48 PM | Permalink | Reply to this Re: Homotopy Type Theory, II This is not quite true, of course, since $Top$ is not locally cartesian closed, but there are various fixes. Let’s talk $sSet$. Hm, or rather, remind me: if we want to model the homotopy type theory on a 1-category of topological spaces, what do we need? I guess we need at least to restrict to CW-complexes? Posted by: Urs Schreiber on March 23, 2011 10:59 PM | Permalink | Reply to this Re: Homotopy Type Theory, II The values of the $\Pi$-functors in $sSet$ should also be easy to compute using representable functors. 
An $n$-simplex of $\Pi_f E$ over an $n$-simplex $x\in X_n$ is a map $\Delta^n \to \Pi_f E$ whose composite with $\Pi_f E \to X$ is $x\colon \Delta^n \to X$, so by adjointness such a thing is the same as a map $f^{-1}(x) \to E$ over $Y$, i.e. a section of $E$ over $f^{-1}(x)$ (where $f^{-1} (x)$ means the pullback of $x\colon \Delta^n \to X$ along $f$). Posted by: Mike Shulman on March 24, 2011 6:10 AM | Permalink | PGP Sig | Reply to this Re: Homotopy Type Theory, II You’ve caught me! (-: I’ve been invoking topological spaces for intuition, but there are issues with trying to use them to model the entire theory. On the one hand, it’s easier with topological spaces than with simplicial sets to get a weak factorization system which satisfies the necessary niceness properties: we can construct a “mapping path space” using Moore paths (paths of variable length, which concatenate strictly associatively). This is described in the paper of van den Berg and Garner. In this way we get a model of dependent type theory with identity types, where the display maps are the Hurewicz fibrations. There are issues with exponentials, of course, since $Top$ is not locally cartesian closed, nor does it seem to have a nice subcategory which is. But I think that Benno and Richard’s construction should work just as well if we replace $Top$ by pseudotopological spaces or subsequential spaces, both of which are lcc. There will be the usual coherence issues with functoriality of pullback, but hopefully they can be dealt with in the usual ways without messing up the identity types. Thus it seems likely to me that we can indeed get a model of homotopy type theory this way, or at least the fragment of it that I’ve described so far. However, it isn’t the “canonical intended” model in $\infty$-groupoids, because as you point out, we’re using all spaces rather than only the “cofibrant” ones, so there aren’t enough morphisms.
But unfortunately, CW complexes are not closed under pullbacks, nor exponentials, nor under mapping path spaces; so we can’t just “restrict to the subcategory of CW complexes.” There is something a bit magical that happens, though: if we instead restrict to spaces with the homotopy type of CW complexes (m-cofibrant spaces), which are just as “cofibrant” as CW complexes for the purposes of Whitehead’s theorem, then we do get a category which is closed under mapping-path-spaces (this is clear) and under pullbacks of Hurewicz fibrations. The latter fact is not at all obvious to me, but it follows from the fact that given a fibration over an m-cofibrant space, the total space is m-cofibrant iff each fiber is. I’m told that this is originally due to Jim Stasheff; there’s a proof in 3.5.2 of May-Sigurdsson. So I think that we can restrict to m-cofibrant spaces and get a model of the exponential-free fragment of homotopy type theory. Unfortunately, as far as I can tell, this trick seems to be incompatible with exponentials. Firstly, it’s totally unclear to me whether Stasheff’s theorem above could be proven for any lcc replacement of $Top$. (The proof I know uses Milnor’s theorem that exponentials by compact spaces preserve m-cofibrancy, and the proof of that uses some serious point-set topology.) Moreover, even if we had exponentials, m-cofibrant spaces won’t in general be closed under them. On the other hand, m-cofibrant spaces do present the same $(\infty,1)$-category as simplicial sets, and the latter are lcc. So it seems possible that we could pass back and forth across that equivalence to get “exponentials up to homotopy,” which would at least satisfy the rules of $\Pi$-types up to homotopy. Posted by: Mike Shulman on March 24, 2011 7:22 AM | Permalink | PGP Sig | Reply to this Re: Homotopy Type Theory, II However, all we can prove so far is that [$Set$] is locally cartesian closed. Can we at least prove that it's also a pretopos? 
Posted by: Toby Bartels on March 19, 2011 5:58 AM | Permalink | Reply to this Re: Homotopy Type Theory, II Actually, I can see that the answer is bound to be No using only ordinary type theory, since modding out a set by an equivalence relation gives a groupoid that is not a set. But I hope that there are still pullback-stable disjoint finitary coproducts in $Set$, and that $Set$ is a pretopos assuming the existence of $\Pi_0$ or one of these mysterious exactness axioms (although these results are less likely to have actually been considered and proved). Posted by: Toby Bartels on March 19, 2011 6:07 AM | Permalink | Reply to this Re: Homotopy Type Theory, II I hope that there are still pullback-stable disjoint finitary coproducts in Set Well, that depends on the “perhaps some other constructors” that I mentioned. With just identity types, dependent sums, and functionally-extensional dependent products, I don’t think you can construct coproducts. But in extensional type theory, coproducts are one of the simplest “inductive types,” so if you include some inductive types, then you almost certainly have coproducts. (Probably some type theorist is going to jump on me now saying something about injectivity of constructors, but I think that the fragment of HoTT relating to sets should be sufficiently extensional for that not to be an issue.) (It’s also worth mentioning that if you have general enough “inductive types” then you can even define identity types and dependent sums as particular cases of them. That’s in fact what Coq and Agda do. It seems that one still has to take dependent products as basic, though.) Just having a $\pi_0$ operation won’t give you ordinary exactness of $Set$, unless you also have some sort of homotopy exactness. 
And actually, the homotopy quotient of an equivalence relation on a set, if it exists, will already be a set without any need for $\pi_0$—at least if we interpret “equivalence relation” in the usual sense (internal to $Set$) which I am advocating, rather than in the propositions-as-types sense where the “relation” map $R \to X\times X$ is not required to be monic. Posted by: Mike Shulman on March 21, 2011 4:00 AM | Permalink | PGP Sig | Reply to this Re: Homotopy Type Theory, II With just identity types, dependent sums, and functionally-extensional dependent products, I don’t think you can construct coproducts. But in extensional type theory, coproducts are one of the simplest “inductive types,” so if you include some inductive types, then you almost certainly have coproducts. Yes, I took it for granted that we have binary sums as one of the type constructors, since otherwise you can hardly have a foundation of (in this case weakly predicative constructive) mathematics (and, as you say, they won't come free). So my question is whether we have enough axioms to prove that they're disjoint pullback-stable coproducts when restricted to $Set$. (Anyway, I guess that if we don't, then we can always add this as an axiom while we're putting in the constructor itself, so maybe it's not a very important question.) Posted by: Toby Bartels on March 21, 2011 5:29 AM | Permalink | Reply to this Re: Homotopy Type Theory, II It’s true that we could always add that as an axiom, but it would be more satisfying if the standard axioms for inductive types implied disjointness and stability of coproducts. Coq’s dependent elimination is strong enough to prove disjointness of coproducts (this is just a translation into “paths language” of the standard ‘discriminate’ and ‘injection’ tactics for equality). 
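Sketched out, that universe-style “discriminate” argument is short; the following is a self-contained derivation sketch in Agda notation (names mine, using pattern matching for brevity, and not taken from any of the files mentioned in this thread).

```agda
data _≡_ {A : Set} (x : A) : A → Set where
  refl : x ≡ x

data ⊥ : Set where

record ⊤ : Set where
  constructor tt

data _+_ (A B : Set) : Set where
  inl : A → A + B
  inr : B → A + B

-- The "large elimination": a Set-valued family defined by recursion
-- on the coproduct.  This is the step that needs a universe (or the
-- equivalent) to land in.
Discr : {A B : Set} → A + B → Set
Discr (inl _) = ⊤
Discr (inr _) = ⊥

-- Transport along a path (a special case of the eliminator J).
subst : {A : Set} (P : A → Set) {x y : A} → x ≡ y → P x → P y
subst P refl p = p

-- Disjointness: no path from an inl-value to an inr-value.
disjoint : {A B : Set} {a : A} {b : B} → inl a ≡ inr b → ⊥
disjoint p = subst Discr p tt
```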
However, I don’t fully understand the theoretical basis of Coq’s dependent elimination: it uses a “match” construct as primitive rather than a recursor, and I neither understand precisely the semantics of “match,” nor whether/how it can be implemented in terms of a recursor. (I think I remember Vladimir commenting briefly on this question at Oberwolfach, indicating he didn’t know the answer either.) So I’m a little leery of asserting the validity of all of Coq’s dependent elimination in homotopy type theory—especially since, as Dan Doel pointed out, Agda’s dependent elimination is (by default) strong enough to prove the uniqueness of reflexivity proofs (“axiom K”), which is inconsistent with nontrivial homotopy models. But I would love for someone to tell me I don’t have to be. I haven’t tried pullback-stability yet, but I expect that it should also be provable, and probably less worrisome. Posted by: Mike Shulman on March 22, 2011 6:39 PM | Permalink | Reply to this Re: Homotopy Type Theory, II For what it's worth, if you write out standard axioms for inductive types in a theory of homotopy $0$-types (sets, or perhaps completely presented sets in a theory without quotient types) using recursion (instead of match) as your primitive elimination operation, then you can still prove disjointness there. Posted by: Toby Bartels on March 23, 2011 7:29 AM | Permalink | Reply to this Re: Homotopy Type Theory, II Disjointness of coproducts is pretty easy. Here is an Agda file proving they are. I used --without-K to be sure, but beyond that, I tried to restrict myself to Martin-Löf combinators as much as possible, so I only use pattern matching to define those. The large elimination corresponds to use of recursion to define an element of U in intuitionistic type theory. It’s pretty standard that proving disjointness of constructors requires use of universes or the equivalent. I also had to expand some of the categorical definitions to read $\forall x.
f x = g x$ instead of $f = g$ for the obvious reason. Sums aren’t even coproducts (only weak coproducts) in intuitionistic type theory up to the normal equality, of course. I toyed a bit with pullback stability, but it actually seemed like it’d be a more difficult proof. For instance, I got into a spot where it seemed like I’d actually need K, and thus would have to require the objects involved in the pullback to actually be sets in the homotopy sense. So, I punted, and only proved that my datatype:

record _×_ {A B Z : Set} (f : A -> Z) (g : B -> Z) : Set where
  constructor _,_<_>
  fst : A
  snd : B
  cmm : f fst == g snd

is a pullback up to extensional equality that ignores the last field. Actually proving the stability property is something I’m not sure how to do, either. Getting the identities to work out is non-trivial. Posted by: Dan Doel on March 23, 2011 7:42 AM | Permalink | Reply to this Re: Homotopy Type Theory, II If I understand what it does properly, I think pullback-stability is most naturally phrased in type theoretic terms as: $\Pi x : A + B.\; (\Sigma a:A.\; x = inl\; a) + (\Sigma b:B.\; x = inr\; b)$ This is easy to prove. You don’t need any large elims to do it, though you do need dependent elimination when eliminating the coproduct argument $x$. Posted by: Neel Krishnaswami on March 23, 2011 10:33 AM | Permalink | Reply to this Re: Homotopy Type Theory, II Yeah, that seems equivalent. At least, proving that lemma helped me define the second part of the more categorical isomorphism (I was thinking I might need to break out Inspect, and your simplified statement seems similar). I’ve updated the file to include this. I still haven’t proved the two pieces of the categorical definition are actually inverse. It’s starting to get a little hairier than I appreciate working with only ML-style combinators. Posted by: Dan Doel on March 23, 2011 12:03 PM | Permalink | Reply to this Re: Homotopy Type Theory, II This disjointness and stability is looking great.
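For reference, the lemma Neel states really is a one-line dependent case split; here is a self-contained sketch in Agda notation (the auxiliary definitions and names are mine), offered as a derivation sketch rather than anyone's actual file.

```agda
data _≡_ {A : Set} (x : A) : A → Set where
  refl : x ≡ x

record Σ (A : Set) (B : A → Set) : Set where
  constructor _,_
  field
    fst : A
    snd : B fst

data _+_ (A B : Set) : Set where
  inl : A → A + B
  inr : B → A + B

-- Dependent elimination on x refines the identity type in each branch,
-- so refl suffices as the witness.  No large elimination is needed.
split : {A B : Set} (x : A + B)
      → (Σ A (λ a → x ≡ inl a)) + (Σ B (λ b → x ≡ inr b))
split (inl a) = inl (a , refl)
split (inr b) = inr (b , refl)
```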
My next question would be, can we extend this to show that coproducts are disjoint and stable for all types, not just for sets? For disjointness I think the part about intersection should be the same, and the part about monicness should say that the induced function $map inl \colon Paths_X(x,y) \to Paths_{X+Y}(inl x, inl y)$ (and its obvious dual) is an equivalence. (What we’ve got so far is that there exists a map in the other direction, which is of course sufficient for a map between (-1)-types to be an equivalence.) It’s pretty standard that proving disjointness of constructors requires use of universes or the equivalent. That’s interesting. But I notice that the “large elimination” isn’t very large: the only output types it gives are $\top$ and $\bot$. So it seems that all you really need is a type family which includes both $\top$ and $\bot$. For instance, a subobject classifier ought to suffice. This seems kind of like the categorical facts that a category with coproducts and a terminal object is (finitary) extensive as soon as the single coproduct $1+1$ is disjoint and stable, and that elementary toposes are always extensive. Posted by: Mike Shulman on March 27, 2011 4:03 AM | Permalink | PGP Sig | Reply to this Re: Homotopy Type Theory, II This disjointness and stability is looking great. My next question would be, can we extend this to show that coproducts are disjoint and stable for all types, not just for sets? I fixed the proof of disjointness like you said. For proving the equivalence, I used the definition that Peter LeFanu Lumsdaine mentioned in the HTT3 comments, because it seemed easier. However, I needed to use an even fancier large elimination. Maybe it’s not necessary, but I couldn’t figure out how to do without it. I’m not so optimistic about my ability to prove stability. I’m already not sure how to prove that the type I have is a pullback in general. 
More realistically, do you know what I’d have to do to Neel’s lemma to make it homotopically kosher? Would proving that it’s an equivalence between A + B and (Σ A \a → x ≡ inl a) + (Σ B \b → x ≡ inr b) be sufficient? That’s interesting. But I notice that the “large elimination” isn’t very large: the only output types it gives are ⊤ and ⊥. So it seems that all you really need is a type family which includes both ⊤ and ⊥. For instance, a subobject classifier ought to suffice. The usual universe U suffices for this, and you could conceivably have an even smaller universe with just those two codes. I don’t think either of those actually acts as a subobject classifier; if I’m not mistaken, to have a classifier, we’d need the theory to be impredicative? Large elimination is a little stronger than a universe in some ways, I think. I actually flirted with making a universe and using that this time around, because I wanted to define (as the file says):

P : (A B : Set) -> (s : A + B) -> (inl x == s) -> Set
P A B (inl y) eq = foo (bar eq) == eq
P A B (inr _) ()

where () is the absurd match against obviously empty types. The +-Elim I had wasn’t working for me, because I wanted to do dependent elimination on s to refine it in the identity type. I could achieve that with +-elim and a universe, but then I realized that I didn’t have a code for (foo (bar eq) == eq) unless A and B were also coded in the universe (and thus Id (A + B) would have a code …). So, I wouldn’t have a proof for all types, I’d just have a proof schema for however high a universe you wish to write down. Similarly, my proof is only for Agda’s Set, not its Set1 and so on. This could be rectified with universe polymorphism, but I didn’t want to complicate things. Posted by: Dan Doel on March 27, 2011 12:44 PM | Permalink | Reply to this Re: Homotopy Type Theory, II The usual universe U suffices for this, and you could conceivably have an even smaller universe with just those two codes.
I don’t think either of those actually acts as a subobject classifier; if I’m not mistaken, to have a classifier, we’d need the theory to be impredicative? Yes, indeed. I was just trying to relate the discussion to things which may be more familiar to the homotopy theorists and topos theorists in the audience, by pointing out that a subobject classifier would be sufficient to assume instead of a universe. I would expect that a small “universe” with just those two codes should be a classifier for complemented subobjects. Posted by: Mike Shulman on March 28, 2011 6:47 AM | Permalink | Reply to this Re: Homotopy Type Theory, II Would proving that it’s an equivalence between A + B and (Σ A \a → x ≡ inl a) + (Σ B \b → x ≡ inr b) be sufficient? After thinking about it, I quickly realized this makes no sense. The second part depends on the first, so it doesn’t fit the type for an equivalence. But, I have another question: if we think about the proposed pullback type…

f : X -> Z
g : Y -> Z
f * g : Type

Values of f * g are of the form:

(x, y, eq)
  x : X
  y : Y
  eq : Id Z (f x) (g y)

So, if Z is not a set, then there are potentially many inhabitants (x, y, eq) even for fixed x and y. Is this a problem, or no? Posted by: Dan Doel on March 27, 2011 4:18 PM | Permalink | Reply to this Re: Homotopy Type Theory, II Is this a problem, or no? That’s the behavior we expect for homotopy pullbacks. As an extreme case, if $X$ and $Y$ are the unit type and $f$ and $g$ are identical, corresponding to a point $z\colon Z$, then the homotopy pullback is the loop space $\Omega_z Z$, which is in general quite nontrivial even though $X$ and $Y$ are trivial. Posted by: Mike Shulman on March 28, 2011 6:29 AM | Permalink | Reply to this Re: Homotopy Type Theory, II Okay. The file now contains (I think) a proof that the datatype in question is in fact a homotopy pullback. I need to take a break, but perhaps I’ll get to stability later.
Posted by: Dan Doel on March 29, 2011 12:43 PM | Permalink | Reply to this Re: Homotopy Type Theory, II If I understand what the file says correctly, then I think being a homotopy pullback is more than that. It doesn’t just mean that every homotopy commutative square factors through it up to homotopy, uniquely up to homotopy. It also means that every path between commutative squares factors through, and every path between paths, and so on—in other words, the space of homotopy commutative squares is equivalent to the space of maps into the homotopy pullback. That is, the map $(f\times g)^X \to \sum_{m\colon X\to A} \; \sum_{n\colon X\to B} \; \prod_{x\colon X} Paths_C(f m x, g n x)$ defined by $h \mapsto (f \circ h, g \circ h, \phi h)$, is an equivalence. Hmm, I guess that may require functional extensionality. (Homotopy theorists take note: this is an “internal” statement in the logic of the homotopy type theory, which is different from the “external” statement that the type in question is a homotopy pullback relative to the category-of-fibrant-objects homotopy theory on the category of types. I think the latter is basically immediate from the standard construction of homotopy pullbacks in a category of fibrant objects.) Side note: I wish that in Agda we could use a notation for path-spaces more like the $\rightsquigarrow$ that Andrej Bauer picked for his Coq files. The notation $\equiv$ looks too much like equality for my comfort. Posted by: Mike Shulman on March 29, 2011 5:32 PM | Permalink | PGP Sig | Reply to this Re: Homotopy Type Theory, II If I understand what the file says correctly, then I think being a homotopy pullback is more than that. It doesn’t just mean that every homotopy commutative square factors through it up to homotopy, uniquely up to homotopy. That’s what I get for staring at the 2-pullback definition trying to figure out what coherence condition I was missing as a premise. 
That is, the map … defined by $h \mapsto (f \circ h, g \circ h, \phi h)$, is an equivalence.

Is that all I have to prove? Because believe it or not, that was trivial, since definitional eta works for all the involved types. The file has been updated.

Side note: I wish that in Agda we could use a notation for path-spaces more like the ⇝ that Andrej Bauer picked for his Coq files.

We can. This module is entirely self-contained, so I picked the names. I just don’t really like prefix Id. I switched it to a long squiggly arrow (the short one is too squashed in monospace, I think). That kind of suggests a directionality that isn’t entirely appropriate for these types though, no (unfortunately, the emacs mode I’m using has no long, bidirectional squiggly arrow available)?

Posted by: Dan Doel on March 29, 2011 6:51 PM | Permalink | Reply to this

Re: Homotopy Type Theory, II

Ah, beautiful! I forgot that Agda has definitional eta.

That kind of suggests a directionality that isn’t entirely appropriate for these types though, no?

Heh – Peter Lumsdaine and I had this argument. I think it is appropriate to suggest a directionality, because a particular element of a path-type does have a direction. It can always be reversed, but reversal is an operation, which produces an element of a different type. And reversal is only an involution up to homotopy.

Posted by: Mike Shulman on March 29, 2011 7:05 PM | Permalink | Reply to this

Re: Homotopy Type Theory, II

Okay. Another update…. Hopefully I’m learning correctly, and wrote the correct proof from the start this time.
I assume stability works like this: we have two homotopy pullback diagrams:

$\begin{matrix} A \times_D B & \longrightarrow & B & \,\,\,\,\, & A \times_D C & \longrightarrow & C\\ \downarrow & & \downarrow\mathrlap{g} & & \downarrow & & \downarrow\mathrlap{h} \\ A & \underset{\scriptsize{f}}{\longrightarrow} & D & & A & \underset{f}{\longrightarrow} & D \end{matrix}$

There is a morphism $factor : (A \times_D B) + (A \times_D C) \to A \times_D (B + C)$. Stability means it is an equivalence. Assuming I’ve got that right, my file has a proof. And so coproducts are disjoint and stable under pullback in intensional type theory, up to having to manually write in point-wise equality for functions in quite a few places (and adding the univalence axiom would allow the more categorical statements, due to function extensionality).

Posted by: Dan Doel on March 30, 2011 2:04 AM | Permalink | Reply to this

Re: Homotopy Type Theory, II

And so coproducts are disjoint and stable under pullback in intensional type theory

Can you also prove the full statement of universal colimits: that all homotopy colimits are stable under pullback?

Posted by: Urs Schreiber on March 30, 2011 8:09 AM | Permalink | Reply to this

Re: Homotopy Type Theory, II

Well, we don’t yet know how to formalize the notion of “$(\infty,1)$-category” in homotopy type theory. (There are of course many possible definitions of an internal $(\infty,1)$-category in an $(\infty,1)$-category; the problem is that a priori they all involve specifying an infinite amount of data, so are not directly formalizable in the type theory.) So we can’t yet even write down what an arbitrary homotopy colimit would mean. Nor, actually, do we have enough axioms yet to prove that arbitrary homotopy colimits exist, even if we could define what they would mean. You can see that we are still at an early stage in many ways.
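The finite comparison map in the coproduct case is concrete enough to sketch directly. The following self-contained Agda fragment mirrors the $factor$ morphism described above, with illustrative names that need not match the actual file:

```agda
module StabilitySketch where

data _⊎_ (A B : Set) : Set where
  inl : A → A ⊎ B
  inr : B → A ⊎ B

data _≡_ {Z : Set} (z : Z) : Z → Set where
  refl : z ≡ z

module _ {A B C D : Set} (f : A → D) (g : B → D) (h : C → D) where

  -- Homotopy pullback of f against k : X → D, as a type of triples.
  record Pb {X : Set} (k : X → D) : Set where
    constructor mk
    field
      pa   : A
      px   : X
      comm : f pa ≡ k px

  -- Copairing [g , h] : B ⊎ C → D.
  gh : B ⊎ C → D
  gh (inl b) = g b
  gh (inr c) = h c

  -- factor : (A ×D B) ⊎ (A ×D C) → A ×D (B ⊎ C).
  factor : (Pb g ⊎ Pb h) → Pb gh
  factor (inl (mk a b e)) = mk a (inl b) e
  factor (inr (mk a c e)) = mk a (inr c) e
```

Both clauses typecheck because `gh (inl b)` and `gh (inr c)` reduce definitionally to `g b` and `h c`; showing that `factor` is an equivalence is the substance of the stability proof.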
Posted by: Mike Shulman on March 30, 2011 10:24 PM | Permalink | Reply to this

Re: Homotopy Type Theory, II

Mike> specifying an infinite amount of data …

Could Martin-Löf’s Mathematics of infinity be relevant in any of this?

Posted by: Bas Spitters on March 31, 2011 8:18 AM | Permalink | Reply to this

Re: Homotopy Type Theory, II

Could Martin-Löf’s Mathematics of infinity be relevant in any of this?

I don’t think so, not precisely. Of course there are ways to specify an “infinite amount of data” in type theory. We can give a function $\mathbb{N}\to X$, which specifies infinitely many elements of $X$ (where $\mathbb{N}$ is defined inductively). Or we can consider coinductive types. And certainly it seems likely that a solution to the problem referred to will involve one or both of these ideas or similar ones, and a number of us are currently working on that question. It’s just not clear yet exactly how to do it.

Martin-Löf’s Mathematics of Infinity starts off by considering many examples of coinductive objects, so it is related in that sense. But instead of distinguishing inductive from coinductive definitions, it looks to me like he considers the result of adding axioms requiring that the coinductive “infinite” objects already exist in the ordinary inductively defined type. This turns out to give a type-theoretic analogue of nonstandard analysis, which is very interesting! But doesn’t seem to me to have immediate relevance to the problem at hand.

Posted by: Mike Shulman on March 31, 2011 11:01 PM | Permalink | Reply to this

Re: Homotopy Type Theory, II

the problem is that a priori they all involve specifying an infinite amount of data, so are not directly formalizable in the type theory

Okay, I see.

Nor, actually, do we have enough axioms yet to prove that arbitrary homotopy colimits exist

How about just homotopy pushouts? Can one prove they exist? Can one show they are stable under homotopy pullback?
Can one show the pasting law for homotopy pullbacks and pushouts? Or: do you expect that this can be done?

Posted by: Urs Schreiber on March 31, 2011 7:10 AM | Permalink | Reply to this

Re: Homotopy Type Theory, II

So far, in these posts, I’ve been working in intensional type theory with a version of function extensionality. But I haven’t introduced any axioms yet which contradict extensional type theory, so the basic theory still has sound semantics in any locally cartesian closed category. Including inductive type constructors gives us coproducts, at least, and some other things, but as far as I know it doesn’t imply quotients, pushouts, coequalizers, or any colimit of that sort. Having a subobject classifier is enough to construct quotients of equivalence relations on sets, and thereby colimits in the extensional world of 0-types, but I don’t think it suffices for homotopy colimits of higher types. An exactness axiom should be enough, if we can figure out how to write it down. One can do some things with the univalence axiom too, but it’s not clear to me exactly how far it gets you. The pasting lemma for homotopy pullbacks, however, I expect would not be difficult.

Posted by: Mike Shulman on March 31, 2011 8:01 AM | Permalink | Reply to this

Re: Homotopy Type Theory, II

One can do some things with the univalence axiom too, but it’s not clear to me exactly how far it gets you.

Could you post about what is possible with the univalence axiom? My own interest is in smoothly treating “naive constructive reasoning” – i.e., simple algebra and analysis handled in Bishop style. Basically, I do program verification, and a lot of program verification works with only elementary algebra and category theory. But I desperately need support for coequalizers and coinduction to do this cleanly. I’d be quite happy with an exotic story for big/complex types, as long as the simple stuff looked reasonably familiar (at least to a constructivist).
Posted by: Neel Krishnaswami on April 1, 2011 1:23 PM | Permalink | Reply to this

Re: Homotopy Type Theory, II

Another nice extreme example is that the homotopy pullback of the diagonal $X\to X\times X$ with itself is the free loop space of $X$.

Posted by: Mike Shulman on March 29, 2011 6:54 PM | Permalink | Reply to this

Re: Homotopy Type Theory, II

Posted by: Mike Shulman on March 23, 2011 7:50 AM | Permalink | Reply to this

Re: Homotopy Type Theory, II

these mysterious exactness axioms

By the way, I think the idea of these exactness axioms shouldn’t be mysterious to anyone familiar with $(\infty,1)$-topoi. We just want an analogous property to the “exactness” part of the $(\infty,1)$-Giraud theorem: every internal groupoid is effective. (If I write a Part IV about exactness, then I’ll try to explain that for people not familiar with $(\infty,1)$-topoi.) The only mysterious part (currently) is phrasing that formally in the type theory, since an internal groupoid a priori involves an infinite amount of data.

Posted by: Mike Shulman on March 21, 2011 4:36 AM | Permalink | PGP Sig | Reply to this

Re: Homotopy Type Theory, II

If $IsContr$ plays such an important role here, perhaps $IsFutureContr$ and $IsPastContr$ (as in directed algebraic topology) will feature in a theory of directed identity types.

Posted by: David Corfield on March 21, 2011 10:39 AM | Permalink | Reply to this

Re: Homotopy Type Theory, II

There’s a start on this in Dan Licata’s PhD thesis – he uses a directed interpretation of types (i.e., types as 1-categories rather than 1-groupoids) – the idea is that the step to directedness takes you from equivalence to coercibility. He uses this to describe certain properties of substitution, which is one of the noninvertible operations of interest to logicians. :)

Posted by: Neel Krishnaswami on March 22, 2011 12:02 AM | Permalink | Reply to this

Re: Homotopy Type Theory, II

Awesome! Thanks, I hadn’t seen that yet.
I’d been sort of stumbling towards something similar; perhaps the same thing has occurred to other people too. (I think I remember some conversations about this stuff at CMU last year?) Anyway, it looks like Dan Licata has pushed these ideas further, dealing successfully with co- and contra-variance of type dependencies. He assumes an involution $(-)^{op}$, which I wouldn’t want to do in general (since I think there are potentially interesting 2-toposes that lack such an involution), but probably that is not essential. For the $(\infty,\infty)$-categorical theory, though, I still dream that we might think of a way to have “directed identity types” with elimination and computation rules analogous to the symmetrical ones.

Posted by: Mike Shulman on March 23, 2011 9:41 PM | Permalink | PGP Sig | Reply to this

Re: Homotopy Type Theory, II

From a homotopy perspective, it is natural to call this “squashed” type $\pi_{-1}(Z)$.

Don’t you mean the (-1)-truncation $\tau_{\leq -1}(Z)$ of $Z$?

Posted by: Urs Schreiber on March 21, 2011 1:13 PM | Permalink | Reply to this

Re: Homotopy Type Theory, II

Posted by: Mike Shulman on March 21, 2011 6:02 PM | Permalink | Reply to this

Re: Homotopy Type Theory, II

How is that different? “From a homotopy perspective” there is no $\pi_{-1}$. There is a $(-1)$-truncation. Generally, the idea of “squashing a type down to an $n$-type” is that of $n$-truncation, not of forming the $n$th homotopy group.

Posted by: Urs Schreiber on March 21, 2011 7:39 PM | Permalink | Reply to this

Re: Homotopy Type Theory, II

It’s true that by $\pi_{-1}$ I meant to indicate the (-1)st fundamental groupoid (which is another name for the (-1)st truncation), not the “(-1)st homotopy group.” I’m not even quite sure what the latter would mean. But homotopy groups for $n\ge 0$ do also make perfect sense in a homotopical foundation: $\pi_0$ is the 0th truncation, and $\pi_n$ is the 0th truncation of the $n$th loop space.
(If we’re working in the internal language of an $(\infty,1)$-topos, then these are of course the “categorical” homotopy groups rather than the “geometric” ones, the latter not being “internal.”)

There’s a bit of a problem that it’s common to write $\Pi_n$ for the groupoid and $\pi_n$ for the group, but in type theory, capital $\Pi$ means something different! It’s true that $\tau_{\le n}$ is another possibility, and perhaps the best one, although I’m wary of that notation ever since you pointed out that Lurie uses it for two incompatible things. I guess we could also generalize the Awodey-Bauer notation and write $[Z]_n$, or perhaps $[Z]_{n+2}$.

Posted by: Mike Shulman on March 21, 2011 8:04 PM | Permalink | PGP Sig | Reply to this

Re: Homotopy Type Theory, II

There’s a bit of a problem that it’s common to write $\Pi_n$ for the groupoid and $\pi_n$ for the group, but in type theory, capital $\Pi$ means something different!

But shouldn’t you be typing \prod, rendering to $\prod$, for the dependent product, instead of \Pi, rendering to $\Pi$? It would seem helpful to carefully harmonize notation when making the connection to homotopy theory. For instance I am looking at Coquand’s slide 30 from the conference, and see the assertion

$\pi_1(X,a) = Id_X a \; a$

Shouldn’t this read instead

$\Omega_a X = Id_X a \; a$

I suppose this is what the warning on slide 32 is meant to refer to. But it seems to me with other notation no warning here would be necessary.

Posted by: Urs Schreiber on March 21, 2011 9:57 PM | Permalink | Reply to this

Re: Homotopy Type Theory, II

I don’t really like the look of \prod for dependent products; it’s too big and ugly, and I don’t want the indexing variable underneath it, instead of as a subscript, when displayed. (-: Same with \sum and \Sigma. I think the $\pi_1$/$\Omega$ issue in Coquand’s slides is separate, though related. I pointed that out to him at the conference and he agreed that it should really be $\Omega$.
Posted by: Mike Shulman on March 21, 2011 10:03 PM | Permalink | Reply to this

Re: Homotopy Type Theory, II

I don’t really like the look of \prod for dependent products; it’s too big and ugly, and I don’t want the indexing variable underneath it, instead of as a subscript, when displayed.

You can't do this in iTeX, but in TeX you can write \prod\nolimits to get the indexing variable as a subscript even when displayed. (I have \product in my personal set of macros, with this definition.)
{"url":"http://golem.ph.utexas.edu/category/2011/03/homotopy_type_theory_ii.html","timestamp":"2014-04-19T05:00:14Z","content_type":null,"content_length":"307628","record_id":"<urn:uuid:c574884a-d2d5-48a4-9864-4e53c9084c67>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove that trigonometric terms are in arithmetic progression? May 21st 2009, 09:37 AM #1 Prove that trigonometric terms are in arithmetic progression? If $\theta_1$ and $\alpha$ are two real numbers such that $\frac{\cot^4 \theta}{\csc^4 \theta}\sec^2 \alpha$, $\frac{1}{\csc 150^{\circ}}$, $\frac{\tan^4 \theta}{\sec^4 \theta}\csc^2 \alpha$ are in Arithmetic Progression, prove that: $\frac{\sec^{2n} \alpha}{\sec^{2n + 2} \theta}$, $\frac{1}{\csc 150^{\circ}}$, $\frac{\csc^{2n} \alpha}{\csc^{2n + 2} \theta}$ are also in A.P. for all $n\in \mathbb{N}$ Vacuously True? By definition, a,b,c are in arithmetic progression iff c-b=b-a ~ a+c=2b. Given $\frac{\cot^4 \theta}{\csc^4 \theta}\sec^2 \alpha$ + $\frac{\tan^4 \theta}{\sec^4 \theta}\csc^2 \alpha$ = $\frac{2}{\csc 150^{\circ}}$ Or, $\frac{\sec^2 \alpha}{\sec^4\theta}$ + $\frac{\csc^2 \alpha}{\csc^4\theta}$ = $-\frac{4\sqrt{3}}{3}$ But the sum of two squares can never equal a negative number, therefore there are no real solutions for $\theta$ and $\alpha$. Ergo, this theorem is vacuously true. May 22nd 2009, 10:10 AM #2 Senior Member Apr 2009 Atlanta, GA
{"url":"http://mathhelpforum.com/trigonometry/89951-prove-trigonometric-terms-arithmetic-progression.html","timestamp":"2014-04-18T14:20:05Z","content_type":null,"content_length":"36092","record_id":"<urn:uuid:757f4d5e-3143-4252-a8a8-7c0a0f9b0bdb>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
A004716 - OEIS %S 1,4,6,8,9,10,20,21,23,27,28,29,32,33,34,35,36,37,39,40,41,42,44,46, %T 47,49,54,55,56,57,60,61,62,63,64,65,67,77,79,80,84,87,93,95,102,103, %U 105,109,111,112,113,115,117 %N Positions of ones in the binary expansion of log(3)/log(2)-1. %D J. M. Borwein and R. Girgensohn, Addition theorems and binary expansions, Canad. J. Math., 47 (1995), 262-273. %D S. R. Finch, Mathematical Constants, Cambridge, 2003, pp. 430-433. %H T. D. Noe, <a href="/A004716/b004716.txt">Table of n, a(n) for n=1..1000</a> %H S. R. Finch, <a href="http://www.people.fas.harvard.edu/~sfinch/constant/plff/plff.html">Plouffe's Constant</a> %K nonn %O 1,2 %A _Simon Plouffe_
{"url":"http://oeis.org/A004716/internal","timestamp":"2014-04-18T09:32:45Z","content_type":null,"content_length":"7499","record_id":"<urn:uuid:7b3cdcf0-9ae6-4d98-a541-219196f598c6>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex numbers questions August 31st 2010, 06:09 AM Complex numbers questions I have a whole load of questions that I have attempted but could not do. Hopefully someone here can help me out! 1. Express z^4 + z^3 + z^2 + z + 1 as a product of two real quadratic factors. 2. Find the zeros of z^5 - 1, giving your answers in the form r(cos theta + i sin theta), where r > 0 and -pi < theta < pi. 3. Z1 and Z2 are complex numbers on the Argand diagram relative to the origin. If |Z1 + Z2| = |Z1 - Z2| where | | denotes the moduli, show that arg Z1 and arg Z2 differ by pi/2 If you can do any of these, that would be great. I've been stuck on these for a while now. (Headbang)
{"url":"http://mathhelpforum.com/calculus/154850-complex-numbers-questions-print.html","timestamp":"2014-04-17T12:33:41Z","content_type":null,"content_length":"3623","record_id":"<urn:uuid:74359d32-bd38-4b30-be7a-2e5fc84400a8>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole. Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages. Do not use for reproduction, copying, pasting, or reading; exclusively for search engines. OCR for page 313 Adding + It Up: Helping Children Learn Mathematics 9 TEACHING FOR MATHEMATICAL PROFICIENCY Previous chapters have described mathematical proficiency as the integrated attainment of conceptual understanding, procedural fluency, strategic competence, adaptive reasoning, and productive disposition. Effective forms of instruction attend to all these strands of mathematical proficiency. In this chapter we turn from considering what there is to learn and what is known about learning to an examination of teaching that promotes learning over time so that it yields mathematical proficiency. Instruction as Interaction Our examination of teaching focuses not just on what teachers do but also on the interactions among teachers and students around content.1 Rather than considering only the teacher and what the teacher does as a source of teaching and learning, we view the teaching and learning of mathematics as the product of interactions among the teacher, the students, and the mathematics in an instructional triangle (see Box 9–1). We view the teaching and learning of mathematics as the product of interactions among the teacher, the students, and the mathematics. Certainly the knowledge, beliefs, decisions, and actions of teachers affect what is taught and ultimately learned. 
But students’ expectations, knowledge, interests, and responses also play a crucial role in shaping what is taught and learned. For instruction to be effective, students must have, perceive, and use their opportunities to learn. The particular mathematical content and its representation in instructional tasks and curriculum materials also matter for teachers’ and students’ work, but teachers and students vary in their interpretations and uses of the same content and of the same curricular resources. Students interpret and respond differently to the same mathemati- OCR for page 313 Adding + It Up: Helping Children Learn Mathematics Box 9–1 The Instructional Triangle: instruction as the interaction Among Teachers, Students, and Mathematics, in Contexts SOURCE: Adapted from Cohen and Ball, 1999, 2000, in press. cal task, ask different questions, and complete the work in different ways. Their interpretations and actions affect what becomes the enacted lesson. Teachers’ attention and responses to students further shape the course of instruction. Some teachers may not notice how students are interpreting the content, others may notice but not investigate further, and still others may notice and respond by reiterating their own interpretation. Moreover, instruction takes place in contexts. By contexts we mean the wide range of environmental and situational elements that bear on instruction—for instance, educational policies, assessments of students and teachers, OCR for page 313 Adding + It Up: Helping Children Learn Mathematics school organizational structures, school leadership characteristics, the nature and organization of teachers’ work, and the social matrix in which the school is embedded. 
These matter principally as they permeate instruction—that is, whether and how they enter into the interactions among teachers, students, and content.2 Hence, what goes on in classrooms to promote the development of mathematical proficiency is best understood through an examination of how these elements—teachers, students, content—interact in contexts to produce teaching and learning. Much debate centers on forms and approaches to teaching: “direct instruction” versus “inquiry,” “teacher centered” versus “student centered,” “traditional” versus “reform.” These labels make rhetorical distinctions that often miss the point regarding the quality of instruction. Our review of the research makes plain that the effectiveness of mathematics teaching and learning does not rest in simple labels. Rather, the quality of instruction is a function of teachers’ knowledge and use of mathematical content, teachers’ attention to and handling of students, and students’ engagement in and use of mathematical tasks. Moreover, effective teaching—teaching that fosters the development of mathematical proficiency over time—can take a variety of forms. To highlight this point, we use excerpts from four classroom lessons and analyze what we see going on in them in light of what we know from research on teaching. Four Classroom Vignettes The pedagogical challenge for teachers is to manage instruction in ways that help particular students develop mathematical proficiency. High-quality instruction, in whatever form it comes, focuses on important mathematical content, represented and developed with integrity. It takes sensitive account of students’ current knowledge and ways of thinking as well as ways in which those develop. Such instruction is effective with a range of students and over time develops the knowledge, skills, abilities, and inclinations that we term mathematical proficiency. 
The four classroom vignettes we present below offer four distinct images of what mathematics instruction can look like. Each vignette configures differently the mathematical content and the roles and work of teachers and students in contexts; hence, each produces different opportunities for mathematics teaching and learning. Two points are important to interpreting and using these vignettes. First, to provide a close view, each vignette zooms in on an individual lesson. Effective instruction, however, depends on the OCR for page 313 Adding + It Up: Helping Children Learn Mathematics coherent connection over time among lessons designed collectively to achieve important mathematical goals. For example, some of these teachers may be attempting to develop students’ productive disposition toward mathematics and as mathematics learners, but it is difficult to pinpoint isolated attempts in a single lesson since that development takes place gradually—over months rather than minutes. Second, rather than seeking to argue that one of these lessons is “right,” our analysis probes the possibilities and the risks each affords. The instructional challenge in any approach to teaching and learning is to capitalize on its opportunities and ward off its pitfalls. The first example (Box 9–2) is typical of much teaching that many American adults remember from their own experience in mathematics classes.3 Note how the teacher, Mr. Angelo, constructs the lesson in a way that structures the students’ path through the mathematics by tightly constraining both the content and his students’ encounters with it. The approach used by Mr. Angelo structures and focuses students’ attention on a specific aspect of the topic: multiplying by powers of 10. He has distilled the content into an integrated “rule” that his students can use for all instances of multiplication by powers of 10. Box 9-2 Mr. 
Angelo— Teaching Eighth Graders About Multiplying by Powers of 10 After a conducting a short warm-up activity and checking a homework assignment that focused on multiplying by 10, Mr. Angelo announces that the class is going to work on multiplying by powers of 10. He is concerned that students tend to perform poorly on this topic on the spring tests given by the school district, and he wants to make sure that his students know what to do. He reviews briefly the idea of powers of 10 by showing that 100 equals 102, 1000 equals 103, and soon. Going to the overhead projector, he writes the following: 4×10= 45×100 = 450×100= “Who knows the first one?” Mr. Angelo asks. “Luis?” “Forty,” replies Luis. Nodding, Mr. Angelo points to the second, “And this one?” Sonja near the front offers, “Forty-five hundred.” “That’s right—forty-five hundred,” affirms Mr. Angelo, and he writes the number on the overhead transparency. “And what about the last one?” he asks. “Forty-five thousand,” call out several students. OCR for page 313 Adding + It Up: Helping Children Learn Mathematics Writing “45,000,” Mr. Angelo says, “Good, you are all seeing the trick. What is it? Who can say it?” Several hands shoot into the air. Ethel says, “You just add the same number of zeros as are all together in the number and in the number you are multiplying by. Easy.” “Right,” says Mr. Angelo. “Let’s try some more and see if you are getting it.” He writes three more examples: 30×70= 40×600= 45×6000= “So who can do these?” he asks, looking over the students. “What’s the first one?” “Three hundred!” announces Robert, confidently. Mr. Angelo pauses and looks at the other students. “Who can tell Robert what he did wrong?” There is a moment of silence and then Susan raises her hand, a bit hesitantly. “I think it should be twenty-one hundred,” she says. “You have to multiply both the 3 and the 7, too, in ones like this. 
So 3 times 7 is 21, and then add two zeros—one from the 30 and one from the 70.” “Good!” replies Mr. Angelo. “Susan reminded us of something important for our trick. It’s not just about adding the right number of zeros. You also have to look to see whether the number you are multiplying by begins with something other than a 1, and if it does, you have to multiply by that number first and then add the zeros.” He writes 2100 after the equals sign and continues with the remaining examples. Mr. Angelo writes another three examples on the overhead: 4.5×0.1= 4.5×0.01= 4.5×0.001= “I wonder whether I can fool you. Now we are going to multiply by decimals that are also powers of 10: one tenth, one hundredth, one thousandth, and so on. We’ll do easy ones to start.” Who knows the first one?” he asks. “Luis?” “Point four five,” replies Luis. Nodding, Mr. Angelo rephrases Luis’s answer: “Forty-five hundredths.” He then points to the second, “How about this one?” Nadya responds, “Point zero four five,” almost inaudibly. “That’s right. Forty-five thousandths,” Mr. Angelo affirms, and he writes the number on the overhead. “And what about the last one?” “Point zero zero forty-five,” responds the girl near the front again. Mr. Angelo writes “0.0045” and says, “Good, does anyone see the rule. Who can say it?” After a long pause, one hand in the back goes up. “You just move the decimal point.” OCR for page 313 Adding + It Up: Helping Children Learn Mathematics “Right,” says Mr. Angelo. “You move the decimal point to the left as many places as there are in the multiplier.* But think now. What did we decide happens to the product when we multiply a decimal by 10, 100, or 1,000? These are the powers of 10 that are greater than one, right?” This time several hands go up. “You just add the same number of zeros to the end of the number as are in the number you are multiplying by.” “Okay, that is what we said. 
But now we are ready for a better rule now that we have looked at some powers of 10 that are less than one. They are numbers like one tenth, one hundredth, one thousandth, and so on. Instead of having two completely different rules, it is better to have one good rule. And here it is. Listen carefully: “When you multiply by a power of 10 that is greater than one, you move the decimal point to the right as many places as the number of zeros in the multiplier. When you multiply by a power of 10 that is less than one, you move the decimal point to the left as many places as there are in the multiplier.” Mr. Angelo illustrates the movement of the decimal point with a colored pen. He explains, “You can remember which way to move the decimal point if you remember that multiplying by a number greater than one makes the product bigger and multiplying by a number less than one makes the product smaller. Right makes bigger, left makes smaller.” “Let’s practice this a bit now and get it under our belts.” Mr. Angelo passes out a worksheet with 40 exercises that resemble what was done in class. He goes over the first exercise to make sure his students remember what to do. While the students work, Mr. Angelo circulates around the room, answering questions and giving hints. The students make a variety of computational errors, but most seem able to use the rule correctly. Mr. Angelo is pleased with the outcome of his lesson. * Mr. Angelo is referring to the number of places between the decimal point and the last nonzero digit in the multiplier. Strictly speaking the first factor in a product is the multiplier. But because of the commutative property, Mr. Angelo uses the term for whichever factor he wishes to focus on. OCR for page 313 Adding + It Up: Helping Children Learn Mathematics This lesson focuses on mathematical procedures for multiplying by powers of 10. Mr. 
Angelo designs the work to progress from simple examples (multiplying by 10, 100, and 1,000), to more complex ones (multiplying by multiples of powers of 10), to multiplying by powers of 10 less than one.4 He stages the examples so that the procedure he is trying to teach covers more and more cases, thus leading to a more general rule usable for multiplication by any power of 10 other than 10⁰ = 1. Mr. Angelo asks brief questions to engage students in the steps he is taking. By giving the students a rule, he simplifies their learning, heading off frustration and making getting the right answer the point—and likely to be attained. Concerned about the spring testing, he attempts to ensure that his students develop a solid grasp of the procedure and can use it reliably. He is careful to connect what are often two disjointed fragments: a rule for adding zeros when multiplying by powers of 10 greater than one and a different rule for moving the decimal point when multiplying by powers of 10 less than one.

Although Mr. Angelo integrates these two “rules,” he does not work in the underlying conceptual territory. He does not, for example, explain why, for problems such as 30×70=?, students multiply the 3 and the 7. He might have shown them that 30×70=3×10×7×10 and that, using associativity and commutativity, one can multiply 3 by 7 and then multiply that product by 10 times 10, or 100. Instead, he skips this opportunity to help the procedure make sense and adds an extra twist to the rule. He also does not show his students what they are doing when they “move the decimal point.” In fact, of course, one does not “move” the decimal point. Instead, when a number is multiplied by a power of 10 other than one, each digit can be thought of as shifting into a new decimal place. For example, since 0.05 is one tenth times 0.5, in 0.5×10⁻¹=?, the 5 can be thought of as shifting one place to the right—to the hundredths place, which is one tenth of one tenth.
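The two ideas described here—regrouping 30×70 by associativity and commutativity, and viewing multiplication by a power of 10 as a shift of digits into new places—can be checked with a short sketch. This is only an illustration, not part of the lesson; Python’s decimal module is used simply to keep the arithmetic exact:

```python
from decimal import Decimal

# 30 x 70 regrouped: (3 x 10) x (7 x 10) = (3 x 7) x (10 x 10) = 21 x 100.
assert 30 * 70 == (3 * 7) * (10 * 10) == 2100

# Multiplying by a power of 10 shifts each digit into a new place,
# rather than "moving the decimal point": 0.5 x 0.1 puts the 5 in
# the hundredths place; 4.5 x 0.01 shifts both digits two places.
print(Decimal("0.5") * Decimal("0.1"))   # 0.05
print(Decimal("4.5") * Decimal("0.01"))  # 0.045
```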
If a 5 is in the tens place, then multiplying by 10 shifts it to the left one place, to the hundreds place: What was 50 is now 500. Describing these changes in terms of “adding zeros” or “moving the decimal point” stays at the surface level of changes in written symbols and does not go beneath to the numbers themselves and what it means to multiply them. Students miss an opportunity to see and use the power of place-value notation: that the placement of digits in a numeral determines their value. A 5 in the tens place equals 50; in the hundredths place, 0.05; and in the ones place, 5. Mr. Angelo offers his students an effective and mathematically justifiable rule, but he does so without exploring its conceptual underpinnings.

In lessons such as Mr. Angelo’s, mathematics entails following rules and practicing procedures, often with little attention to the underlying concepts.5 Procedural fluency is given central attention. Adaptive reasoning is not Mr. Angelo’s goal: He does not offer a justification for the rule he is teaching, nor does he engage students in reasoning about the structure of the place-value notation system that is its foundation. He focuses instead on ensuring that they can use it correctly. Other aspects of mathematical proficiency are also not on his agenda. Instead, Mr. Angelo has a clear purpose for the lesson, and to accomplish that purpose he controls its pace and content. Students speak only in response to closed questions calling for a short answer, and students do not interact with one another. When a student gets an answer wrong, Mr. Angelo signals that immediately and asks someone else to provide the correct answer. The lesson is paced quickly.

We turn now to our second teacher, Ms. Lawrence, who is working with her fifth graders on adding fractions (Box 9–3). Ms. Lawrence’s goals are different from Mr. Angelo’s.
Although she also structures the lesson to accomplish her goals, unlike Mr. Angelo, she emphasizes explanation and reasoning along with procedures. The pace of the lesson is carefully controlled to allow students time to think but with enough momentum to engage and maintain their interest.

Box 9–3 Ms. Lawrence—Teaching Fifth Graders About Adding Fractions

After a few minutes in which the class does mental computation to warm up, Ms. Lawrence reviews equivalent fractions by asking the students to provide other names for a fraction. She asks the class what fractions are called that “name the same number.” On the chalkboard she writes a problem involving the addition of fractions with like denominators: 3/8 + 4/8 = ? She asks the students how to find the sum. One student, Betsy, volunteers that you just add the numerators and write the sum over the denominator. “Why does this work?” Ms. Lawrence asks. She asks Betsy to go to the board and explain. Confidently, Betsy draws two pie diagrams, one for each fraction, and explains that the denominator tells the size of the pieces and the numerators how many pieces all together.

In response, Ms. Lawrence poses another problem, this time involving unlike denominators: 2/3 + 1/4 = ? “How would we find the sum of these two?” she asks. Stepping back, she gives the students a chance to think. She then asks whether the sum would be less than or greater than 1. Several students raise their hands, eager to respond. Ms. Lawrence calls on Susan, who explains that the sum would be less than 1 because 1/4 is less than 1/3, and 2/3 + 1/3 equals exactly 1. Ms. Lawrence then asks how you could find the exact sum. Jim raises his hand and offers 8/12 and 3/12 as equivalent fractions with a common denominator. Ms. Lawrence writes on the chalkboard as Jim dictates: 2/3 + 1/4 = 8/12 + 3/12. She asks Jim why he chose 12 as the common denominator. “Twelve is the smallest number that both 3 and 4 go into,” replies Jim. “How did you come up with that?” Ms. Lawrence asks.
“By multiplying 3 and 4,” he answers.

Ms. Lawrence turns to the class. “Let’s take a closer look. Jim got the equivalent fractions by multiplying the numerator and denominator of each fraction by the denominator of the other fraction. So if we show all the steps, it looks like this.” She then reworks the problem to make her point, justifying each step by giving a property of the rational numbers: 2/3 + 1/4 = (2×4)/(3×4) + (1×3)/(4×3) = 8/12 + 3/12 = 11/12.

Ms. Lawrence stops and looks at the students. “How do we know that what Jim did makes sense? How do we know that he is adding the same fractions as in the original problem: 2/3 and 1/4? This is really important. Maybe he has just added two other fractions.”

“Oh!” exclaims Lucia. “I know! Two thirds is equivalent to eight twelfths. We could show that with a picture like what Betsy drew for three eighths and four eighths. If we draw two thirds on a pie that has three pieces, those two pieces will actually make eight pieces on that same pie if it’s divided into 12. But the eight pieces, eight twelfths, will equal the same total amount of pie as two pieces that are each one third of the pie.” She pauses and beams, looking at Ms. Lawrence expectantly. “Is that right?”

“Yes, you explained it well,” says Ms. Lawrence. “Can someone come up and make pictures to show what Lucia just said?” Several hands go up, and Ms. Lawrence picks Nicole, who comes to the board and represents accurately what Lucia said. Ms. Lawrence makes a few additional remarks to make sure that all the students understand.

Ms. Lawrence continues with three more examples, showing all the steps in each. She then asks the students to generalize the process by writing “a rule that would work for any two fractions.” Several students volunteer a verbal rule. “Let’s try this out on a couple of less obvious examples,” she says, writing on the overhead projector. Ms. Lawrence asks the students to work on these problems in pairs.
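The general rule the students are asked to verbalize (rewrite a/b and c/d over the common denominator b×d, then add the numerators) can be sketched as follows. This is an illustration added for the reader, not part of the vignette; Python’s fractions module is used only to check the arithmetic, and it reduces results to lowest terms automatically:

```python
from fractions import Fraction

def add_fractions(a: int, b: int, c: int, d: int) -> Fraction:
    # a/b + c/d = (a*d + c*b) / (b*d); Fraction reduces to lowest terms.
    return Fraction(a * d + c * b, b * d)

# Like denominators: 3/8 + 4/8 = 7/8.
print(add_fractions(3, 8, 4, 8))  # 7/8
# Unlike denominators: 2/3 + 1/4 = 8/12 + 3/12 = 11/12.
print(add_fractions(2, 3, 1, 4))  # 11/12
```

Note that the rule uses b×d as the common denominator, whereas Jim used the least common denominator; both give the same reduced sum.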
As the students work, she walks around, listening, observing, and answering questions. Satisfied that the students seem to understand and are able to carry out the procedure, she assigns a page from their textbook for practice. The assignment contains a mixture of problems in adding fractions, including some fractions that already have like denominators and many that do not, and in adding whole numbers as well as several word problems. Ms. Lawrence wants the practice that she provides to require the students to think and not merely follow the algorithm blindly. She believes that this way of working will equip them well for the standardized test her district administers in April and the basic skills test they have to take at the beginning of sixth grade. She expects the students to remember the procedure because they have had opportunities to learn why it makes sense. She knows that this approach is understandable to her students’ parents, while at the same time she is stretching them beyond what some have been demanding—a solid focus on basic skills. She feels comfortable with the balance she has struck on these issues.

SOURCE: This vignette was constructed to embody the principles from Good, Grouws, and Ebmeier, 1983.

In this lesson, Ms. Lawrence is trying to develop her students’ ability to add fractions with like or unlike denominators. She wants them to understand how to convert fractions to fractions with the same denominator and add them, and to have a reliable procedure for doing so. She also wants them to understand why the procedure works. Her lesson is designed to engage the students actively in the conceptual and procedural development of the topic. She begins by reviewing equivalent fractions, a concept both familiar and necessary for the new work. She poses a variety of questions and expects the students to explain their reasoning.
She does not stop with well-articulated statements of the procedure but demands explanation and connection to the underlying meaning. She seeks to make the procedure make sense by asking for and providing explanations. In this lesson, time is spent in a variety of ways to address Ms. Lawrence’s goals: The students spend time practicing mental computation, developing a general rule for adding fractions, explaining and making sense of others’ explanations, and working with a partner to practice on more complex examples of what they were learning. The lesson proceeds at a steady pace, but one that affords time for developing the ideas. Ms. Lawrence checks to see whether the students are understanding before she assigns them independent work, and the assignment mixes familiar and extension problems to help strengthen students’ proficient command of the content. Although the focus of the lesson is not on strategic competence, when she asks students to estimate the sum of two fractions, she is helping them become sensitive to strategies they might use. Our third teacher, Mr. Hernandez, is working on making and linking different representations of rational numbers (Box 9–4). He works hard to engage all his students in active work on the mathematics. Toward that end, he asks challenging questions that allow for a variety of solutions, and he expects the students to push themselves. He is conscious of the district and state basic skills assessments, but he has concluded that if he invests in this sort of work with his students, it pays off in their preparedness for the test. Occasionally, he finds that the approach is not working for some of his students, and he seeks ways to build their skills more solidly. He worries a bit, since the parents have been quite vocal in his school, with much pressure about getting students to algebra in eighth grade. 
He takes a strong stand on the importance of developing a solid foundation with number and representation, particularly with rational numbers.

This lesson is different from either Mr. Angelo’s or Ms. Lawrence’s. Mr. Hernandez has selected a task that draws on students’ past experience

it, not just how much time is allocated for mathematics but how that time is spent. They need to investigate not just whether calculators or other resources are used, but how they are used.70 Research that looks across countries can provide a sharper picture of what matters in instruction aimed at developing proficiency.

A second set of issues concerns instruction over time. Although learning is fundamentally temporal, too little research has addressed the ways in which instruction develops over time. Many studies are restricted to isolated fragments of teaching and learning, providing little understanding of how the interactions of teachers, students, and content emerge over time, and how earlier interactions shape later ones. How do ideas developed in class affect later work, and what affects teachers’ and students’ ability and inclination to make such links, as well as their use of such connections over time? How is time used, and how does its use by teachers and students affect the quality of instruction?

A third arena concerns students and how their diversity affects instruction. Too little research offers insight into the experience of students and how the instruction offered, together with their responses to it, affects their learning. Still more important, there are too few well-designed studies that would offer insight into how instruction might be developed to work effectively for all students.
Too often, research on classroom teaching and learning either studies faceless, colorless students and teachers out of context, or it is situated in particular contexts but lacks a design that permits analyses that could provide the knowledge needed for effective instruction in mathematics.

Fourth, too little research has addressed what it takes for students to learn mathematics in class. What do students need to do, and know how to do, in order to profit from the instruction offered by each of our four teachers? A cursory glance at any mathematics class makes plain that the skills, abilities, knowledge, and dispositions displayed by students are not the same, and yet teachers and researchers rarely attend to what students need to know and be able to do in order to use instruction effectively. People seem to assume implicitly that instruction acts on students and that opportunities to learn are actually moments of learning. Research that examined both what students have to know and do in mathematics instruction and what teachers can do to enable all students to make use of that instruction would add significantly to the knowledge base on teaching and learning mathematics.

A fifth set of issues has to do with reconnecting research on teacher knowledge with instructional effectiveness. Although most people believe that teachers’ knowledge of mathematics and of students makes a difference for the quality of teaching, little empirical confirmation of this belief can be found. Moreover, too little is known about the mathematical knowledge that teachers need and how it is used in instruction. We discuss this point more in chapter 10, but it is important to the discussion in this chapter, too.
Every time we reiterate that how teachers use texts, manipulatives, and calculators makes the difference, we are hovering around questions concerning what teachers know and how they make use of that knowledge in teaching.

Finally, too little of the extant research probes the work of teaching at a sufficiently fine grain to contribute to the development of a conceptual and practical language of practice. Much of the interactive work in instruction remains unexamined, which leaves to teachers the unnecessary challenge of reinventing their practice from scratch, armed with only general advice. Suggestions that a class “discuss the solutions to a problem” provide little specificity about what constitutes a productive discussion and run the risk of a free-for-all session that resembles sharing more than instruction. Research needs to be designed to illuminate what is entailed in a “discussion” and to probe the specific moves that teachers and students engage in that lead to productive rather than unproductive discussions.

Instruction that develops mathematical proficiency is neither simple, common, nor well understood. It comes in many forms and can follow a variety of paths. As this chapter demonstrates, such instruction offers numerous fertile sites for research that could make a profound difference in teachers’ practice and their students’ learning.

Notes

1. An interactive perspective on teaching and learning has been discussed by a number of people, including Piaget, Vygotsky, Bauersfeld, Steier, Voigt, Hawkins, Gravemeijer, Easley, Cobb, and von Glasersfeld. The particular version employed here is based on the work of Cohen and Ball, 1999, 2000, in press.
2. Cohen and Ball, 1999, 2000, in press.
3. This lesson is typical of lessons observed in many U.S. classrooms during the past half-century. See, for example, the report by Fey, 1979, or the more recent TIMSS video study (Stigler and Hiebert, 1999).
4. Note that Mr.
Angelo has avoided 10⁰, partly because the rule is stated in terms of moving the decimal point, and multiplying by 10⁰ = 1 leaves the number unchanged.
5. U.S. eighth-grade lessons from the TIMSS video study were characterized the same way. See Stigler and Hiebert, 1999.
6. Cohen and Ball, 2000.
7. Berliner and Biddle, 1995. Opportunity to learn was also studied in what is now called the First International Mathematics Study (Husén, 1967), although there it was based on teachers’ perceptions of students’ opportunity to learn.
8. McKnight, Crosswhite, Dossey, Kifer, Swafford, Travers, and Cooney, 1987.
9. Knapp, Shields, and Turnbull, 1995; Mason, Schroeter, Combs, and Washington, 1992; Steele, 1992.
10. Berliner, 1979.
11. Stevenson and Stigler, 1992, p. 150.
12. Freeman and Porter, 1989; Porter, 1993.
13. See, for example, Campbell, 1996; Carpenter, Fennema, Peterson, Chiang, and Loef, 1989; Hiebert, Carpenter, Fennema, Fuson, Wearne, Murray, Olivier, and Human, 1997; Knapp, 1995; Silver and Stein, 1996.
14. Doyle, 1983, 1988; Stein, Grover, and Henningsen, 1996.
15. Henningsen and Stein, 1997; Stein, Grover, and Henningsen, 1996.
16. Clark and Yinger, 1979.
17. Shavelson and Stern, 1981.
18. Boaler, 1997.
19. Good and Brophy, 2000.
20. Good and Brophy, 2000.
21. Smith, 1996.
22. For example, Hatano, 1988, suggests that students are motivated to learn with understanding when they encounter novel problems regularly, are encouraged to seek comprehension over efficiency, and engage in dialogue.
23. National Research Council, 1999b, pp. 29–38.
24. Feather, 1982.
25. Bandura, 1997; Bandura and Schunk, 1981; Dweck and Elliott, 1983.
26. Good and Brophy, 2000.
27. Brophy, 1998; Brophy and Kher, 1986; Good and Brophy, 2000.
28. These principles and the discussion that follows are based largely on a synthesis by Baroody, 1999.
For related research and syntheses, see also Baroody, 1987, 1996; Cawley, 1985; and Geary, 1993. For practical advice for teaching, see Thornton and Bley, 1994.
29. Baroody, 1999.
30. See Donlan, 1998, for example, for a discussion of students with speech deficiencies. See Nunes and Moreno, 1998, for a discussion of hearing impairment.
31. Becker, 1981; Leder, 1987. See also Leder, 1992.
32. Ladson-Billings, 1999.
33. Foster, 1995.
34. Steele, 1992.
35. Knapp, 1995.
36. Good and Brophy, 2000.
37. See, for example, Ball and Bass, 2000; Cobb, Boufi, McClain, and Whitenack, 1997; Hiebert and Wearne, 1993; Lampert, 1990; Wood, 1999.
38. Hiebert, Carpenter, Fennema, Fuson, Wearne, Murray, Olivier, and Human, 1997.
39. Oakes, 1985; Oakes, Gamoran, and Page, 1992.
40. Kulik, 1992; Linchevski and Kutscher, 1998; Mason and Good, 1993; Mosteller, Light, and Sachs, 1996; Slavin, 1987, 1993.
41. Loveless, 1998.
42. Linchevski and Kutscher, 1998.
43. Argys, Rees, and Brewer, 1996.
44. Druckman and Bjork, 1994, pp. 83–111; Johnson, Johnson, and Maruyama, 1983; Sharan, 1980; Slavin, 1980, 1983, 1995.
45. Ellis and Gauvain, 1992.
46. Fennema, Carpenter, Franke, Levi, Jacobs, and Empson, 1996; Thompson and Briars, 1989.
47. Hiebert, 1990.
48. Case, 1985.
49. Flanders, 1987; McKnight, Crosswhite, Dossey, Kifer, Swafford, Travers, and Cooney, 1987; Schmidt, McKnight, and Raizen, 1997.
50. Siegler and Stern, in press; Sophian, 1997.
51. Carpenter, Fennema, Peterson, Chiang, and Loef, 1989; Cobb, Wood, Yackel, Nicholls, Wheatley, Trigatti, and Perlwitz, 1991; Fennema, Carpenter, Franke, Levi, Jacobs, and Empson, 1996; Hiebert and Wearne, 1993.
52. Cooper, 1989; Epstein, 1988; Miller and Kelley, 1991.
53. Epstein, 1998; Good and Brophy, 2000.
54. Good and Brophy, 2000.
55. Fuson, 1986; Fuson and Briars, 1990; Wearne and Hiebert, 1988.
56. Cohen, 1990; Hart, 1996; Resnick and Omanson, 1987.
57. Ball, 1992a, 1992b.
58.
Thompson and Lambdin, 1994.
59. Fuson, Wearne, Hiebert, Murray, Human, Olivier, Carpenter, and Fennema, 1997; Hiebert, Carpenter, Fennema, Fuson, Wearne, Murray, Olivier, and Human, 1997.
60. Fuson, 1986.
61. Fey, 1989; NCTM, 1974.
62. Brolin and Björk, 1992; Groves, 1993, 1994a, 1994b; Hembree and Dessart, 1986, 1992; Ruthven, 1996, 1998; Shuard, 1992.
63. Hembree and Dessart, 1986, 1992.
64. Ruthven, 1996.
65. Brolin and Björk, 1992.
66. Groves, 1993, 1994a, 1994b.
67. Shuard, 1992.
68. Mitchell, Hawkins, Jakwerth, Stancavage, and Dossey, 1999.
69. National Research Council, 1999a, p. 48.
70. Stigler and Hiebert, 1999.

References

Argys, L.M., Rees, D.I., & Brewer, D.J. (1996). Detracking America’s schools: Equity at zero cost? Journal of Policy Analysis and Management, 15, 623–645.
Ball, D.L. (1992a). Constructing new forms of teaching: Subject matter knowledge in inservice teacher education. Journal of Teacher Education, 43, 347–356.
Ball, D.L. (1992b). Magical hopes: Manipulatives and the reform of math education. American Educator, 14, 46–47.
Ball, D.L., & Bass, H. (2000). Making believe: The collective construction of public mathematical knowledge in the elementary classroom. In D.Phillips (Ed.), Constructivism in education (Ninety-ninth Yearbook of the National Society for the Study of Education, Part 1, pp. 193–224). Chicago: University of Chicago Press.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.
Bandura, A., & Schunk, D. (1981). Cultivating competence, self-efficacy, and intrinsic interest through proximal self-motivation. Journal of Personality and Social Psychology, 41, 586–598.
Baroody, A.J. (1987). Children’s mathematical thinking: A developmental framework for preschool, primary, and special education teachers. New York: Teachers College Press.
Baroody, A.J. (1996). An investigative approach to teaching children labeled learning disabled.
In D.K.Reid, W.P.Hresko, & H.L.Swanson (Eds.), Cognitive approaches to learning disabilities (3rd ed., pp. 545–615). Austin, TX: Pro-Ed.
Baroody, A.J. (1999). The development of basic number and arithmetic knowledge among children classified as mentally retarded. In L.M.Glidden (Ed.), International review of research in mental retardation (vol. 22, pp. 51–103). New York: Academic Press.
Becker, J. (1981). Differential treatment of females and males in mathematics class. Journal for Research in Mathematics Education, 12, 40–53.
Berliner, D. (1979). Tempus educare. In P.Peterson & H.Walberg (Eds.), Research on teaching: Concepts, findings, and implications. Berkeley, CA: McCutchan.
Berliner, D., & Biddle, B. (1995). The manufactured crisis: Myth, fraud, and the attack on America’s public schools. New York: Addison-Wesley.
Boaler, J. (1997). Experiencing school mathematics: Teaching styles, sex and setting. Buckingham, UK: Open University Press.
Brolin, H., & Björk, L-E. (1992). Introducing calculators in Swedish schools. In J.T.Fey and C.R.Hirsch (Eds.), Calculators in mathematics education (1992 Yearbook of the National Council of Teachers of Mathematics, pp. 226–232). Reston, VA: NCTM.
Brophy, J. (1998). Motivating students to learn. Boston: McGraw-Hill.
Brophy, J., & Kher, N. (1986). Teacher socialization as a mechanism for developing student motivation to learn. In R.Feldman (Ed.), Social psychology applied to education (pp. 256–288). New York: Cambridge University Press.
Campbell, P.F. (1996). Empowering children and teachers in the elementary mathematics classrooms of urban schools. Urban Education, 30, 449–475.
Case, R. (1985). Intellectual development: Birth to adulthood. New York: Academic Press.
Carpenter, T.P., Fennema, E., Fuson, K., Hiebert, J., Human, P., Murray, H., Olivier, A., & Wearne, D. (1999). Learning basic number concepts and skills as problem solving. In E.Fennema & T.A.Romberg, Mathematics classrooms that promote understanding (pp. 45–61).
Mahwah, NJ: Erlbaum.
Carpenter, T.P., Fennema, E., Peterson, P.L., Chiang, C.P., & Loef, M. (1989). Using knowledge of children’s mathematics thinking in classroom teaching: An experimental study. American Educational Research Journal, 26, 499–531.
Cawley, J.F. (1985). Cognition and the learning disabled. In J.F.Cawley (Ed.), Cognitive strategies and mathematics for the learning disabled (pp. 1–32). Rockville, MD: Aspen.
Clark, C., & Yinger, R. (1979). Teachers’ thinking. In P.L.Peterson & H.J.Walberg (Eds.), Research on teaching: Concepts, findings, and implications (pp. 231–263). Berkeley, CA: McCutchan.
Cobb, P., Boufi, A., McClain, K., & Whitenack, J. (1997). Reflective discourse and collective reflection. Journal for Research in Mathematics Education, 28, 258–277.
Cobb, P., Wood, T., Yackel, E., Nicholls, J., Wheatley, G., Trigatti, B., & Perlwitz, M. (1991). Assessment of a problem-centered second-grade mathematics project. Journal for Research in Mathematics Education, 22, 3–29.
Cohen, D.K. (1990). Revolution in one classroom: The case of Mrs. Oublier. Education Evaluation and Policy Analysis, 12, 311–329.
Cohen, D.K., & Ball, D.L. (1999). Instruction, capacity, and improvement (CPRE Research Report No. RR-043). Philadelphia: University of Pennsylvania, Consortium for Policy Research in Education. Available: http://www.gse.upenn.edu/cpre/docs/pubs/rr43.pdf. [July 10, 2001].
Cohen, D.K., & Ball, D.L. (2000, April). Instructional innovation: Reconsidering the story. Paper presented at the meeting of the American Educational Research Association, New Orleans.
Cohen, D.K., & Ball, D.L. (in press). Making change: Instruction and its improvement. Phi Delta Kappan.
Cooper, H. (1989). Homework. New York: Longman.
Donlan, C. (1998). Number without language? Studies of children with specific language impairments. In C.Donlan (Ed.), The development of mathematical skills (pp. 255–274).
East Sussex, UK: Psychology Press.
Doyle, W. (1983). Academic work. Review of Educational Research, 53, 159–199.
Doyle, W. (1988). Work in mathematics classes: The context of students’ thinking during instruction. Educational Psychologist, 23, 167–180.
Druckman, D., & Bjork, R.A. (Eds.). (1994). Learning, remembering, believing: Enhancing human performance. Washington, DC: National Academy Press. Available: http://books.nap.edu/catalog/2303.html. [July 10, 2001].
Dweck, C., & Elliott, E. (1983). Achievement motivation. In E.M.Heatherington (Ed.), P.H.Mussen (Series Ed.), Handbook of child psychology: Vol. 4. Socialization, personality, and social development (4th ed., pp. 643–691). New York: Wiley.
Ellis, S.A., & Gauvain, M. (1992). Social cultural influences on children’s collaborative interactions. In L.T.Winegar & J.Valsiner (Eds.), Children’s development within social context (pp. 155–180). Hillsdale, NJ: Erlbaum.
Epstein, J. (1988). Homework practices, achievements, and behaviors of elementary school students (Report No. 26). Baltimore: Johns Hopkins University, Center for Research on Elementary and Middle Schools. (ERIC Document Reproduction Service No. ED 301 322).
Epstein, J. (1998, April). Interactive homework: Effective strategies to connect home and school. Paper presented at the meeting of the American Educational Research Association, San Diego, CA.
Feather, N. (Ed.). (1982). Expectations and actions. Hillsdale, NJ: Erlbaum.
Fennema, E., Carpenter, T.P., Franke, M.L., Levi, L., Jacobs, V.R., & Empson, S.B. (1996). A longitudinal study of learning to use children’s thinking in mathematics instruction. Journal for Research in Mathematics Education, 27, 403–434.
Fey, J.T. (1979). Mathematics teaching today: Perspectives from three national surveys. Mathematics Teacher, 72, 490–504.
Fey, J.T. (1989).
Technology and mathematics education: A survey of recent developments and important problems. Educational Studies in Mathematics, 20, 237–272.
Flanders, J.R. (1987, September). How much of the content in mathematics textbooks is new? Arithmetic Teacher, 35(1), 18–23.
Foster, M. (1995). African American teachers and culturally relevant pedagogy. In J.A.Banks & C.M.Banks (Eds.), Handbook of research on multicultural education (pp. 570–581). New York: Macmillan.
Freeman, D., & Porter, A. (1989). Do textbooks dictate the content of mathematics instruction in elementary schools? American Educational Research Journal, 26, 403–421.
Fuson, K.C. (1986). Roles of representation and verbalization in the teaching of multidigit addition and subtraction. European Journal of Psychology of Education, 1, 35–56.
Fuson, K.C., Wearne, D., Hiebert, J.C., Murray, H.G., Human, P.G., Olivier, A.I., Carpenter, T.P., & Fennema, E. (1997). Children’s conceptual structures for multidigit numbers and methods of multidigit addition and subtraction. Journal for Research in Mathematics Education, 28, 130–162.
Fuson, K.C., & Briars, D.J. (1990). Using a base-ten block learning/teaching approach for first and second grade place-value and multidigit addition and subtraction. Journal for Research in Mathematics Education, 21, 180–206.
Geary, D.C. (1993). Mathematical disabilities: Cognitive, neuropsychological, and genetic components. Psychological Bulletin, 114, 345–362.
Geary, D.C. (1994). Children’s mathematical development: Research and practical applications. Washington, DC: American Psychological Association.
Good, T.L., & Brophy, J.E. (2000). Looking in classrooms (8th ed.). New York: Longman.
Good, T.L., Grouws, D.A., & Ebmeier, H. (1983). Active teaching. New York: Longman.
Groves, S. (1993). The effect of calculator use on third graders’ solutions of real world division and multiplication problems.
In I.Hirabayashi, N.Nodha, K.Shigematsu, & F.L.Lin (Eds.), Proceedings of the Seventeenth International Conference for the Psychology of Mathematics Education (vol. 2, pp. 9–16). Tsukuba, Japan: PME Program Committee. (ERIC Document Reproduction Service No. ED 383 536).
Groves, S. (1994a). Calculators: A learning environment to promote number sense. Paper presented at the meeting of the American Educational Research Association, New Orleans. (ERIC Document Reproduction Service No. ED 373 969).
Groves, S. (1994b). The effect of calculator use on third and fourth graders’ computation and choice of calculating device. In J.da Ponte & J.F.Matos (Eds.), Proceedings of the Eighteenth International Conference for the Psychology of Mathematics Education (vol. 3, pp. 33–40). Lisbon, Portugal: PME Program Committee. (ERIC Document Reproduction Service No. ED 383 537).
Hatano, G. (1988). Social and motivational bases for mathematical understanding. In G.B.Saxe & M.Gearhart (Eds.), Children’s mathematics (pp. 55–70). San Francisco: Jossey-Bass.
Hart, K. (1996). What responsibility do researchers have to mathematics teachers and children? In C.Alsina, J.M.Alvarez, B.Hodgson, C.Laborde, & A.Perez (Eds.), 8th International Congress on Mathematics Education: Selected lectures (pp. 251–256). Seville, Spain: S.A.E.M.Thales.
Hembree, R., & Dessart, D.J. (1986). Effects of hand-held calculators in precollege mathematics education: A meta-analysis. Journal for Research in Mathematics Education, 17, 83–99.
Hembree, R., & Dessart, D.J. (1992). Research on calculators in mathematics education. In J.Fey & C.Hirsch (Eds.), Calculators in mathematics education (pp. 23–32). Reston, VA: National Council of Teachers of Mathematics.
Henningsen, M., & Stein, M.K. (1997). Mathematical tasks and student cognition: Classroom-based factors that support and inhibit high-level mathematical thinking and reasoning.
Journal for Research in Mathematics Education, 28, 524–549. Hiebert, J. (1990). The role of routine procedures in the development of mathematical competence. In T.J.Cooney & C.R.Hirsch (Eds.), Teaching and learning mathematics in the 1990s (1990 Yearbook of the National Council of Teachers of Mathematics, pp. 31–40). Reston, VA: NCTM. Hiebert, J., & Wearne, D. (1993). Instructional tasks, classroom discourse, and student learning in second grade. American Educational Research Journal, 30, 393–425. Hiebert, J., Carpenter, T.P., Fennema, E., Fuson, K.C., Wearne, D., Murray, H., Olivier, A., & Human, P. (1997). Making sense: Teaching and learning mathematics with understanding. Portsmouth, NH: Heinemann. Husén, T. (Ed.). (1967). International study of achievement in mathematics: A comparative study of twelve countries (vol. 2). New York: Wiley. Johnson, D., Johnson, R., & Maruyama, G. (1983). Interdependence and interpersonal attraction among heterogeneous and homogeneous individuals: A theoretical formulation and a meta-analysis of the research. Review of Educational Research, 53, 5– 54. Knapp M.S., Shields, P.M., Turnbull, B.J. (1995). Academic challenge in high-poverty classrooms. Phi Delta Kappan, 76, 770–776. Knapp, M.S. (1995). Teaching for meaning in high poverty classrooms. New York: Teachers College Press. Kulik, J.A. (1992). An analysis of the research on ability grouping: Historical and contemporary perspectives (Ability Grouping Research-Based Decision Making Series, No. 9204). Ann Arbor: University of Michigan. Ladson-Billings, G. (1999). Mathematics for all? Perspectives on the mathematics achievement gap. Unpublished paper prepared for the National Research Council, Washington, DC. Lampert, M. (1990). When the problem is not the question and the solution is not the answer: Mathematical knowing and teaching. American Educational Research Journal, 27, 29–63. Leder, G.C. (1987). Teacher student interaction: A case study. 
Educational Studies in Mathematics, 18, 255–271. OCR for page 313 Adding + It Up: Helping Children Learn Mathematics Leder, G.C. (1992). Mathematics and gender: Changing perspectives. In D.Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 597–622). New York: Macmillan. Linchevski, L., & Kutscher, F. (1998). Tell me with whom you’re learning, and I’ll tell you how much you’ve learned: Mixed-ability versus same-ability grouping in mathematics. Journal for Research in Mathematics Education, 29, 533–554. Loveless, T. (1998). The tracking and ability grouping debate. Fordham Report, 2(8), 1– 27. Available: http:// www.edexcellence.net/library/track.html. [July 10, 2001]. Mason, D.A., & Good, T.L. (1993). Effects of two-group and whole-class teaching on regrouped elementary students’ mathematics achievement. American Educational Research Journal, 30, 328–360. Mason, D., Schroeter, D., Combs, R., & Washington, K. (1992). Assigning average achieving eighth graders to advanced mathematics classes in an urban junior high. Elementary School Journal, 92, 587–599. McKnight, C.C., Crosswhite, F.J., Dossey, J.A., Kifer, E., Swafford, J.O., Travers, K.J., & Cooney, T.J. (1987). The underachieving curriculum: Assessing U.S. schools mathematics from an international perspective. Champaign, IL: Stipes. Mitchell, J.H., Hawkins, E.F., Jakwerth, P.M,. Stancavage, F.B., & Dossey, J.A. (1999). Student work and teacher practices in mathematics (NCES 1999–453). Washington, DC: National Center for Educational Statistics. Available: http://nces.ed.gov/spider/webspider/1999453.shtml. [July 10, 2001]. Miller, D., & Kelley, M. (1991). Interventions for improving homework performance: A critical review. School Psychology Quarterly, 6, 174–185. Mosteller, F., Light, R.J., & Sachs, J.A. (1996). Sustained inquiry in education: Lessons from skill grouping and class size. Harvard Educational Review, 66, 797–842. National Council of Teachers of Mathematics. (1974, December). 
NCTM Board approves policy statement on the use of minicalculators in the mathematics classroom. NCTM Newsletter, 11, 3. National Research Council. (1999a). Global perspectives for local action: Using TIMSS to improve U.S. mathematics and science education. Washington, DC: National Academy Press. Available: http://books.nap.edu/catalog/9605.html. [July 10, 2001]. National Research Council. (1999b). Improving student learning: A strategic plan for education research and its utilization. Washington, DC: National Academy Press. Available: http://books.nap.edu/catalog/6488.html. [July 10, 2001]. Nunes, T., & Moreno, C. (1998). Is hearing impairment a cause of difficulties in learning mathematics? In C.Donlan (Ed.), The development of mathematical skills (pp. 227–254). East Sussex, UK: Psychology Press. Oakes, J. (1985). Keeping track: How schools structure inequality. New Haven: Yale University Press. Oakes, J., Gamoran, A., & Page, R.N. (1992). Curriculum differentiation: Opportunities, outcomes, and meanings. In P.W.Jackson (Ed.), Handbook of research on curriculum (pp. 570–608). New York: Macmillan. Porter, A. (1993). School delivery standards. Educational Research, 22, 24–30. Resnick, L., & Omanson, S. (1987). Learning to understand arithmetic. In R.Glaser (Ed.), Advances in instructional psychology (vol. 3, pp. 41–95). Hillsdale, NJ: Erlbaum. OCR for page 313 Adding + It Up: Helping Children Learn Mathematics Ruthven, K. (1996). Calculators in mathematics curriculum: The scope of personal computational technology. In A.J.Bishop, K Clements, C.Keitel, J.Kilpatrick, & C. Laborde (Eds.), International handbook of mathematics education (pp. 435–468). Dordrecht, The Netherlands: Kluwer. Ruthven, K. (1998). The use of mental, written and calculator strategies of numerical computation by upper primary pupils within a “calculator-aware” curriculum. British Educational Research Journal, 24, 21–42. Schmidt, W.H., McKnight, C.C., & Raizen, S.A. (1997). 
A splintered vision: An investigation of U.S. science and mathematics education. Dordrecht, The Netherlands: Kluwer. Sharan S. (1980). Cooperative learning in small groups: Recent methods and effects on achievement, attitudes, and ethnic relations. Review of Education Research, 50, 241– 271. Shavelson, R.J., & Stern, P. (1981). Research on teachers’ pedagogical thoughts, judgments, decisions, and behavior. Review of Educational Research, 51, 455–498. Shuard, H. (1992). CAN: Calculator use in the primary grades in England and Wales. In J.T.Fey and C.R.Hirsch (Eds.), Calculators in mathematics education (1992 Yearbook of the National Council of Teachers of Mathematics, pp. 33–45). Reston, VA: NCTM. Siegler, R.S., & Stern, E. (in press). Conscious and unconscious strategy discoveries: A microgenetic analysis. Journal of Experimental Psychology: General. Silver, E.A., & Stein, M. (1996). The QUASAR Project: The “revolution of the possible” in mathematics instructional reform in urban middle schools. Urban Education, 30, 476–521. Slavin, R.E. (1980). Cooperative learning. Review of Educational Research, 50, 315–342. Slavin, R.E. (1983). Cooperative learning. New York: Longman. Slavin, R.E. (1987). Ability grouping and student achievement in elementary schools: A best-evidence synthesis. Review of Educational Research, 57, 293–336. Slavin, R.E. (1993). Ability grouping in the middle grades: Achievement effects and alternatives. Elementary School Journal, 93, 535–552. Slavin, R.E. (1995). Cooperative learning: Theory, research, and practice (2nd ed.). Boston: Allyn & Bacon. Smith, J.P., III. (1996). Efficacy and teaching mathematics by telling: A challenge for reform. Journal for Research in Mathematics Education, 27, 387–402. Sophian, C. (1997). Beyond competence: The significance of performance for conceptual development. Cognitive Development, 12, 281–303. Steele, C. (1992, April). Race and the schooling of black Americans. Atlantic Monthly, pp. 68–78. 
Stein, M.K., Grover, B.W., & Henningsen, M. (1996). Building student capacity for mathematical thinking and reasoning: An analysis of mathematical tasks used in reform classrooms. American Educational Research Journal, 33, 455–488. Stevenson, H.W., & Stigler, J.W. (1992). The learning gap: Why our schools are failing and what we can learn from Japanese and Chinese education. New York: Simon & Schuster. Stigler, J.W., & Hiebert, J. (1999). The teaching gap: Best ideas from the world’s teachers for improving education in the classroom. New York: Free Press. Thompson, A.G., & Briars, D.J. (1989). Assessing students’ learning to inform teaching: The message in NCTM’s Evaluation Standards. Arithmetic Teacher, 37 (4), 22–26. OCR for page 313 Adding + It Up: Helping Children Learn Mathematics Thompson, P.W., & Lambdin, D. (1994). Research into practice: Concrete materials and teaching for mathematical understanding. Arithmetic Teacher, 41, 556–558. Thornton, C.A., & Bley, N.S. (1994). Windows of opportunity: Mathematics for students with special needs. Reston, VA: National Council of Teachers of Mathematics. Wood, T. (1999). Creating a context for argument in mathematics class. Journal for Research in Mathematics Education, 30, 171–191. Wearne, D., & Hiebert, J. (1988). A cognitive approach to meaningful mathematics instruction: Testing a local theory using decimal numbers. Journal for Research in Mathematics Education, 19, 371–384.
Mplus Discussion >> Is type=complex or type=multilevel the appropriate type of analysis?

Alex Zablah posted on Thursday, March 10, 2005 - 11:31 am

Hi, hope all is well. I was hoping to get your insight/input on the following: I have survey data (6 dependent variables) for 300 customers. These 300 customers belong to one of 10 different providers. Hence, the cases are not IID. I also have survey data from these 10 providers (1 independent variable). I want to test the relationship between the provider-level independent variable and three of the customer-level dependent variables using structural equation modeling (in Mplus 3.10). Given that my between-level sample size is only 10, multilevel modeling is out of the question. Right? If I disaggregate the provider-level data (i.e., assign the same value of the independent variable to each customer who shares the same provider), would it be appropriate to run the SEM model (n=300) using the type=complex design? I'd appreciate any feedback/suggestions you could offer. Best regards,

bmuthen posted on Thursday, March 10, 2005 - 4:27 pm

Yes, I think 10 clusters makes 2-level modeling not perform well. Unfortunately, our simulations suggest that type=complex also needs more clusters than 10 - at least 20. So perhaps the only way is to view "provider" as a fixed effect instead of a random effect and use 9 provider dummy variables as covariates. Bayesian analysis using priors is the only way I have seen that attempts to deal with such a small number of clusters.

Herb Marsh posted on Monday, April 08, 2013 - 6:33 pm

Why is it no longer possible to use Type=complex to get correct standard errors for analyses done at level 1 when there are three levels? For example, results at the student level when students are nested within classes, and classes are nested within schools.

Linda K. Muthen posted on Tuesday, April 09, 2013 - 9:43 am

COMPLEX TWOLEVEL, THREELEVEL, and COMPLEX THREELEVEL are all available.
There have been no changes. I'm not sure I understand your question.

Herb Marsh posted on Tuesday, April 09, 2013 - 9:14 pm

Linda: Here is what I did and the error message that I got. I recall that it was possible to have two cluster variables when analyses were done only at level 1, but maybe I am mistaken. In any case, why is it apparently not allowed?

cluster is ID_Schl Id_Class;
ANALYSIS: type = complex;
ESTIMATOR = MLR;

*** ERROR in VARIABLE command
Two cluster variables are allowed for TYPE=TWOLEVEL COMPLEX. Only one cluster variable is allowed for TYPE=COMPLEX (single level). Limit on the number of cluster variables reached.

Linda K. Muthen posted on Wednesday, April 10, 2013 - 6:54 am

We've never allowed more than one cluster variable with TYPE=COMPLEX. You would need to use TWOLEVEL COMPLEX to handle two cluster variables.

Melissa Kull posted on Monday, April 22, 2013 - 7:45 am

I have a three-level model with observations (level 1) nested within individuals (level 2) nested within cities (level 3). The data I am using requires sampling weights, and we only have observations at 2 timepoints. From Chapter Nine of the user's guide (p. 252) I am not sure whether I should treat this model as TYPE=TWOLEVEL (and treat our two observation points as "time") or TYPE=THREELEVEL and treat our first level as cross-sectional. In addition, the outcome we are using is a count variable, and it is my understanding that users can't use sampling weights with count variables in TYPE=THREELEVEL? Many thanks,

Linda K. Muthen posted on Monday, April 22, 2013 - 12:09 pm

I would treat this as a TWOLEVEL analysis with data in the wide format. THREELEVEL is not available for count variables.
anonymous posted on Wednesday, May 08, 2013 - 12:04 pm

I'm aware that TYPE=COMPLEX with the cluster option adjusts for non-independence in terms of the chi-square statistic and the standard errors of the model, but not the parameter estimates (parameter estimates are adjusted for with multilevel modeling). Is it that the TYPE=COMPLEX and cluster option only adjusts the parameter estimates' significance, but not their magnitude? I'm wondering whether it is appropriate to estimate a model of treatment effects involving children nested within schools using the TYPE=COMPLEX and cluster option.

Linda K. Muthen posted on Wednesday, May 08, 2013 - 12:17 pm

Parameter estimates are adjusted if the WEIGHT option is used. There is no difference between COMPLEX and TWOLEVEL in this regard.

Elina Dale posted on Sunday, May 12, 2013 - 9:11 am

Dear Dr. Muthen, I am wondering about the difference between TYPE=COMPLEX and TYPE=TWOLEVEL analysis of SEM in Mplus. In traditional regression modeling, there is a distinction between population-average and subject-specific models. Population-average models such as GEE describe the covariance among clustered observations, whereas subject-specific (SS)/hierarchical models explain the source of this covariance. So, the coefficients are interpreted differently: a PA model estimates the difference in Y between group A with X and group B without X; the SS model estimates the expected change in an individual's probability of Y given a change in X. I am wondering: if I use TYPE=COMPLEX in my SEM because I have clustered data, is the coefficient from my structural model - the effect of treatment X on a latent factor F - interpreted as PA or SS? In other words, with the COMPLEX specification, do we have a population-average model or a random-effects model in Mplus? Do we need to specify TWOLEVEL to have a subject-specific interpretation of coefficients? Thank you!

Linda K. Muthen posted on Monday, May 13, 2013 - 8:46 am

Subject-specific refers to random coefficients.
You would need to use TYPE=TWOLEVEL RANDOM with random coefficients. TYPE=COMPLEX adjusts the standard errors for non-independence of observations.

Elina Dale posted on Monday, May 13, 2013 - 10:34 am

So, TYPE=COMPLEX is a marginal model? Are the coefficients interpreted as population-average, as in the marginal models explained in papers by Zeger et al., 1988? It would be helpful to get a bit more explanation as to how some of the Mplus specifications relate to more widespread/traditional types of analyses. Thank you!

Bengt O. Muthen posted on Monday, May 13, 2013 - 8:46 pm

A single-level regression model (linear or logistic) is a "widespread/traditional type of analysis" - if you have a regression model and use TYPE=COMPLEX, you are doing regression analysis and you get your SEs adjusted for complex survey data features. So the interpretation is the usual one for regression modeling. Same for factor analysis. If you have two-level data and don't do TYPE=TWOLEVEL but do TYPE=COMPLEX, you get a so-called "aggregated" model, using terms from the well-known complex survey data literature such as the 1989 Analysis of Complex Surveys book edited by Skinner, Holt, and Smith. GEE is a limited-information estimator, not a full-information maximum-likelihood estimator. You can see the relationship between GEE estimation and the closely related limited-information WLSMV estimation in Mplus in the paper on factor analysis on our website: Muthén, B., du Toit, S.H.C., & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Unpublished technical report.

Elina Dale posted on Tuesday, May 14, 2013 - 10:14 am

Thank you, Dr. Muthen! So does this mean that TYPE=COMPLEX specifies design-based or model-based analysis? It is vital for me to understand what this specification implies, so please forgive my persistence. Skinner et al. distinguish (A) design vs.
(B) model-based approaches to analysis. Within the model-based approach we have (a) aggregated (marginal) and (b) disaggregated (random-effects) models. "A basic distinction is between design-based and model-based inference.... Aggregated analysis may therefore alternatively be referred to as marginal modelling and the distinction between aggregated and disaggregated analysis is analogous, to a limited extent, to the distinction between population-averaged and subject-specific analysis, widely used in biostatistics." Thank you again!

Bengt O. Muthen posted on Tuesday, May 14, 2013 - 6:13 pm

You may be interested in the chapter: Muthén, B. & Satorra, A. (1995). Complex sample data in structural equation modeling. Sociological Methodology, 25, 267-316. The labels you refer to are not always clear-cut (at least not to me), so I'll describe what we do instead. With TYPE=COMPLEX we compute complex survey SEs using the Huber-White sandwich estimator. The parameters are the usual single-level parameters. The fact that we can also handle SE calculations based on replicate weights might qualify us for the design-based camp; I am not sure about these distinctions. TYPE=COMPLEX does an aggregated analysis when data are hierarchical (say, twolevel) because it doesn't model parameters on both levels. In contrast, TYPE=TWOLEVEL or TYPE=COMPLEX TWOLEVEL does a disaggregated analysis. I discuss the difference in the above chapter in terms of factor analysis. You can also read more about what we do by reading the papers under our Complex Survey Data section:

Bengt O. Muthen posted on Tuesday, May 14, 2013 - 6:23 pm

Another useful book is the 2003 Chambers & Skinner Wiley book.

Elina Dale posted on Wednesday, May 15, 2013 - 5:37 am

Thank you so much, Dr. Muthen! Really appreciate your response; I think I understand now. I have also started looking at the Chambers & Skinner 2003 book. Will check out your paper for which you sent me the link.
Elina Dale posted on Monday, August 12, 2013 - 10:06 am

Dear Dr. Muthen, I have re-read your paper (B. Muthen & A. Satorra, 1995) on complex sample data in SEM, and I still have clarifying questions on the procedure used by Mplus when I specify "COMPLEX" in the Analysis. On pp. 281-288, you describe the aggregated analysis, which Chambers & Skinner (2003) say "may alternatively be referred to as marginal modeling". I would greatly appreciate it if you could clarify: 1) whether the aggregated approach as described in Muthen & Satorra (1995) is a model- or design-based approach to inference, because it can be used in either according to Chambers & Skinner (2003); 2) whether the "COMPLEX" specification is a model-based aggregated approach. Last question: typically, as you say, design-based analysis uses weights in parameter estimation. I wonder if weights are required when using "COMPLEX". Thank you!

Bengt O. Muthen posted on Tuesday, August 13, 2013 - 10:30 am

1) I see it as a model-based approach. 2) I see COMPLEX as a model-based aggregated approach. Weights are not required when using COMPLEX. For instance, there may be just clustering.

Elina Dale posted on Wednesday, August 14, 2013 - 8:11 am

Thank you, Dr. Muthen! This is very helpful.

Christoph Weber posted on Friday, February 14, 2014 - 7:10 am

Dear Dr. Muthen! I am analysing three-level data (students, classes, schools). The question is whether a school system reform has an effect on the achievement of students. 8th-grade students were tested before the reform was implemented, and then 8th-grade students after the reform (using the same schools). I'm using a three-level model with "reform (0/1)" on the class level and estimate the effect on achievement (class level). Is this correct? Further, I wonder why I get a different estimate for "achievement ON reform" when I use type = complex (cluster = class)? The estimate for type = complex is equal to the simple mean difference between reform = 0 and reform = 1 using SPSS.
christoph weber

Bengt O. Muthen posted on Friday, February 14, 2014 - 11:57 am

You say that this is a school system reform; isn't your "reform" variable a school-level variable and not a classroom-level variable? The 3-level model results won't agree with Type=complex with cluster=class because the latter takes only classroom clustering into account. You would do better with Type=Complex Twolevel and define 2 cluster variables: school and classroom (see UG).

Christoph Weber posted on Friday, February 14, 2014 - 1:55 pm

Thanks, I treat reform as a class-level variable because we have a kind of trend analysis. The same schools were tested two times (8th graders in 2008 and 8th graders in 2012); thus there is variation of "reform" within the school clusters. I thought that taking the complex design into account (complex or multilevel) just affects the SE, not the estimates. Sorry, what does UG mean?

Christoph Weber posted on Friday, February 14, 2014 - 2:01 pm

I get it - user's guide.

Christoph Weber posted on Tuesday, February 25, 2014 - 6:10 am

One more question. I read a pdf, "Mplus Short courses topic 7 multilevel modeling with ...". There is a random-effects ANOVA example comparing type = twolevel with type = complex. Both models yield the same mean and SE. When I compare the two types with my data, I get different means. Is this because of different cluster sizes? The ANOVA example uses data with equal cluster size.

Christoph Weber

Bengt O. Muthen posted on Tuesday, February 25, 2014 - 12:20 pm

Yes, my results were for equal cluster sizes.

Christoph Weber posted on Tuesday, February 25, 2014 - 12:32 pm

Will twolevel and complex only yield the same results with equal cluster sizes?

Linda K. Muthen posted on Tuesday, February 25, 2014 - 1:16 pm
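The equal-versus-unequal cluster-size point in the last exchange can be illustrated outside Mplus. The toy Python sketch below is my own addition (invented data, not Mplus output): it compares the pooled sample mean, which is what an aggregated TYPE=COMPLEX-style analysis estimates, with the unweighted mean of the cluster means, which is closer to what a two-level estimate targets when clusters are weighted more equally (an actual two-level ML estimate is a precision-weighted compromise between the two, so treat this only as intuition). With equal cluster sizes the two summaries coincide; with unequal sizes they generally differ.

```python
# Toy illustration (invented data): why pooled and cluster-based
# means agree only when cluster sizes are equal.

def pooled_mean(clusters):
    """Mean over all observations, ignoring clustering."""
    values = [x for c in clusters for x in c]
    return sum(values) / len(values)

def mean_of_cluster_means(clusters):
    """Unweighted average of the per-cluster means."""
    return sum(sum(c) / len(c) for c in clusters) / len(clusters)

# Equal cluster sizes: the two summaries coincide.
equal = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
assert pooled_mean(equal) == mean_of_cluster_means(equal)  # both 3.5

# Unequal cluster sizes: they generally differ, because the pooled
# mean weights each cluster by its size.
unequal = [[1.0], [3.0, 4.0], [5.0, 6.0, 7.0]]
print(pooled_mean(unequal))            # 26/6 = 4.333...
print(mean_of_cluster_means(unequal))  # (1 + 3.5 + 6) / 3 = 3.5
```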
Hightstown SAT Math Tutor

Find a Hightstown SAT Math Tutor

...My students usually feel much better about the algebra, and then they have a solid foundation for future math courses/proficiency tests. "Pre-algebra" skills are the foundation for all high school mathematics. This grouping includes numerical operations, basic geometry, identifying patterns, dat...
10 Subjects: including SAT math, geometry, algebra 2, algebra 1

...I look forward to working with you/your child! Other hobbies: I am an avid traveler, and enjoy spending time with my family and friends. I am currently a tutor for a student struggling in Algebra I. The student's grade significantly improved upon beginning tutoring.
26 Subjects: including SAT math, chemistry, physics, calculus

I am a graduate of The College of New Jersey with a BA in Mathematics and Secondary Education, and am a certified K-12 teacher of mathematics. I have experience with test prep including: SSAT, SAT, ACT, and Praxis Mathematics Content Knowledge. I have tutoring experience with students as young as 5 years old ...
11 Subjects: including SAT math, calculus, geometry, algebra 1

...He has been a full-time teacher for 2 years, including 2 years as a substitute classroom teacher in Middlesex County for grades K-12. Uri also tutored Spanish to children and adults of all levels, as well as other subjects at Middlesex County College. He loves to cook, watch documentaries, listen to all kinds of music, and travel.
19 Subjects: including SAT math, Spanish, algebra 2, statistics

...Though I majored in biology and economics, my specialty for years has been tutoring Math -- for high school classes, SAT prep, and GMAT prep. With my help, my students have increased their scores on standardized tests, not only because they better understand the material but also because they be...
14 Subjects: including SAT math, geometry, statistics, Microsoft Excel
Dissecting trapezoids into triangles of equal area

[Lightly edited for copy and proper formatting of mathematics. -- Pete L. Clark]

The Background: Let $T$ be a trapezoid. Sherman Stein, using valuation theory, showed that if $T$ is dissectible into $n$ triangles of equal area, then $\frac{n}{r+1}$, where $r$ is the ratio of the parallel sides, is an algebraic integer. See Monsky and Jepsen, Constructing equidissections for certain classes of trapezoids, Discrete Mathematics, 308(23) (2008), 5672-5681. In particular, $r$ is algebraic. If $r$ is in $\mathbb{Q}$, a diagonal of $T$ divides $T$ into two triangles of commensurable area, so $T$ has an "equidissection". Stein showed that if $r$ is a root of a degree $2$ element of $\mathbb{Q}[z]$ [i.e., the univariate polynomial ring over $\mathbb{Q}$ -- PLC] with both roots positive, then $T$ is equidissectible. Using ideas of Stein and Jepsen I showed that if $r$ is a root of a degree $3$ or $4$ element of $\mathbb{Q}[z]$ with all roots in the open right half-plane, then once more $T$ is equidissectible. (It's not clear that my proof can be adapted to degree greater than $4$.) In contrast, whenever the algebraic $r$ has a conjugate over $\mathbb{Q}$ that's not in the open right half-plane, nothing is known about the equidissectibility of $T$. Stein conjectured that when $r$ has a $\mathbb{Q}$-conjugate not in this half-plane, then $T$ is not equidissectible. I ask:

The Question: Is it true that if $r$ has a $\mathbb{Q}$-conjugate not in the open right half-plane, then $T$ cannot be dissected into triangles of equal area? (For no such $r$ has an equidissection of $T$ or a proof of non-equidissectibility been found.)

geometry polynomials algebraic-number-theory open-problem

Is this an open problem? If so, it should have an "open problem" tag. – Victor Protsak May 29 '10 at 4:25

Prof. Monsky is the founder of this subject area: see math.uga.edu/~pete/Monsky.pdf.
So, yes, if he's asking the question, I think we may assume it is an open problem. – Pete L. Clark May 29 '10 at

What is the meaning of the z in Q[z]? It doesn't seem to be defined anywhere. – Gerry Myerson May 29 '10 at 6:55

@Gerry: Apparently, the fact that Q isn't defined anywhere is OK by you? :) – Victor Protsak May 29 '10 at 7:12

Micro-comment-question: do we need the axiom of choice to be able to extend the p-adic valuation on the rationals to the reals? – Dror Speiser May 29 '10 at 8:46
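The half-plane condition in the question is easy to probe numerically for a candidate $r$ given a rational polynomial it satisfies. The sketch below is my own addition, not part of the original post; it uses a simple Durand-Kerner iteration to approximate all complex roots, so it is a heuristic check (floating-point, simple roots assumed), not a proof that the conjugates lie where claimed.

```python
def poly_roots(coeffs, iters=200):
    """Approximate all complex roots of a polynomial given by
    coefficients [a_n, ..., a_1, a_0] (highest degree first),
    via the Durand-Kerner iteration. Heuristic numerical method."""
    lead = coeffs[0]
    monic = [c / lead for c in coeffs]
    n = len(monic) - 1

    def p(x):
        acc = 0j
        for c in monic:
            acc = acc * x + c
        return acc

    # Standard non-real starting guesses: powers of a seed point.
    roots = [(0.4 + 0.9j) ** k for k in range(1, n + 1)]
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            denom = 1 + 0j
            for j, s in enumerate(roots):
                if i != j:
                    denom *= (r - s)
            new.append(r - p(r) / denom)
        roots = new
    return roots

def all_conjugates_in_open_right_half_plane(coeffs, tol=1e-9):
    """True if every root has strictly positive real part."""
    return all(z.real > tol for z in poly_roots(coeffs))

# z^2 - 3z + 1: roots (3 +/- sqrt(5))/2, both positive reals,
# so this r satisfies Stein's degree-2 criterion.
print(all_conjugates_in_open_right_half_plane([1, -3, 1]))   # True
# z^2 - 2: roots +/- sqrt(2); the conjugate -sqrt(2) fails.
print(all_conjugates_in_open_right_half_plane([1, 0, -2]))   # False
```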
Tanking need help with advanced topic of damage reduction. [Archive] - TankSpot 02-14-2010, 10:21 PM I have a decent understanding of how the attack table for melee swings works assuming what i read was correct it is a roll system that checks down the list going normal hit not sure where crit would fall in that list but it is not important since you should be crit immune. The thing that I do not understand and i can not really find anywhere is what happens when it is determined that a hit is going to happen what is the order of how damage is reduced and I also would like to know how different abilities interact with each other. Example 1 lets say that a mob/boss hits for 100,000 damage on a melee swing and i have 30981 armor which tooltip reads as physical damage taken reduced by 67.04% I am a warrior in defensive stance it reads i get 10% reduced damage. How much damage would this melee swing hit me for? Example 2 same as above 100,000 physical damage 30981 armor in defensive stance and then i have the buffs of blessing of sanc 3% reduced damage and Inspiration 10% reduced physical damage how much would the swing hit for? Example 3 same as above except add in talented shield wall or pain suppression for another 40% reduced damage. Example 4 slightly different this time it is a 100,000 hit of spell damage lets say for simplicity that is is not resistable as a warrior you get the base 10% reduced damage from D stance then with talents it is another 6% so 16% magical damage reduction then add in 3% reduction from blessing of sanc and 40% from pain suppression/shield wall. What would the damage be on this attack? The last question i have is where do absorbs from sacred shield, power word shields, and divine aegis come in to absorb damage?
Distributive Property

Let's review the properties using variables instead of numbers:

[Property table for Addition and Multiplication - image not recovered]

The properties above do NOT work with subtraction and division.

Here's a handy visual aid on reviewing the properties:

[Visual aid - image not recovered]

Distributive Property

This one is very important when working with algebraic expressions. It basically says this:

a(b + c) = ab + ac

However, the distributive property does NOT work when the variables inside the parentheses are being multiplied or divided.

Let's go through an example very carefully:

4(3x + 1)

By applying the distributive property, we can multiply each term inside the parentheses by 4. This is called "distributing":

4(3x + 1) = 4(3x) + 4(1) = 12x + 4

Since 12x and 4 are not like terms, this is as far as we can go with the problem.

Well, what about subtraction? Let's look at a subtraction problem using two different methods.

Method 1 - Leave as Subtraction: distribute and keep the minus sign, so a(b - c) = ab - ac.

Method 2 - Add the Negative: first rewrite the subtraction as adding a negative, then distribute, so a(b + (-c)) = ab + a(-c) = ab - ac.

Both methods give the same result.
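A quick numeric spot-check of the identities above (my own addition, in plain Python): if the distributed form really equals the original, the two sides must agree for every value of x we plug in.

```python
def check_identity(lhs, rhs, samples=(-3.0, -1.0, 0.0, 0.5, 2.0, 10.0)):
    """Return True if lhs(x) == rhs(x) at every sample point."""
    return all(abs(lhs(x) - rhs(x)) < 1e-9 for x in samples)

# 4(3x + 1) distributes to 12x + 4.
assert check_identity(lambda x: 4 * (3 * x + 1), lambda x: 12 * x + 4)

# Both subtraction methods agree: e.g. 5(x - 2) = 5x - 10 either way.
assert check_identity(lambda x: 5 * (x - 2), lambda x: 5 * x - 10)
assert check_identity(lambda x: 5 * (x + (-2)), lambda x: 5 * x - 10)

# But distributing does NOT apply to a product inside the parentheses:
# 4(3x * 1) is 12x, not (4 * 3x) * (4 * 1) = 48x.
assert not check_identity(lambda x: 4 * (3 * x * 1),
                          lambda x: (4 * 3 * x) * (4 * 1))
print("all checks passed")
```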
John has ducks and pigs in one pen, with a total of 40 legs in the pen. If John has 3 times as many pigs as ducks, how many pigs does he have? - Homework Help - eNotes.com

Let the number of ducks in John's pen be x. The number of pigs is then 3*x = 3x. A duck has two legs, and a pig has four. Therefore, the total number of legs in his pen is:

x*2 + 3x*4 = 2x + 12x = 14x

By the condition of the problem, 14x = 40, so x = 40/14 = 20/7 = 6 6/7.

This is a fraction, and John cannot have a fractional number of living beings, be it a duck or a pig, in his pen. So there must be something wrong with the statement of the problem.

(Assuming instead that John has 3 times as many ducks as pigs, the total number of legs in his pen would be 3x*2 + x*4 = 6x + 4x = 10x, and the problem would then require 10x = 40, so x = 4. Number of ducks = 12, number of pigs = 4.)

That is what I thought. I thought I had something wrong, or the instructor just wanted to throw me off with partial pigs and ducks.

Maybe, a little short-legged? :)
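The two readings can be brute-force checked in a few lines of Python (my own addition, just confirming the arithmetic in the answer above): only the "three times as many ducks as pigs" reading yields whole animals.

```python
def solutions(total_legs, pigs_per_duck=None, ducks_per_pig=None):
    """Whole-number (ducks, pigs) pairs with 2*ducks + 4*pigs legs,
    subject to one of the two ratio readings of the problem."""
    found = []
    for ducks in range(total_legs // 2 + 1):
        for pigs in range(total_legs // 4 + 1):
            if 2 * ducks + 4 * pigs != total_legs:
                continue
            if pigs_per_duck is not None and pigs != pigs_per_duck * ducks:
                continue
            if ducks_per_pig is not None and ducks != ducks_per_pig * pigs:
                continue
            found.append((ducks, pigs))
    return found

# Reading 1: three times as many pigs as ducks -- no whole-number solution.
print(solutions(40, pigs_per_duck=3))  # []
# Reading 2: three times as many ducks as pigs -- 12 ducks, 4 pigs.
print(solutions(40, ducks_per_pig=3))  # [(12, 4)]
```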