Re: st: Fixed effects of cost function with interactions

From: Maarten Buis <maartenbuis@yahoo.co.uk>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Fixed effects of cost function with interactions
Date: Fri, 16 Nov 2007 08:28:42 +0000 (GMT)

--- Anthony Musonda <3kids.wkm@gmail.com> wrote:
> I am trying to estimate a fixed effects (or even random effects)
> model of the cost function in order to derive marginal cost. How does
> Stata proceed when there are interaction terms in the model? Any
> advice on how to deal with this aspect is welcome.

Interaction terms are just variables. We (humans) think they are special because we know they mean something different, but Stata just treats them as variables, and that is fine: computers do computing, humans do interpreting, and for computing you don't need to distinguish between interaction terms and other types of variables. The ATS website has resources on how to interpret results with interaction terms, see for instance:

Hope this helps,
Maarten

Maarten L. Buis
Department of Social Research Methodology
Vrije Universiteit Amsterdam
Boelelaan 1081
1081 HV Amsterdam
The Netherlands

visiting address:
Buitenveldertselaan 3 (Metropolitan), room Z434
+31 20 5986715

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
It's time for the annual New Year posting. Last year, I talked about the potential power of affirmations -- in the sense of thinking about concrete goals you hope to accomplish for the year. This year I'll work in the same theme, but I'll be a bit more specific, and speak specifically to graduate students. Now is a good time for graduate students to ask themselves some important questions:

1) If you're planning on graduating this coming semester (or the end of summer), what do you need to do to get there? Do you have a timeline -- with a few extra weeks of padding for unforeseen delays or writer's block? Are you on top of your job search? What can you do to make the thesis and job search go more smoothly? (Hint -- how can you make things easier on your advisor, faculty readers, letter-writers, etc.?) It's a challenging and scary time -- but you can and will do it, and graduating will feel good. Very, very good.

2) If you're not graduating this year, are you ready to commit to graduating next year? Now is a great time to talk to your advisor and plan what needs to be done to get you out the door within 18 months. (Almost surely your advisor will be thrilled to see such initiative -- even if they don't think you're ready...) There's an old saying that there are two types of students -- those who are 6 months from graduating and those who are 2+ years from graduating. The point is that it's mindset -- when you start thinking you're ready to graduate, you'll start moving toward graduating. Are you ready to start thinking that way?

3) If you're early on in the process, what can you do to make this year a good one? Perhaps you can start a collaboration with someone somewhat outside your area -- generally a good experience -- through a class project this semester or by just talking to people about what they're up to. Figure out ways you can talk to people besides your advisor, like going to seminars or serving as a "grad student host" for visitors.
Also, now is the time to be applying for summer internships.

4) Finally, if you've found yourself struggling with graduate school, and wondering if you've made the right choice, now is the time for some careful thinking. Think about what you want, possibly talk about it with friends and colleagues -- and then talk to your advisor. Maybe you and your advisor can come up with a plan to make things better. Or maybe you're better off leaving (with a useful Master's degree), and now is the right time to look for jobs and finish off remaining projects before the end of the academic year. It can be much better to ponder this all now rather than wait until summer nears and realize your options are limited.

Whatever you're up to, I wish you all a happy, healthy, successful 2010.

Posting will be light over winter break, in part because so little is going on, but more because I'm busy working on papers for upcoming deadlines. I'll describe a fun little nugget, which is being submitted to ISIT, joint work with Adam Kalai and Madhu Sudan. (The starting point for the result was my giving my survey talk on deletion channels at MSRNE... always nice when something like that works out!) The goal was to get better bounds on the capacity of the deletion channel. (If you haven't been reading the blog long enough: in a binary deletion channel, n bits are sent, and the channel deletes each bit independently with probability p. So, for example, the message sent might be 00110011 and the received message could be 010011 if the 2nd and 4th bits were deleted.) It was known that for deletion probability p the capacity was at least 1-H(p). We show an essentially tight upper bound of 1-(1-o(1))H(p), where the o(1) term goes to 0 as p goes to 0. Here's the draft (a full two weeks before the submission deadline!). In English, the binary deletion channel looks very much like a standard binary symmetric error channel when p is small. This is not the case when p is larger.
(Here's a link to my survey on deletion channels and related channels for more info.)

Here's an attempt at describing the intuition. Let's first look back at the error channel. Suppose we had a code of N codewords each with n bits and a perfect decoder for at most pn errors. Then here's a funny way I could store data -- instead of storing n bits directly, I could store a codeword with pn errors that I introduce into it. To get back my data, I decode. Notice that when I decode, I automatically also determine the locations where the errors were introduced. This gives me N*{n choose pn} \approx N 2^{nH(p)} possibilities, each of which I can use to represent a different data sequence. Since I'm only storing n bits, I had better have N 2^{nH(p)} <= 2^n, or else I've found a way to store more than n bits of data in n bits. So (log N)/n, or the rate, satisfies (log N)/n <= 1-H(p). This is a different way of thinking about the Shannon upper bound on capacity. Of course, it's sweeping away details -- like what if you don't have a perfect decoder -- but it gives the right sort of insight into the bound.

Now consider the deletion channel, and apply the same sort of reasoning. Suppose that we had a decoder for the deletion channel, and further, we had a method of determining which bits were deleted given the received string. Then we could use it to store data in the same way as above and obtain a similar 1-H(p) upper bound on the rate. Now we have to worry about the details -- like what to do when you don't have a perfect decoder. But more importantly, we have to show that, with non-trivial probability, you can use the decoder to guess which bits were deleted. (This is where we use the fact that p is going to 0.) The details work out. Surprisingly, this doesn't seem to have been known before. The best (only) upper bound I know for this case previously was the work by Fertonani and Duman, mentioned in this blog post.
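The counting step in the sketch above can be written compactly (this just restates the informal argument, with H the binary entropy function; it is not the paper's formal proof):

```latex
% N codewords, each combined with one of binom(n, pn) error patterns,
% must fit into the 2^n possible n-bit strings:
N \binom{n}{pn} \;\approx\; N \, 2^{nH(p)} \;\le\; 2^{n}
\quad\Longrightarrow\quad
\frac{\log_2 N}{n} \;\le\; 1 - H(p),
\qquad H(p) = -p\log_2 p - (1-p)\log_2(1-p).
```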
Their upper bound as p goes to 0 was of the form 1 - cp for some constant c, so it was different in kind. Slowly but surely, the mysteries of the deletion channel become, well, less mysterious.

Mikkel Thorup sent in the following guest post:

Text-book algorithms at SODA

This is a pitch for promoting good text-book algorithms at SODA. Erdos promoted book proofs, but book algorithms are in some sense far more important, in that they could end up being understood and used by every programmer with a degree in CS. This can yield a huge external impact, and I do think we do ourselves and the world a big favor by taking this work seriously. Instead of taking papers on this theme (which would, incidentally, be a great idea), perhaps the area could serve as the basis for a lighter afternoon entertainment session, providing cool stuff that one could take home and show students.

To me the greatest text-book algorithms work well in both theory and practice. They have a cool non-obvious idea that will impress the students, yet, after you first get the idea, they are simple to understand and implement. Unfortunately, to get into a top conference, it is best if you also have 5-10 pages worth of complications. Typically the complications are not themselves that interesting, but they are helpful in making the paper look hard enough; otherwise some referees are bound to complain about lack of meat (fat?). Note that by insisting on complications, we narrow the contributors to the small pond of theoreticians. Simple efficient algorithms are sought by every smart practitioner, and it is no coincidence that many of the elegant algorithms theorists analyze are discovered outside theory.
On the other hand, I do think theorists are the ones in the best position to develop great simple algorithms, thanks to our fundamental understanding, and I think we should celebrate it when it happens. To be more clear, let me present a somewhat controversial example of what I consider a great text-book contribution, which is buried in a paper [DHKP97] about very different issues. The example is universal hashing (low collision probability), where the new scheme is simpler and much faster than the classic method.

Suppose we want universal hashing of w-bit keys to u-bit indices. The classic solution is to pick a prime p > 2^w and a random a in [p], and then use the hash function

h_a(x) = ((ax) mod p) mod 2^u --- math terminology.

The alternative from [DHKP97] is to let b be a random odd w-bit number and use

h_b(x) = ((bx) mod 2^w) div 2^(w-u) --- math terminology.

To prove that this is universal is a nice text-book exercise using the fact that odd numbers are relatively prime to powers of two. There may be no obvious advantage of one scheme over the other in an established theory model like the unit-cost RAM, but the difference is major in reality. Implementing the scheme from [DHKP97] is extremely simple with standard portable C code. We exploit that C multiplication (*) of unsigned w-bit numbers is done mod 2^w, and get

h_b(x) = (b*x) >> (w-u) --- C code.

By comparison, the implementation of the classic scheme is problematic. One issue is that the mod operation is very slow, and has been so for more than 30 years. Already when Carter and Wegman introduced universal hashing at STOC'77, they were aware of the issue. They suggested using Mersenne primes (p = 2^i - 1), allowing us to bypass mod p with some faster bit-trick operations. Even using that, we still have the issue that the classic scheme requires us to compute ax exactly, and ax has more than 2w bits.
Since w-bit multiplication is mod 2^w, we need 6 w-bit multiplications to compute ax in its full length, and that is even ignoring the issue of mod p. If 2w-bit multiplication is available, two multiplications suffice, but these are often more expensive than w-bit multiplications.

The impact of the scheme from [DHKP97] is big in that it unites theory and practice in what is probably the world's most common non-trivial inner loop. The classic prime-based scheme is so slow that practitioners have come up with all kinds of alternatives that are not even remotely universal, e.g., some combination of shifts and xors, hence no entropy. The new scheme is faster than all these hacks, so now we can convince practitioners to use real universal hashing, often leading them to better, more robust results.

Is the above theory? It certainly involves a mathematical observation about the use of relative primality, and I like to think of algorithms as math with mathematically well-defined impact on computing. To get a mathematically well-defined measure, we can, for example, look at how many operations are needed in C, which has been used for efficient portable code since 1972. A theoretical irritant is that we have a whole array of measures, e.g., depending on how we count 2w-bit multiplications and mod operations. However, the new scheme is clearly better: the variations only affect exactly how much better it is --- some factor between 3 and 15.

It is a bit interesting to contrast the above situation with, say, the more robust world of polytime approximation with, say, a very well-defined difference between a worst-case factor 2 and 3. Translating to reality, if the polytime algorithm is superquadratic, it is typically too slow to finish on large-scale problems. Moreover, one often gets much better results using simple heuristics with bad worst-case behavior.
For the hashing we are not just talking worst case but all cases (the same instructions are performed on all keys), and I have never tried a real computer on which the new scheme didn't gain at least a factor of 4 in speed compared with the classic scheme tuned with Mersenne primes. On top of that, the new scheme is much simpler to implement. While this difference is very convincing for anyone experienced with efficient programming, it may be a bit hard to appreciate for "THEORY of Computing" conferences like STOC/FOCS. However, I see algorithms and SODA more as a "Theory of COMPUTING" with a scope closer to the reality of computing, hence with a bigger interest in text-book algorithms that unite theory and practice. Highlighting such simple, but incredibly useful, practical computing algorithms would both increase the impact of SODA (and arguably theory more generally) and provide a useful distinguishing characteristic for the conference.

[DHKP97] M. Dietzfelbinger, T. Hagerup, J. Katajainen, and M. Penttonen. A reliable randomized algorithm for the closest-pair problem. J. Algorithms, 25:19-51, 1997.

I'm spending the morning as part of the committee for a thesis defense over in Europe. I'm watching the talk over Skype; we're using a conference call for sound (and as a backup); I have copies of the slides on my laptop. Is it the same as being there? No. But it's probably 90+% as good in terms of the experience of listening to the defense. It saves me a full day of plane travel (never mind other time overhead), it saves the institution multiple thousands of dollars in airfare and other travel expenses, and if you happen to feel that flying has negative externalities due to greenhouse gas emissions, then it's good for other reasons as well. If the timing had worked out better, I might have made the trip, and arranged to give some talks to amortize the "travel cost" over more than the defense. But I'm glad to avoid the travel -- thank goodness for modern technology.
And as much as I enjoyed the NSDI PC meeting Monday, if it hadn't been down the block at MIT, I would have enjoyed it much less. (Indeed, the location of the meeting was one of the incentives to accept the PC invitation.) I'm still waiting for the tech to get good enough that we can have an online PC meeting (with video, sound, etc. to mimic the discussions of a face-to-face meeting) that we don't have to travel to.

My post today is live-blogging the NSDI PC meeting -- with a delay for security purposes, of course. My take on the reviews (and from past experience) is that the NSDI PC is a very, very tough committee. People are looking for exciting and novel ideas, with clear and detailed experiments demonstrating real-world benefits (which usually means comparing against a real implementation from previous work). It's hard to get all that into one paper -- and to get everything so that the reviewers are all happy. And once in a while you can run into a reviewer like me, who expects your "good idea" to also have a suitable mathematical formulation when that makes sense. (If you're claiming to optimize something, I -- and others -- want a clear notion of what you're trying to optimize, and why your idea should help optimize it.) So it's not surprising that, 4th paper in from the top, we've already hit our first paper where we're deferring our decision instead of accepting, and we're already getting some detailed discussions on whether a paper is good enough to accept. We'll have to speed up to make dinner....

As if to underscore that I did not have a great set of papers, I have just a few that I reviewed in our "DiscussFirst" pile, which takes us through lunch. Good thing I can keep busy with blog entries. And I have a review to write for another conference... My submission appears (tentatively) accepted. Hooray!
For this PC, we're not kicking people out of the room for conflicts -- people are just supposed to keep their mouths shut on papers where they have a conflict. For PC members with papers, however, you get kicked out of the room. So I've just spent a tense 15 minutes or so outside, but I'm happy to see the news once I'm back in. (More on this paper in another post....) Overall, I'd say (as expected) PC papers had no special treatment -- they were as harshly judged as all the other papers.

We're now having an interesting discussion about "experience" papers -- what do you learn after building/running a system for several years? A lot of people really think that having experience papers is a good idea, but there's some discussion of the bar -- what makes such papers interesting, and how interesting should they be? (Good anecdotes, but with quantifiable data to support them?)

We're now about in the middle of the papers we're meant to discuss. Things here could go either way. Lots of technical discussion. As an aside, I can give my point of view on what the "hot topics" are. Data centers seem to be a big topic. There were a number of papers about scheduling/optimizing/choosing the right configuration in cloud computing systems -- how that could be done without making the programmer explicitly figure out what configuration to use (but just give hints, or have the tools figure it out automatically). There's a significant number of EE-focused papers -- essentially, trying to gain performance with some detailed, explicit look at the wireless signal, for example.

Headed to the end, more or less on schedule. Now we're assigning shepherds to all of the papers. Sorry to say, I won't be revealing any information on specific papers -- everyone will find out when the "official" messages go out, or from their own connections on the PC... I think the PC chairs have done a good job pushing us to keep on schedule; I think the discussions have been detailed and interesting.
I think the committee is perhaps overly harsh (a target of 30 papers for about 175 submissions, or 17-18% acceptance; we ended up with 29). But I think we did a good job overall, and have earned our post-meeting dinner out.

Harry Lewis (and Fred Abernathy) take a stand against the Harvard Corporation. Required reading for anyone associated with Harvard, or interested in its current predicaments.

The people at Microsoft Research New England are excited to announce that Boaz Barak will be joining them. I imagine the situation is similar to their hiring of Madhu Sudan. They continue to build up a remarkable collection of researchers.

Harvard's Allston complex is officially suspended.

JeffE doesn't post so much these days, so you might have missed this great post with his favorite useless theorem. (I hope this starts a series of "Favorite Useless Theorems" -- please send him, or me, examples of your own.)

Nela Rybowicz, senior staff editor for IEEE (and who I've dealt with for pretty much all of my Transactions on Information Theory papers), informed me that she'll soon be retiring. She'll be missed.

You may have heard on the news that the TSA (temporarily) put up their Screening Management Standard Operating Procedure on the web. As pointed out on this blog (and elsewhere), they made a small mistake:

"So the decision to publish it on the Internet is probably a questionable one. On top of that, however, is where the real idiocy shines. They chose to publish a redacted version of the document, hiding all the super-important stuff from the public. But they apparently don't understand how redaction works in the electronic document world. See, rather than actually removing the offending text from the document they just drew a black box on top of it. Turns out that PDF documents don't really care about the black box like that and the actual content of the document is still in the file."

Oops.
Apparently, somebody hasn't read Harry Lewis's book Blown to Bits; the start of Chapter 3 discusses several similar cases where somebody thought they had redacted something from a PDF file... but didn't.

Perhaps the most basic breakdown one can make of a college faculty is into the categories Natural Sciences / Social Sciences / Humanities. (Yes, one could naturally think of "engineering" as a fourth category separate from "natural sciences"; feel free to do so.) So here are some basic questions:

1. What is the faculty breakdown at your institution into these three categories?
2. Do you think that breakdown is appropriate?
3. What do you think the breakdown will be (or should be) ten years from now? Twenty years from now?
4. Do you think your administration is thinking in these terms? Do you think they have a plan to get there? (And if so, are they open about it -- is the faculty involved in these decisions?)

[These questions were inspired by a comment of Harry Lewis over at Shots in the Dark.]

No, not for me. But Harvard has announced its plans to encourage faculty to retire. I won't call it an "early retirement" package, since it is for people over 65. Though I suppose that is early for academia. Note that (according to the article) Harvard's Faculty of Arts and Sciences will make 127 offers to its 720 (junior and senior) faculty. So I conclude (as of next year) 1/6 of Harvard's faculty would be over 65. And the article states that the average age of tenured Harvard professors is 56. Can people from elsewhere tell me if this is unusual? Harvard has a reputation for having an older faculty; this seems to confirm it. Does this suggest a lack of planning somewhere along the line? Or is this a good thing? I don't expect drastic changes arising from this plan; it will be interesting to see how many faculty take the offer. In general, however, is it a good idea for a university to "encourage" retirement for older faculty? And if so, what means should they use to do it?
Viewed as an optimization problem, one can ask what the "best" distribution of faculty ages at a university is, and what mechanisms could (and should) be used to maintain that distribution.

As I promised some time back, we now have a paper (up on the arXiv) giving the thresholds for cuckoo hashing, a problem that had been open and was part of my survey (pdf) on open problems in cuckoo hashing. One interesting thing about our paper is that our (or at least "my") take is that, actually, the thresholds were right there, but people hadn't put all the pieces together. The argument is pretty easy to sketch without any actual equations.

In cuckoo hashing, we have n keys, m > n buckets, and each key is hashed to k possible buckets. The goal is to put each key into one of its k buckets, with each bucket holding at most one key. We can represent this as a random hypergraph, with each key being a hyperedge consisting of k buckets, represented by vertices; our goal is to "orient" each edge to one of its vertices. A natural first step is to repeatedly "peel off" any vertex that doesn't already have an edge oriented to it and has exactly one adjacent unoriented edge, and orient that edge toward it. At the end of this process, we're left with what is called the 2-core of the random hypergraph; every remaining vertex has at least 2 adjacent edges. Can we orient the remaining edges of the 2-core? In particular, we're interested in the threshold behavior: as m, n grow large, what initial ratio m/n is required to have a suitable mapping of keys to buckets with high probability? (This corresponds to the memory overhead of cuckoo hashing.)

Now consider the random k-XORSAT problem, where we have m variables and n clauses, where each clause is randomly taken from the possible clauses x_{a_1} + x_{a_2} + ... + x_{a_k} = b_a. Here the x_{a_i} are distinct variables from the m variables, b_a is a random bit, and addition is modulo 2. The question is whether a random k-XORSAT problem has a solution.
Again, we can put this problem in the language of random hypergraphs, with each clause being a hyperedge of k variables, represented by vertices. Again, we can start by orienting edges to vertices as we did with the cuckoo hashing representation; here, a clause has an associated orientation to a variable if that variable is "free" to take on the value needed to make sure that clause is satisfied. Is the remaining formula represented by the 2-core satisfiable? The correspondence between the two problems is almost complete. To finish it off, we notice (well, Martin Dietzfelbinger and Rasmus Pagh noticed, in their ICALP paper last year) that if we have a solution to the k-XORSAT problem, there must be a permutation mapping keys to buckets in the corresponding cuckoo hashing problem. This is because if the k-XORSAT problem has a solution, the corresponding matrix for the graph has full rank, which means there must be a submatrix with non-zero determinant, and hence somewhere in the expansion of the determinant there's a non-zero term, which corresponds to an appropriate mapping. And the threshold behavior of random k-XORSAT problems is known (using the second moment method -- see, for example, this paper by Dubois and Mandler).

While in some sense it was disappointing that the result was actually right there all along, I was happy to nail down the threshold behavior. In order to add something more to the discussion, our paper does a few additional things. First, we consider irregular cuckoo hashing (in the spirit of irregular low-density parity-check codes). Suppose you allow that, on average, your keys have 3.5 buckets. How should you distribute buckets to keys, and what's the threshold behavior? We answer these questions. Second, what if you have buckets that can hold more than one key? We have a conjecture about the appropriate threshold, and provide evidence for it, using a new fast algorithm for assigning keys to buckets (faster than a matching algorithm).
I should point out that since I originally "announced" we had this result, two other papers have appeared that also nail down the cuckoo hashing threshold (Frieze/Melsted and Fountoulakis/Panagiotou). Each seems to have a different take on the problem.
P.Mean: The controversy over standardized beta coefficients

I have a client who is working on her dissertation. I always warn people working on dissertations or theses that they should listen more to what their committee members say about statistics than what I say about statistics. If the committee loves the statistical analysis and I hate it, you still get your degree. If I love the statistical analysis and the committee hates it, you get nothing. For this client, a committee member asked if she could produce standardized beta coefficients in her regression models. I helped her write an argument as to why the unstandardized coefficients are better, but the committee member gave a reasonable counter-argument, so there was no point in persisting. Still, it would be helpful here to outline some of the controversy over standardized beta coefficients.

I first talked about standardized beta coefficients (sometimes the name is shortened to just "beta coefficients") on my old website. That page did not give a critical review of the pros and cons. The Wikipedia page on standardized beta coefficients did have some critical comments:

Advocates of standardized regression coefficients point out that the coefficients are the same regardless of an independent variable's underlying scale of units. They also suggest that this removes the problem of comparing, for example, years with kilograms since each regression coefficient represents the change in response per standard unit (one SD) change in a predictor. However, critics of standardized regression coefficients argue that this is illusory: there is no reason why a change of one SD in one predictor should be equivalent to a change of one SD in another predictor. Some variables are easy to change--the amount of time watching television, for example. Others are more difficult--weight or cholesterol level. Others are impossible--height or age.
-- en.wikipedia.org/wiki/Standardized_coefficient

The Wikipedia page cites a webpage by Jerry Dallal, and that webpage elaborates on some of the criticisms in greater detail. Andrew Gelman has some discussion of standardized beta coefficients on his blog, and he is somewhat supportive. He argues that many coefficients are uninterpretable because the scale is so small:

"What bugs me the most is regression coefficients defined on scales that are uninterpretable or nearly so: for example, coefficients for age and age-squared (In a political context, do we really care about the difference between a 51-year-old and a 52-year-old? How are we supposed to understand the resulting coefficients such as 0.01 and 0.003?) or, worse still, a predictor such as the population of a country (which will give nicely-interpretable coefficients on the order of 10^-8)? I used to deal with these problems by rescaling by hand, for example using age/10 or population in millions. But big problems remained: manual scaling is arbitrary (why not age/20? Should we express income in thousands or tens of thousands of dollars?); still left difficulties in comparing coefficients (if we're typically standardizing by factors of 10, this leaves a lot of play in the system); is difficult to give as general advice."

He suggests that during the model-building phase every continuous variable be centered and then divided by two standard deviations. This places the variable on the same scale as the indicator (0-1) variables for the categorical data.

In my experience, unitless quantities are easier to work with, but they lack an interpretability that only comes when you discuss quantities that have a unit of measurement. For example, it is easy to get someone to specify a small, medium, or large effect size for a power calculation, but this does not lead to a fruitful consideration of clinical importance.
Many meta-analyses will report estimates of continuous effects on a standardized scale, which helps when there is heterogeneity in outcome measures, but the resulting statistics are incapable of addressing the practical impact of a therapy. I have a joke in my book about a store that displays a large sign saying "Major Sale. All prices reduced by one half a standard deviation." I would argue that standardized beta coefficients lead to the same problem. If I know that a standard deviation increase in the weight of a car leads to a 0.25 standard deviation decline in mileage, what does that tell me exactly? I'd rather know that an extra thousand pounds leads to a 5 mile per gallon decline on average. (Sorry to my non-U.S. friends for not using metric here). I'm not saying that you should never use a unitless quantity. I'm just saying that unitless quantities have severe limits on their interpretability.
Kakuro Rules

Kakuro are easy to learn yet highly addictive language-independent logic puzzles, now following in the footsteps of the worldwide Sudoku success. Requiring just pure logic and simple add/subtract calculations, these numerical-crossword puzzles will carry you into a fascinating world of number combinations you never imagined could exist.

Best defined as number crosswords, Kakuro puzzles come in endless variations, are available in almost any grid size and range from very easy to extremely difficult, taking anything from ten minutes to several hours to solve. Make one mistake and you'll find yourself stuck later on as you get closer to the solution... If you like Sudoku and other logic puzzles, you will love Conceptis Kakuro as well!

Classic Kakuro

Each puzzle consists of a blank grid with sum-clues in various places. The object is to fill all empty squares using numbers 1 to 9 so the sum of each horizontal block equals the clue on its left, and the sum of each vertical block equals the clue on its top. In addition, no number may be used in the same block more than once.

Holey Kakuro, Round Kakuro, Multi Kakuro, Diamond Kakuro and Heart Kakuro

These variants follow the same rules as Classic Kakuro.

Kakuro Magic Blocks

The secret to solving Kakuro puzzles is learning how to use magic blocks: those special situations where only a single combination of numbers can fit into a block of a given length. For example, if 6 is the sum-clue of a block of three squares then the block must consist of the numbers 1+2+3, but not necessarily in this order. When spotting a magic block, the Magic Blocks table can be helpful to identify which unique numbers must be used in that block. The only thing left is to find out in which order the numbers should be placed.

This Kakuro Magic Blocks table shows all the magic block possibilities which may occur in Kakuro puzzles.
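The magic-block idea lends itself to enumeration. The sketch below (an illustration, not Conceptis code) lists every (sum, length) clue that admits exactly one set of distinct digits 1-9, which is precisely what a magic-blocks table contains.

```python
from itertools import combinations

def digit_combos(total, length):
    """All sets of distinct digits 1-9 of the given length summing to total."""
    return [c for c in combinations(range(1, 10), length) if sum(c) == total]

# A "magic block" is a (sum, length) clue with exactly one possible digit set.
magic = {(s, n): digit_combos(s, n)[0]
         for n in range(2, 10)
         for s in range(1, 46)
         if len(digit_combos(s, n)) == 1}

print(magic[(6, 3)])  # the example from the text: a 3-square block summing to 6
```

For instance, a 3-square block with clue 6 must be {1, 2, 3}, while a clue of 8 over 3 squares is not magic (both 1+2+5 and 1+3+4 work).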
Mplus Discussion >> LCA questions

Ali posted on Thursday, January 30, 2014 - 6:13 pm

I generated two groups of data: C1~N(0,1) and C2~N(0,1), under a 21-item condition. Of the 21 items, 6 are invariant items. In each group there are 1000 people. After I ran the LCA model, I have trouble knowing whether C1 is assigned to #1 or #2 based on the Mplus results. For example, if I set C1 to belong to 1 and C2 to belong to 2 in the generated data, how do I know whether, after running LCA, Mplus will assign C1 as 1 or 2? Another question: in the Mplus results, how should I interpret the thresholds? Are they similar to item difficulty in IRT? In addition, is there any default in the LCA model in Mplus? Thank you.

Bengt O. Muthen posted on Friday, January 31, 2014 - 2:36 pm

I don't understand your question. You first talk about C1 and C2 having the same univariate distribution and then you talk about C1 and C2 as if they are categorical. See the Topic 2 handout and video on our website for how to interpret thresholds and relate to IRT. The default LCA model in Mplus is the standard LCA model of the literature. See also UG examples in Chapter 7.

Ali posted on Friday, January 31, 2014 - 4:31 pm

I am sorry to confuse you. I generated the two groups of data. One group's ability distribution is N(0,1), and the other group's ability distribution is N(0.5,1). In this data, I set the lowest ability's membership as 1, and the highest ability's membership as 2. So I know the real membership in the generated data. I want to know what percentage of the membership will be assigned correctly in Mplus, so I need to compare the data output with my generated data. In the data output, the value of membership is either 1 or 2, but I have trouble knowing which value is assigned to the lowest ability.

Bengt O. Muthen posted on Friday, January 31, 2014 - 6:16 pm

Are you saying that you generate an IRT model with different latent ability distribution means in two groups and then you analyze with LCA to try to recover the group membership? Or are you not using LCA but factor mixture modeling?

Ali posted on Monday, February 03, 2014 - 9:39 am

Yes, I was generating an IRT model with different ability distribution means in two groups and using LCA to find the correct group membership. In the generated data, I have the real membership in each group. So now I am trying to find out how well LCA will classify the membership correctly.

Bengt O. Muthen posted on Tuesday, February 04, 2014 - 2:44 pm

So in your second step - not knowing the group membership - it sounds like you are saying that you use LCA, not factor mixture modeling. Note that LCA with m classes usually recovers factor analysis (IRT) with m-1 factors. It seems like you should instead use UG ex 7.17 to recover your unknown groups. You get the most likely class membership if you request cprobs in the Savedata command (see UG).

Ali posted on Tuesday, February 04, 2014 - 3:41 pm

Thank you! Sorry, I still have a question. I have simulated 10 data sets with an IRT model with different ability distributions, and I have the real membership in the generated data; I then ran LCA 10 times. By using the command "SAVEDATA: SAVE=CPROB", the output shows the probability of a person belonging to class 1 or class 2. But how can I tell whether class 1 in the Mplus output corresponds to class 1 in the generated data? I mean, if I assign class 1 as 1 in my generated data, how do I know whether class 1 will be assigned 1 or 2 in the Mplus output? Are there any parameter estimates in Mplus that I could compare with the generating values?

Bengt O. Muthen posted on Wednesday, February 05, 2014 - 2:26 pm

You have to infer which class it is by comparing the estimated means/probabilities of the observed variables to those that generated the data. But, again, I am not sure that applying an LCA model to data generated by a multiple-group IRT model is a good idea - you need 2 classes to capture the IRT ability factor and then you need 2 more classes to capture the two groups; it might be hard to sort things out from those 4 classes.

Ali posted on Wednesday, February 05, 2014 - 8:01 pm

Thank you for your suggestion. I tried to use the probabilities of the items, but I find it's not easy to match the item difficulties in the generated data. For example, the Mplus results show:

Latent Class 1
  Item 1: Category 1 0.105, Category 2 0.895
  Item 2: Category 1 0.350, Category 2 0.650
Latent Class 2
  Item 1: Category 1 0.342, Category 2 0.658
  Item 2: Category 1 0.670, Category 2 0.330

And I set item 1 to have the same item difficulty, -1.5, in group 1 and group 2, while item 2's difficulty is -1 and 1 in group 1 and group 2, respectively. However, I could not tell the real membership from that.

Also, why does Mplus estimate thresholds in LCA? From the LCA formula, it seems there are no parameters for thresholds.

Bengt O. Muthen posted on Thursday, February 06, 2014 - 2:11 pm

I think your difficulties are related to my earlier statement: "But, again, I am not sure that applying an LCA model to data generated by a multiple-group IRT model is a good idea - you need 2 classes to capture the IRT ability factor and then you need 2 more classes to capture the two groups; it might be hard to sort things out from those 4 classes." Instead of LCA, I think you should use the model of ex 7.17 that I mentioned. All Mplus models with categorical outcomes use threshold parameters. See the handouts and videos for Topic 2 and Topic 5 on our website.

Ali posted on Sunday, February 09, 2014 - 11:01 am

In the LCA output, does the following result provide class 1's mean? And does the mean of class 2 default to 0?

Categorical Latent Variables

Also, I tried example 7.17 and I am confused by the code in the model part:

f BY y1-y5;

What does [f*1] mean? Thank you.

Linda K. Muthen posted on Monday, February 10, 2014 - 3:37 pm

The mean is the logit for the probability of being in class 1. For k classes, k-1 logits are estimated. [f*1] means that the mean of f is free with a starting value of 1.

yuki toyota posted on Tuesday, March 11, 2014 - 3:44 am

Basic question

Hello, I am trying to analyse my data with the LPA method. My study outline is (1) to examine how many latent classes of emotional intelligence (EI) appear in the LPA model, and (2) to compare the differences in mental health (such as depression and burnout) across the detected EI classes. So far I managed to do (1), and the results said that the 3-class solution is the best. But I don't know how to conduct (2). Although some info told me that it can be carried out by ANOVA in SPSS, I don't understand how it is possible. I will attach my program. Could someone help me?

FILE IS "F:\Latent profile analysis\EIrawdata.txt";
NAMES ARE v1 v2 v3 v4;
USEVARIABLES ARE v1 v2 v3 v4;
CLASSES = c(3);
Series is v1(1) v2(2) v3(3) v4(4);
CROSSTABS(ALL) SAMPSTAT STDYX MODINDICES(ALL) TECH3 TECH11 TECH14;

Bengt O. Muthen posted on Tuesday, March 11, 2014 - 12:04 pm

You can use Auxiliary DCAT/DCON. See the User's Guide.
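As a side note on the matching problem discussed in this thread: comparing the estimated class probabilities with the generating values can be automated by trying every labelling and keeping the closest match. The sketch below is illustrative Python with made-up profile numbers (not Mplus syntax, and not output from this thread).

```python
import numpy as np
from itertools import permutations

def match_classes(est_profiles, true_profiles):
    """Map estimated class labels to true group labels by choosing the
    permutation that minimizes the distance between item-probability
    profiles. This resolves the 'label switching' ambiguity: class 1 in
    the output need not be group 1 in the generated data."""
    k = len(true_profiles)
    best = min(permutations(range(k)),
               key=lambda p: sum(np.abs(est_profiles[p[g]] - true_profiles[g]).sum()
                                 for g in range(k)))
    return {est: true for true, est in enumerate(best)}

# Hypothetical example: category-2 probabilities for 2 classes on 2 items.
true_p = np.array([[0.90, 0.65],   # generating values for group 1
                   [0.67, 0.33]])  # generating values for group 2
est_p = np.array([[0.67, 0.33],    # estimated "Latent Class 1"
                  [0.90, 0.65]])   # estimated "Latent Class 2"

print(match_classes(est_p, true_p))  # -> {1: 0, 0: 1}: est. class 2 is group 1
```

With the most-likely-class variable saved via CPROB, the same mapping lets one compute the percentage of correctly classified cases against the known membership.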
Curvature Problem (What Am I Doing Wrong?)

October 21st 2012, 07:44 PM  #1  (joined Sep 2012, Corvallis, OR)

Find the curvature "k" of r(t) = <-t + sin t, 1 + cos t>.

I'm using the formula: (magnitude of the cross product of a and v) / (magnitude of velocity)^3

I got:
v(t) = <-1 + cos t, -sin t>
a(t) = <-sin t, -cos t>
magnitude of cross product = 1 + cos t
magnitude of velocity = sqrt(2 - 2cos t)
k = (1 + cos t) / ((2 - 2cos t)^(3/2))

This answer is not correct. What am I doing wrong?

October 21st 2012, 09:28 PM  #2

For curvature use:

$\kappa = \frac{x'(t)\, y''(t) - x''(t)\, y'(t)}{\left(x'(t)^2 + y'(t)^2\right)^{3/2}} = -\frac{1}{2\sqrt{2 - 2\cos(t)}}$

October 21st 2012, 09:59 PM  #3  (Super Member, joined Jun 2012)

Re: Curvature Problem (What Am I Doing Wrong?)

Check your cross product again.
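The reply's closed form can be sanity-checked numerically. The sketch below uses finite differences rather than symbolic algebra; note the cross-product term comes out as cos(t) - 1 (so its magnitude is 1 - cos t, which is where the original attempt went wrong).

```python
import math

def curvature(t, h=1e-5):
    """Unsigned curvature of r(t) = (-t + sin t, 1 + cos t) via
    numerical derivatives: |x'y'' - x''y'| / (x'^2 + y'^2)^(3/2)."""
    x = lambda u: -u + math.sin(u)
    y = lambda u: 1 + math.cos(u)
    d  = lambda f, u: (f(u + h) - f(u - h)) / (2 * h)       # first derivative
    dd = lambda f, u: (f(u + h) - 2 * f(u) + f(u - h)) / h**2  # second derivative
    xp, yp, xpp, ypp = d(x, t), d(y, t), dd(x, t), dd(y, t)
    return abs(xp * ypp - xpp * yp) / (xp**2 + yp**2) ** 1.5

# Closed form from the thread (taking the magnitude): 1 / (2*sqrt(2 - 2*cos t))
t = 1.0
closed = 1 / (2 * math.sqrt(2 - 2 * math.cos(t)))
print(curvature(t), closed)
```

The two values agree to several decimal places at any t where the curve is regular.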
mathematical demonstration of the advantage of keeping betting in a lucky streak [Archive] - Ubisoft Forums

12-18-2010, 07:46 AM

Folklore says that if you are in a winning streak you have to keep betting, while if you are losing it is better to retire. I'll prove this is mathematically true and logical.

Let's analyze the odds of following this strategy in a single game. My strategy is to bet in pairs: on a draw (one win, one lose) I keep gambling; on losing (one lose, one lose) I retire; I only keep gambling on a win (one win, one win) or on a draw.

In this universe the first two possible outcomes are: 00 = win or 11 = lose. 01 or 10 doesn't exist in this universe, because if that were the case I would keep gambling. I bet 0.5 euro in each game, so the first two possible games that exist in the universe of this strategy are: win or lose, 00 or 11. For these two there's a chance of 1/4 each, profit = 0.

The next possible outcomes in this universe are:
ww or wl, with a chance of 1/16, profit 1 euro
then www or wwl, chance 1/64, profit 2 euro
then wwww or wwwl, chance 1/256, profit 3 euro
then wwwww or wwwwl, chance 1/1024, profit 4 euro

These are the only possible outcomes of a single game with this strategy; the rest just don't exist. So now it's just a matter of adding the odds: 1/16 + 2/64 + 3/256 + 4/1024 + ... = 0.109375. So following the strategy of keeping on in a lucky streak and retiring on a bad one gives you a 10% edge. Popular folklore is right on this one.
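The claimed edge can also be checked empirically. The sketch below simulates one reading of the strategy as stated (pairs of fair even-money 0.5 euro bets, retiring after a losing pair, continuing otherwise); the average profit per session is left for the reader to compare against the 10% figure.

```python
import random

def play_session(max_pairs=10_000, stake=0.5):
    """One session of the strategy described above: bet in pairs of fair
    even-money wagers; stop after a losing pair (LL), keep going after a
    winning pair (WW) or a split (WL/LW)."""
    profit = 0.0
    for _ in range(max_pairs):
        a = random.random() < 0.5   # True = win
        b = random.random() < 0.5
        profit += stake * ((1 if a else -1) + (1 if b else -1))
        if not a and not b:         # two losses in a row: retire
            break
    return profit

random.seed(1)
trials = 100_000
mean_profit = sum(play_session() for _ in range(trials)) / trials
print(round(mean_profit, 4))
```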
1-D Velocity Surveys as a Probe of Large Scale Structure

Session 87 - Large Scale Structure. Display session, Friday, January 09, Exhibit Hall

[87.13] 1-D Velocity Surveys as a Probe of Large Scale Structure
H. Feldman (U. Kansas), R. Watkins (Dartmouth College)

We propose a new strategy to probe the power spectrum on large scales using galaxy peculiar velocities. We explore the properties of surveys that cover only two small fields in opposing directions on the sky. Surveys of this type have several advantages over those that attempt to cover the entire sky; in particular, by concentrating galaxies in narrow cones these surveys are able to achieve the density needed to measure several moments of the velocity field with only a modest number of objects, even for surveys designed to probe scales ≳ 100 h⁻¹ Mpc. We construct mock surveys with this geometry and analyze them in terms of the three moments to which they are most sensitive. We calculate window functions for these moments and construct a χ² statistic which can be used to put constraints on the power spectrum. In order to explore the sensitivity of these surveys, we calculate the expectation values of the moments and their associated measurement noise as a function of the survey parameters such as density and depth, and for several popular models of structure formation. In addition, we have studied how well surveys of this type can distinguish between different power spectra and found that cone surveys are as good or better than full-sky surveys in distinguishing between popular cosmological models. We find that a survey with 200-300 galaxy peculiar velocities with distance errors of 15% in two cones with opening angle of ~10° could put significant constraints on the power spectrum on scales of 100-300 h⁻¹ Mpc, where few other constraints exist. We believe that surveys of this type could provide a valuable tool for the study of large scale structure on these scales and present a viable alternative to full sky surveys.
Tesla coil operation A Tesla Coil is an air-cored resonant transformer. It has some similarities with a standard transformer but the mode of operation is somewhat different. A standard transformer uses tight coupling between its primary and secondary windings and the voltage transformation ratio is due to turns ratio alone. In contrast, a Tesla Coil uses a relatively loose coupling between primary and secondary, and the majority of the voltage gain is due to resonance rather than the turns ratio. A normal transformer uses an iron core in order to operate at low frequencies, whereas the Tesla Coil is air-cored to operate efficiently at much higher frequencies. A typical Tesla Coil circuit diagram is shown below. The operation of the Tesla Coil is as follows:- 1. The spark gap initially appears as an open-circuit. Current from the HV power supply flows through a ballast inductor and charges the primary tank capacitor to a high voltage. The voltage across the capacitor increases steadily with time as more charge is being stored across its dielectric. 2. Eventually the capacitor voltage becomes so high that the air in the spark gap is unable to hold-off the high electric field and breakdown occurs. The resistance of the air in the spark gap drops dramatically and the spark gap becomes a good conductor. The tank capacitor is now connected across the primary winding through the spark gap. This forms a parallel resonant circuit and the capacitor discharges its energy into the primary winding in the form of a damped high frequency oscillation. The natural resonant frequency of this circuit is determined by the values of the primary capacitor and primary winding, and is usually in the low hundreds of killohertz. During the damped primary oscillation energy passes back and forth between the primary capacitor and the primary inductor. Energy is stored alternately as voltage across the capacitor or current through the inductor. 
Some of the energy from the capacitor also produces considerable heat and light in the spark gap. Energy dissipated in the spark gap is energy which is lost from the primary tank circuit, and it is this energy loss which causes the primary oscillation to decay relatively quickly with time.

(Spiral diagram caption: I like this spiral diagram because I think it shows how the voltage and current are 90 degrees out of phase. The distance of the dot from the origin represents the amount of energy in the system as the oscillation decays. It also reminds me of a primary coil shape!)

3. The close proximity of the primary and secondary windings causes magnetic coupling between them. The high amplitude oscillating current flowing in the primary causes a similar oscillating current to be induced in the nearby secondary coil.

4. The self capacitance of the secondary winding and the capacitance formed between the Toroid and ground result in another parallel resonant circuit being made with the secondary inductance. Its natural resonant frequency is determined by the values of the secondary inductance and its stray capacitances. The resonant frequency of the primary circuit is deliberately chosen to be the same as the resonant frequency of the secondary circuit so that the secondary is excited by the oscillating magnetic field of the primary.

5. Energy is gradually transferred from the primary resonant circuit to the secondary resonant circuit. Over several cycles the amplitude of the primary oscillation decreases and the amplitude of the secondary oscillation increases. The decay of the primary oscillation is called "Primary Ringdown" and the start of the secondary oscillation is called "Secondary Ringup". When the secondary voltage becomes high enough, the Toroid is unable to prevent breakout, and sparks are formed as the surrounding air breaks down.

("Primary Ringdown" to first primary notch; "Secondary Ringup" to first maximum.)

6. Eventually all of the energy has been transferred to the secondary system and none is left in the primary circuit. This point is known as the "First primary notch" because the amplitude of the primary oscillation has fallen to zero. It is the first notch because the energy transfer process usually does not stop here. In an ideal system the spark gap would cease to conduct at this point, when all of the energy is trapped in the secondary circuit. Unfortunately, this rarely happens in practice.

7. If the spark gap continues to conduct after the first primary notch then energy begins to transfer from the secondary circuit back into the primary circuit. The secondary oscillation decays to zero and the primary amplitude increases again. When all of the energy has been transferred back to the primary circuit, the secondary amplitude drops to zero. This point is known as the "First secondary notch", because there is no energy left in the secondary at this time.

8. This energy transfer process can continue for several hundred microseconds. Energy sloshes between the primary and secondary resonant circuits resulting in their amplitudes increasing and decreasing with time. At the instants when all of the energy is in the secondary circuit, there is no energy in the primary system and a "Primary notch" occurs. When all of the energy is in the primary circuit, there is no energy in the secondary and a "Secondary notch" occurs.

In the animation opposite the front pendulum represents the primary voltage and the rear pendulum represents the secondary voltage. Notice how the amplitude of each pendulum changes as energy is transferred back and forth from one to the other. A mechanical model such as this can easily be built and provides a good analogy to the electrical case. It really does work! The "notches" are clearly visible when one pendulum appears to momentarily stop.

9.
Each time energy is transferred from one resonant circuit to the other, some energy is lost in either the primary spark gap, RF radiation or due to the formation of sparks from the secondary. This means that the overall level of energy in the Tesla Coil system decays with time. Therefore both the primary and secondary amplitudes would eventually decay to zero. 10. After several transfers of energy between primary and secondary, the energy in the primary will become sufficiently low that the spark gap will cool. It will now stop conducting at a primary notch when the current is minimal. At this point any remaining energy is trapped in the secondary system, because the primary resonant circuit is effectively "broken" by the spark gap going 11. The energy left in the secondary circuit results in a damped oscillation which decays exponentially due to resistive losses and the energy dissipated in the secondary sparks. Secondary Ringdown after spark gap stops conducting 12. Since the spark gap is now open-circuit the tank capacitor begins to charge again from the HV supply, and the whole process repeats again. It should be noted that this repeating process is an important mechanism for the generation of long sparks. This is because successive sparks build on the hot ionised channels formed by previous sparks. This allows sparks to grow in length over several firings of the system. In practice the whole process described above may take place several hundred times per second. But how does the Tesla Coil produce such a massive secondary voltage ? Now for the maths bit… The terrific voltage gain of the Tesla Coil comes from the fact that the energy in the large primary tank capacitor is transferred to the comparatively small stray capacitance of the secondary circuit. 
The energy stored in the primary capacitor is measured in Joules and is found from the following formula:

Ep = 0.5 Cp Vp²

If, for example, the primary capacitor is 47nF and it is charged to 20kV then the stored energy can be calculated:

Ep = 0.5 x 47n x (20000)² = 9.4 Joules

If we assume there are no losses in the transfer of energy to the secondary winding, the theory of conservation of energy states that this energy will be transferred to the secondary capacitance Cs. Cs is typically around 25pF. If it contains 9.4 Joules of energy when the energy transfer is complete, we can calculate the voltage:

Es = 0.5 x 25p x Vs² = 9.4
Vs² = 9.4 / (0.5 x 25p)
Vs = 867 kV

The theoretical voltage gain of the Tesla Coil is actually equal to the square root of the capacitance ratio:

Gain = sqrt (Cp / Cs)

The voltage gain can also be calculated in terms of the inductances. For the Tesla Coil to work, the resonant frequencies of the primary circuit and the secondary circuit must be identical, i.e. Fp must equal Fs:

Fp = 1 / (2 pi sqrt (Lp Cp)) = Fs = 1 / (2 pi sqrt (Ls Cs))

Therefore: Lp Cp = Ls Cs

The ratio of the inductances is the inverse of the ratio of the capacitances, and therefore the voltage gain is as follows:

Gain = sqrt (Ls / Lp)

All of the above equations calculate the theoretical maximum voltage gain. In practice the voltage at the top of the secondary will never get quite this high because of several factors:

1. The above equations assume that all of the energy from the primary capacitor makes the journey into the secondary capacitor. In practice some energy is lost due to resistance of the windings of both coils.

2. A significant proportion of the initial energy is lost as light, heat, and sound in the primary spark gap.

3. The primary and secondary coils act like antennas and radiate a small amount of energy in the form of radio waves.

4. The formation of corona or arcs from the Toroid to nearby grounded objects ultimately limits the peak secondary voltage.
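The worked example above (47 nF charged to 20 kV discharging into roughly 25 pF) can be reproduced in a few lines. This is a minimal sketch of the ideal, lossless calculation only; a real coil falls short for the reasons listed.

```python
import math

def tesla_gain(Cp, Vp, Cs):
    """Ideal (lossless) Tesla-coil voltage gain from energy conservation:
    0.5*Cp*Vp^2 = 0.5*Cs*Vs^2  =>  Vs = Vp * sqrt(Cp/Cs)."""
    Ep = 0.5 * Cp * Vp**2          # energy stored in the primary capacitor
    Vs = math.sqrt(2 * Ep / Cs)    # peak secondary voltage, no losses
    return Ep, Vs

# Numbers from the worked example above.
Ep, Vs = tesla_gain(Cp=47e-9, Vp=20e3, Cs=25e-12)
print(f"primary energy = {Ep:.1f} J")        # 9.4 J
print(f"peak secondary = {Vs/1e3:.0f} kV")   # ~867 kV (ideal, no losses)
```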
The waveform opposite shows how the secondary voltage drops drastically when an arc forms between the toroid and a nearby grounded object. This is an approximate representation of a real waveform observed whilst a running coil discharged to an earthed target 12 inches away. The secondary voltage rises to around 300kV in only 3 cycles, This is sufficient to breakdown the 12 inch gap, and the arc which is formed loads the secondary reducing the voltage. The size of the Toroid (or discharge terminal) is very important. If it is small, it will theoretically result in a higher secondary voltage due to its lower capacitance (Cs). However, in practice its small radius of curvature will cause the surrounding air to breakdown prematurely at a low voltage before the maximum level is reached. A large toroid theoretically results in a lower peak secondary voltage (due to more Cs) but in practice gives good results because its larger radius of curvature delays the breakdown of the surrounding air until a higher voltage is reached. It is possible to fit a very large toroid to a Tesla Coil which actually prevents the surrounding air from breaking down. In this instance no power is dissipated in the form of secondary sparks, and energy from the tank capacitor is dissipated between the spark gap, stray resistances, and RF radiation. More theory of operation Click here for the next section on Quenching, Coupling and Frequency Splitting.
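The tuning condition Fp = Fs described above can also be checked numerically. The component values below are illustrative assumptions for the sketch, not values taken from this article.

```python
import math

def resonant_frequency(L, C):
    """Natural frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(L * C))

# Illustrative (assumed) component values.
Lp, Cp = 50e-6, 47e-9        # primary: 50 uH with a 47 nF tank capacitor
Cs = 25e-12                  # secondary stray capacitance
Ls = Lp * Cp / Cs            # choose Ls so that Lp*Cp = Ls*Cs, i.e. Fp = Fs

fp = resonant_frequency(Lp, Cp)
fs = resonant_frequency(Ls, Cs)
print(f"Fp = {fp/1e3:.1f} kHz, Fs = {fs/1e3:.1f} kHz")  # equal by construction
print(f"gain = sqrt(Ls/Lp) = {math.sqrt(Ls / Lp):.1f}")  # same as sqrt(Cp/Cs)
```

For these assumed values the shared resonant frequency lands in the low hundreds of kilohertz, consistent with the range quoted earlier in the article.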
handyCalc: More calculator than you can handle

Apr 12 AT 2:28 AM, Guest Blogger

There are a lot of calculator apps in the Android Marketplace and a lot of them are pretty good, but while most are specialized tip calculators, handyCalc comes to the table with pretty much everything under the sun.

Right off the bat, handyCalc scores points for offering a tutorial when you first run the application. This demo walks you through some of the UI gestures, solving expressions and using trigonometric and other advanced functions. You find out that it can solve linear and non-linear equations, draw graphs of functions and even let you define your own functions to use in advanced calculations. As a bonus, handyCalc also acts as an almost universal converter (currency, measurement units, etc). It can subtract dates to find the number of days between them and help with statistical calculation by automatically suggesting the average of a sum. It even allows saving your work to a file for later use.

The good:
• Very sleek user interface
• Full featured scientific calculator
• Can solve fractions as integer or decimal and trigonometric problems as degree or radian
• Draws the graph of any function
• Supports saving and loading files
• Free in the Android Marketplace

Improvements I would like to see:
• There's a glitch when writing extremely long fractions (they go off screen) that I hope gets fixed before my next math exam

The Bottom Line

handyCalc is indeed very handy and I'd recommend it to anyone in need of a scientific calculator that does most of the job for you.

Note: This review was submitted by Alexandru-Ioan Dobrinescu as part of our app review contest.
Basic properties of definite integrals

Definite integrals have many properties that make their calculation much simpler. Most properties of definite integrals are also properties of limits, since an integral is a limit of a summation:

∫[a,b] f(x) dx = lim (n→∞) Σ (i=1 to n) f(x_i) Δx

Properties of Definite Integrals

The Definite Integral as the Area of a Region
If f is continuous and nonnegative on the closed interval [a, b], then the area of the region bounded by the graph of f, the x-axis, and the vertical lines x = a and x = b is given by:
Area = ∫[a,b] f(x) dx

Two Special Definite Integrals
If f is defined at x = a, then:
∫[a,a] f(x) dx = 0
If f is integrable on [a, b], then:
∫[b,a] f(x) dx = -∫[a,b] f(x) dx

Additive Integral Property
If f is integrable on the three closed intervals determined by a, b, and c, then:
∫[a,b] f(x) dx = ∫[a,c] f(x) dx + ∫[c,b] f(x) dx

Preservation of Inequality
If f is integrable and nonnegative on the closed interval [a, b], then:
0 ≤ ∫[a,b] f(x) dx
If f and g are integrable on [a, b] and f(x) ≤ g(x) for all x in [a, b], then:
∫[a,b] f(x) dx ≤ ∫[a,b] g(x) dx

Other Properties of Definite Integrals
If f is integrable on [a, b] and c is a constant, then the function c*f is integrable on [a, b], and:
∫[a,b] c*f(x) dx = c ∫[a,b] f(x) dx
If f and g are integrable on [a, b], then the function f + g is integrable on [a, b], and:
∫[a,b] (f(x) + g(x)) dx = ∫[a,b] f(x) dx + ∫[a,b] g(x) dx
• Note: this property holds true with subtraction as well as addition, and can also be extended to cover any finite number of functions (such as f(x) + g(x) + h(x) + ...).

1. The integral of y = 3x + 4 from 2 to 2 is 0, because the integral of any function over an interval whose two bounds are the same number is zero.
2. If the integral from 1 to 4 of the function y = x² is 21, then the integral from 4 to 1 of x² is -21, because when the bounds of an integral are switched, the integral has the same magnitude but the opposite sign (positive changes to negative, or negative changes to positive).
3. If the integral of y = 2x + 3 from 1 to 3 is 14 and the integral of the same function from 3 to 5 equals 22, then the integral from 1 to 5 of the function y = 2x + 3 equals 14 + 22, or 36, because the two integrals (from 1 to 3 and from 3 to 5) make up the whole integral (from 1 to 5).
Conversely, since the integral of y = 4x³ - 2x from 2 to 4 equals 228, you know that the sum of the integrals from 2 to 3 and from 3 to 4 of the same function must also equal 228.
4. To calculate longer, more complicated integrals, you can take the original function, make each term a separate function, find the integral of each term, and then add them back together. In this example, the integral of y = 6x³ - 3x² from 1 to 3 can be rewritten as the integral of y = 6x³ from 1 to 3 plus the integral of y = -3x² from 1 to 3. Since the integral of y = 6x³ from 1 to 3 equals 120 and the integral of y = -3x² from 1 to 3 equals -26, the integral of y = 6x³ - 3x² from 1 to 3 is 120 + (-26), or 94.
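The additive and sign-reversal properties can be checked numerically with a midpoint Riemann sum (a quick sketch of mine; the function y = 4x³ - 2x and the interval [2, 4] are taken from example 3):

```python
def integrate(f, a, b, n=100_000):
    """Approximate the definite integral of f on [a, b] with a midpoint Riemann sum."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: 4 * x**3 - 2 * x

# Additive property: the integral over [2, 4] equals the sum over [2, 3] and [3, 4].
whole = integrate(f, 2, 4)                        # exact value is 228
parts = integrate(f, 2, 3) + integrate(f, 3, 4)

# Switched bounds flip the sign: the integral from 4 to 2 is -(integral from 2 to 4).
reversed_sign = -integrate(f, 2, 4)
```

Both `whole` and `parts` come out within numerical error of the exact value 228, and `reversed_sign` is its negative.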
which number represents the tens place in the number 743.25

In the number 743.25 we have 5 in the hundredths place, 2 in the tenths place, 3 in the ones place, 4 in the tens place and 7 in the hundreds place, so the digit in the tens place is 4. Hopefully that helps you understand. Please add a comment if it is not clear.
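If you want to check a place value programmatically, here is a small Python sketch (my own, not part of the original answer):

```python
def digit_at(n, place):
    """Digit of n at the given power of ten:
    place=2 -> hundreds, place=1 -> tens, place=0 -> ones,
    place=-1 -> tenths, place=-2 -> hundredths."""
    shifted = n / 10**place   # shift the wanted digit into the ones position
    return int(shifted) % 10

number = 743.25
tens = digit_at(number, 1)      # 4
tenths = digit_at(number, -1)   # 2
```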
ODE's and Projectiles - Please Help

May 12th 2008, 06:40 AM  #1  Junior Member  May 2008

Hi All, I need help with this question badly. I am stuck on one part, so I cannot finish the question. Here it is:

"A 1-tonne (2000 kg) object is launched vertically with an initial velocity of 30,000 m/s at time t = 0 and at height y = 0. The only forces working on the object are gravity and air friction (i.e. the object will land where it was launched from). Since such high speeds and large distances are involved, the model must incorporate air friction and variations in gravity. Air resistance is a force that is proportional to the square of the speed, while gravity follows an inverse square law. A difficulty is that the air resistance coefficient varies with the height of the object, because the atmosphere's density changes. The density at y = 0 is 1.275 kg/m^3. The density at y = 6000 m is half the density at y = 0."

There was some more information given in the question, which enabled me to determine the air density for any height y, which is (I think)
D(y) = 1.275e^(-(ln2/6000)*y)
Also, I have data regarding the drag force for certain speeds at certain air densities, which I have given to you in a jpeg below. My task is to determine how high the object will go, and what is the time taken to return to earth.

I know that the coefficient of air resistance is a function of air density, and that the frictional coefficient is related to air density by a quadratic relationship. My first question is: how do I determine this relationship from the data given, and once I have that (let's call it k), would the formula for drag force be F(v) = kv^2, where v = velocity of the object?

My second question is about the variations in gravity. Can anyone explain to me how the inverse-square law would apply to this scenario? I do have a model for the variations in gravity, but I don't think it is right. What formula would I use in determining the force opposing the direction of the initial velocity (i.e. -mg; since gravity isn't constant, what formula do I use for g)?

If anybody could help me, I would really appreciate it. Thank you

Last edited by woody198403; May 12th 2008 at 07:02 AM.

May 12th 2008, 01:43 PM  #2  Grand Panjandrum  Nov 2005

You don't have to solve this explicitly; a rough calculation shows that the projectile will be out of the atmosphere with a residual velocity in excess of 11,000 m/s, which is escape velocity, so it will never come down. (Consider the change in velocity due to 10 s of maximum drag and how high the projectile will be at that point.)

May 12th 2008, 03:39 PM  #3  Junior Member  May 2008

Oh whoops. I'm sorry, but the initial velocity is 3,000 m/s, not 30,000 m/s.
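Once k(y) and g(y) are modelled, the flight can be integrated numerically. Below is a rough sketch of mine (not from the thread): it assumes inverse-square gravity g(y) = g0*(R/(R+y))^2 with Earth radius R, a drag coefficient proportional to the density D(y) given in the post, and a placeholder sea-level value k0 that would really have to be fitted from the drag-force table.

```python
import math

m = 2000.0      # kg, mass as stated in the problem
g0 = 9.81       # m/s^2 at y = 0
R = 6.371e6     # m, Earth radius (my assumption)
k0 = 0.4        # N s^2/m^2 at sea level -- placeholder, not from the data table

def g(y):
    """Inverse-square gravity: g0 scaled by (R/(R+y))^2."""
    return g0 * (R / (R + y))**2

def k(y):
    """Drag coefficient proportional to air density D(y) = 1.275 e^{-(ln2/6000) y}."""
    return k0 * math.exp(-(math.log(2) / 6000.0) * y)

def simulate(v0, dt=0.01):
    """Forward-Euler integration of m v' = -m g(y) - k(y) v |v|.
    Returns (maximum height, total flight time until y returns to 0)."""
    y, v, t, ymax = 0.0, v0, 0.0, 0.0
    while y >= 0.0:
        drag = k(y) * v * abs(v)      # always opposes the direction of motion
        a = -g(y) - drag / m
        v += a * dt
        y += v * dt
        t += dt
        ymax = max(ymax, y)
    return ymax, t
```

For a modest launch speed the result is sensible: drag lowers the apex below the vacuum value v0^2/(2 g0), and the object returns in finite time. (For the corrected 3,000 m/s launch a much smaller dt and the fitted k0 would be needed for accurate numbers.)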
mean square limit

January 23rd 2008, 07:58 PM  #1

I'm spending hours trying to develop and prove the mean-square limit. Does anyone know where I could find an exhaustive and detailed proof online? I would really appreciate it!

Here is the concept: assume the finite time t is sliced into n identical partitions. Define t_j as the time up to the end of the j-th partition:
$<br /> t_j= \frac{jt}{n}<br />$
The mean-square limit says that
$<br /> \lim_{n \rightarrow \infty} E\left[ \left( \sum_{j=1}^n\ (X(t_j)-X(t_{j-1}))^2-t \right)^2 \right] = 0<br />$
where the X's are Brownian motions and E is the expectation. The limit is zero because (developing and resolving) E = O(1/n).

Last edited by CaptainBlack; January 24th 2008 at 05:27 AM.

January 23rd 2008, 08:21 PM  #2  Grand Panjandrum  Nov 2005

Check your brackets, they don't match.
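As a sanity check of the O(1/n) claim (my own sketch, not part of the thread), one can estimate E[(Σ(ΔX)² - t)²] by Monte Carlo. For Brownian motion the increments are independent N(0, t/n) variables, and the exact value of the expectation is 2t²/n, so the estimate should shrink roughly tenfold when n grows tenfold:

```python
import math
import random

def qv_error(n, t=1.0, trials=2000, seed=0):
    """Monte Carlo estimate of E[(sum of squared increments - t)^2]
    for Brownian motion on [0, t] sliced into n identical partitions.
    Theory: the exact value is 2 t^2 / n."""
    rng = random.Random(seed)
    dt = t / n
    total = 0.0
    for _ in range(trials):
        s = sum(rng.gauss(0.0, math.sqrt(dt))**2 for _ in range(n))
        total += (s - t)**2
    return total / trials
```

With t = 1 the estimates come out near 2/n: about 0.2 for n = 10 and about 0.02 for n = 100.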
January 28th 2009, 06:38 AM

Can someone help me? I haven't done any of these before and I need real help.

Given a sequence of n + 1 (n ≥ 1) real numbers {a(0), a(1), ..., a(n)} and a real number c, consider the following algorithm:

p := a(n)
for i from n-1 downto 0 {i.e. i = n-1, ..., 1, 0}

Check which function p(c) is computed for n = 3, 2, 1.
Let f(n) be the number of arithmetic operations needed to execute this algorithm.
Give alfa in f(n) = BIG THETA(n^alfa).
Thank you

January 28th 2009, 10:17 PM

I think the number of operations needed is 2n, right? Or 2n^2?
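The loop body did not survive in the post above. Assuming the standard Horner update p := p*c + a(i) (a guess on my part, but it matches the 2n operation count discussed), a runnable sketch:

```python
def horner(a, c):
    """Evaluate p(c) = a[0] + a[1]*c + ... + a[n]*c**n by Horner's rule.
    a is the list [a(0), a(1), ..., a(n)]."""
    p = a[-1]                               # p := a(n)
    for i in range(len(a) - 2, -1, -1):     # i = n-1, ..., 1, 0
        p = p * c + a[i]                    # assumed loop body: one * and one +
    return p
```

Under this assumption each of the n iterations costs one multiplication and one addition, so f(n) = 2n and f(n) = Θ(n^1), i.e. alfa = 1.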
Expected Value

Significance of Expected Value

Expected value (EV) is a central principle in the theory of probability. It is used for the average estimation of some random value. Expected value is similar to a center of gravity, with the values of probability playing the role of the masses of solid points.

Let us assume that the game has n different outcomes, and the probability of each outcome is p[i]. The expected value of some variable x that takes values x[i] can be calculated with the following formula:

E(x) = x[1]p[1] + x[2]p[2] + ... + x[n]p[n]

In case the probabilities of the outcomes are equal (p[1] = p[2] = ... = p[n] = 1/n), the expected value equals the arithmetic mean:

E(x) = x[1]/n + x[2]/n + ... + x[n]/n = (x[1] + x[2] + ... + x[n]) / n

Why is the expected value the most important principle in the probability theory? It helps to predict the average of some random variable over a long period of trials: the mean of any random variable in the long term gets close to its expected value. This fact (the law of large numbers) is rigorously proved in the course of probability theory.
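As a concrete illustration (my own example, not from the page), the expected value of a fair die roll and its long-run sample mean:

```python
import random

def expected_value(values, probs):
    """E(x) = x[1]p[1] + x[2]p[2] + ... + x[n]p[n]."""
    return sum(v * p for v, p in zip(values, probs))

# Fair six-sided die: equal probabilities, so EV is the arithmetic mean.
die = [1, 2, 3, 4, 5, 6]
ev = expected_value(die, [1 / 6] * 6)   # 3.5

# Over many trials, the sample mean approaches the expected value.
rng = random.Random(1)
n = 100_000
mean = sum(rng.choice(die) for _ in range(n)) / n
```

After 100,000 simulated rolls the sample mean sits very close to the expected value 3.5.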
Solve this system of equations using the addition method?
x - y = 6
-3y = x + 22

y = -7, x = 13

that true?

scale the 1st by -3

-3x +3y = -18
-3y = x+22
----------------
-3x = x+4
-4x = 4

how do you get x+4

by using the "addition" method ... you add the 2 equations together after you modify one of them by a scalar. a scalar just means some number that you multiply the whole equation by, to scale it up or down to something that you can use

o so i just substitute it

well, substitution will work as well; but this is a new method for you to learn so that you have many ways you can attack a problem to get to a solution. this "addition" method is also commonly referred to as "elimination" since the goal is to eliminate one of the variables

o ok thank you, u make it sound easier to solve. so if i were to have -8x+3y=-5 and 8x-2y=6, i would add the -8x and the 8x to get zero, then 3y-2y to get y, and -5+6 to get 1, right?

practice helps :) the steps are pretty much like this:

x - y = 6 ; (1)
-3y = x + 22 ; (2)

given 2 equations, let's multiply one of them by a "n"umber to scale it. the easiest one would be the first one to me:

n(x - y = 6) ; (1)
-3y = x + 22 ; (2)

nx - ny = 6n ; (1)
-3y = x + 22 ; (2)

now what does "n" have to be to get rid of the -3y? id say n = -3 will work

-3x - -3y = 6(-3) ; (1)
-3y = x + 22 ; (2)

-3x + 3y = -18 ; (1)
-3y = x + 22 ; (2)

now we can add the equations together:

-3x + 3y = -18 ; (1)
-3y = x + 22 ; (2)
----------------------
-3x = x + 4 ; (3)

and solve the new equation (3) for the remaining variable

if they give you:
-8x + 3y = -5
8x - 2y = 6
then yes, the x parts are already set up for elimination; just add the 2 equations together

ok thank u sooo much i would give u thousands of medals if i could

youre welcome, and good luck ;)
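The scale-then-add procedure walked through above can be written as a small routine (a sketch of mine; it assumes the two equations are independent and that the first y-coefficient is nonzero):

```python
from fractions import Fraction

def solve_by_addition(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by the addition
    (elimination) method: scale equation (1) by n so the y terms cancel,
    then add it to equation (2). Requires b1 != 0 and independent equations."""
    a1, b1, c1, a2, b2, c2 = map(Fraction, (a1, b1, c1, a2, b2, c2))
    n = -b2 / b1                       # the scalar that makes n*b1 + b2 = 0
    x = (n * c1 + c2) / (n * a1 + a2)  # add n*(eq 1) to (eq 2), solve for x
    y = (c1 - a1 * x) / b1             # back-substitute into (eq 1)
    return x, y
```

For the thread's system x - y = 6, -3y = x + 22 (rewritten as -x - 3y = 22) it returns x = -1, y = -7, and for the student's follow-up system it returns x = 1, y = 1.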
NSString Format Specifiers

Have you ever gotten confused when you want to NSLog a long integer? What should you use to print a long: %ld, %lx, %f, or %d? Well, here is a complete list of all the NSString format specifiers:

%@ : Objective-C object, printed as the string returned by descriptionWithLocale: if available, or description otherwise. Also works with CFTypeRef objects, returning the result of the CFCopyDescription function.
%% : '%' character
%d, %D : Signed 32-bit integer (int)
%u, %U : Unsigned 32-bit integer (unsigned int)
%hi : Signed 16-bit integer (short)
%hu : Unsigned 16-bit integer (unsigned short)
%qi : Signed 64-bit integer (long long)
%qu : Unsigned 64-bit integer (unsigned long long)
%x : Unsigned 32-bit integer (unsigned int), printed in hexadecimal using the digits 0–9 and lowercase a–f
%X : Unsigned 32-bit integer (unsigned int), printed in hexadecimal using the digits 0–9 and uppercase A–F
%qx : Unsigned 64-bit integer (unsigned long long), printed in hexadecimal using the digits 0–9 and lowercase a–f
%qX : Unsigned 64-bit integer (unsigned long long), printed in hexadecimal using the digits 0–9 and uppercase A–F
%o, %O : Unsigned 32-bit integer (unsigned int), printed in octal
%f : 64-bit floating-point number (double)
%e : 64-bit floating-point number (double), printed in scientific notation using a lowercase e to introduce the exponent
%E : 64-bit floating-point number (double), printed in scientific notation using an uppercase E to introduce the exponent
%g : 64-bit floating-point number (double), printed in the style of %e if the exponent is less than -4 or greater than or equal to the precision, in the style of %f otherwise
%G : 64-bit floating-point number (double), printed in the style of %E if the exponent is less than -4 or greater than or equal to the precision, in the style of %f otherwise
%c : 8-bit unsigned character (unsigned char), printed by NSLog() as an ASCII character, or, if not an ASCII character, in the octal format \ddd or the Unicode hexadecimal format \udddd, where d is a digit
%C : 16-bit Unicode character (unichar), printed by NSLog() as an ASCII character, or, if not an ASCII character, in the octal format \ddd or the Unicode hexadecimal format \udddd, where d is a digit
%s : Null-terminated array of 8-bit unsigned characters. %s interprets its input in the system encoding rather than, for example, UTF-8.
%S : Null-terminated array of 16-bit Unicode characters
%p : Void pointer (void *), printed in hexadecimal with the digits 0–9 and lowercase a–f, with a leading 0x
%L : Length modifier specifying that a following a, A, e, E, f, F, g, or G conversion specifier applies to a long double argument
%a : 64-bit floating-point number (double), printed in scientific notation with a leading 0x and one hexadecimal digit before the decimal point, using a lowercase p to introduce the exponent
%A : 64-bit floating-point number (double), printed in scientific notation with a leading 0X and one hexadecimal digit before the decimal point, using an uppercase P to introduce the exponent
%F : 64-bit floating-point number (double), printed in decimal notation
%z : Length modifier specifying that a following d, i, o, u, x, or X conversion specifier applies to a size_t or the corresponding signed integer type argument
%t : Length modifier specifying that a following d, i, o, u, x, or X conversion specifier applies to a ptrdiff_t or the corresponding unsigned integer type argument
%j : Length modifier specifying that a following d, i, o, u, x, or X conversion specifier applies to an intmax_t or uintmax_t argument

Hope it will save someone time!
Matrices problem (gps)?

January 26th 2011, 11:39 PM  #1  Junior Member  Jan 2011

I am having some problems with my engineering maths project. I found out that the equation of a circle is (x-h)^2 + (y-k)^2 = r^2 and that it is non-linear, but I do not know how to convert it to a matrix form to find the planar location. I would appreciate it if someone would help or show me the way to do it. Thanks. The question is below.

(a) A satellite can be represented by a circle, centred at (a1, b1). The distance between the satellite and the GPS receiver is the radius of the circle, r1. The location of the GPS receiver is (x, y). Write down an equation that relates the centre & radius of the circle with (x, y).

(b) GPS is based on satellite ranging, i.e. calculating the distances between a receiver and the position of 3 satellites. The 2 other satellites are centred at (a2, b2) and (a3, b3). Their radii are r2 and r3 respectively. Write down the corresponding equations that relate the centre & radius with (x, y) for these 2 satellites.

(c) Explain whether the equations from (a) and (b) are linear or non-linear. If they are non-linear, algebraically simplify the equations to get a linear system in x and y.

(d) The location of the GPS receiver is on the circumference of each circle. It can be determined by computing the intercept of the circles. Express the system of equations from (a) and (b) in matrix form.

(e) From the data in the given table, determine the planar location (x, y) of a GPS receiver.

Circle 1: centre (-3, 50), radius 41
Circle 2: centre (11, -2), radius 13
Circle 3: centre (13, 34), radius 25

(f) Is matrix the only method to solve this problem? Suggest an alternative method.

January 29th 2011, 03:10 AM  #2  Grand Panjandrum  Nov 2005

It appears that this project is for a grade. Forum policy is not knowingly to help with any problem that counts towards a grade. This thread has been closed and will remain so unless the OP can satisfy the moderators that this is not assessed work.
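For what it's worth, here is a sketch of mine (not from the thread) of the idea behind part (c): subtracting one circle equation from another cancels the quadratic terms x² + y², leaving a 2x2 linear system that can be solved exactly.

```python
from fractions import Fraction

def receiver_location(c1, c2, c3):
    """Locate (x, y) on all three circles (ai, bi, ri).
    Subtracting circle 1's equation from circles 2 and 3 cancels
    x^2 + y^2 and leaves two linear equations in x and y."""
    (a1, b1, r1), (a2, b2, r2), (a3, b3, r3) = c1, c2, c3

    def row(ai, bi, ri):
        # 2(ai - a1) x + 2(bi - b1) y = r1^2 - ri^2 - a1^2 + ai^2 - b1^2 + bi^2
        return (2 * (ai - a1), 2 * (bi - b1),
                r1**2 - ri**2 - a1**2 + ai**2 - b1**2 + bi**2)

    A, B, C = row(a2, b2, r2)
    D, E, F = row(a3, b3, r3)
    det = Fraction(A * E - B * D)    # assumes the centres are not collinear
    x = (C * E - B * F) / det        # Cramer's rule on the 2x2 system
    y = (A * F - C * D) / det
    return x, y
```

Plugging in the table's data gives a point that lies on all three circles.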
tensor hierarchy for Lie groups from Maurer-Cartan form

The question is about the family of tensors that are naturally associated to any nice Lie group. Take the Maurer-Cartan form, $\omega=g^{-1} dg$, and I would like to make the covariant index of this one-form explicit, $\omega_a =g^{-1}\partial_a g$. Then the Riemannian, left/right invariant, metric is just $g_{ab}=tr(\omega_a\omega_b)$. More generally one can construct a family of left-right invariant tensors $T_{a_1...a_s}=tr(\omega_{a_1}...\omega_{a_s})$. Does this family have any meaning/application/nice properties?

Hmmm. You have some confusion here, and I'm not sure exactly what you mean by 'make the covariant index...explicit'. However, you should know that, if you just take any basis of left-invariant forms, then the formula you wrote down for a metric $g$ will not be bi-invariant. (In fact, many Lie groups do not even admit a bi-invariant volume form, let alone a metric, so you will need to say what you mean by 'nice', too.) The bi-invariant tensors on a given Lie group are controlled by its adjoint representation, and that gives the whole story there. Yes, these have lots of applications. – Robert Bryant Aug 6 '13 at 9:36

Indeed, the question is vague and a lot was put on the 'nice Lie group' term. My interest is in the higher-rank tensors one can have, and I will accept any Lie group that is needed for them to exist. – Eugene Starling Aug 6 '13 at 14:13

2 Answers

Sounds like you are in physics; particle physicists often assume that all Lie groups are compact. Compact Lie groups admit biinvariant metrics. The tensors you are considering are essentially the characteristic polynomials in the adjoint representation, so if you insert wedge product signs (work with the associated alternating tensors) then they represent certain of the Chern-Weil invariant differential forms that give rise to characteristic classes, if I understand your notation correctly. There may be some other Chern-Weil forms (like the Pfaffian, if $G=SO(2n)$).

Thank you for the comment. What if I symmetrize instead of taking alternating tensors? – Eugene Starling Aug 6 '13 at 14:14

Expanding on Ben's answer: If you symmetrize, you get the left (or bi) invariant extension of the characteristic polynomials.

Example: Let $G=SO(n,\mathbb R)$ be a simple matrix group. Then you get (symmetrized version) the Newton polynomials in the eigenvalues $\sum \lambda_i^p$ of $g^{-1} X$, if $(g,X)$ is a tangent vector with foot point $g$ of $SO(n)$ in $Mat(n)$.

Thanks, this was really helpful! – Eugene Starling Aug 6 '13 at 20:37
For each natural number n, let An=(x in the real #s such that...

November 13th 2011, 08:01 PM  #1  Junior Member  Nov 2011

For each natural number n, let A_n = {x in the real numbers such that n-1 < x < n}.
Prove that {A_n : n a natural number} is a pairwise disjoint family of sets, and that
the union over all natural numbers n of the A_n = (positive real numbers) - (natural numbers).

How do you do this proof? I'm stuck. It's extra credit and I'd like the bonus points.

November 14th 2011, 02:46 AM  #2

Re: For each natural number n, let An=(x in the real #s such that...

Why don't you post in LaTeX? I cannot read that. In order to help you we need to see your work. Post some of your own thoughts on this question. Tell us where you are having difficulties.
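Not a proof, but a quick numerical illustration (my own sketch) of what the two claims say: every positive non-integer lies in exactly one A_n, while the positive integers lie in none (the endpoints of the open intervals are excluded), which is why they are removed from the union.

```python
def members(x, N=100):
    """Indices n in {1, ..., N} with x in A_n = (n-1, n), endpoints excluded."""
    return [n for n in range(1, N + 1) if n - 1 < x < n]
```

For example, 2.5 lies only in A_3, 0.1 only in A_1, and the integer 4 lies in no A_n at all.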
Mount Prospect Algebra Tutor

...I currently tutor two students in Prealgebra and like to instill confidence in them so they know that math is something you can be good at if you do it enough. When I took the Basic Skills test for my teaching certificate I received a perfect score on the math section. Currently I am taking a L...
16 Subjects: including algebra 1, reading, English, Spanish

...Predominantly, we are learning how to think using algebra as a language, and we have lines, parabolas, etc. as a play-field. Now that we're comfortable thinking using the algebraic language, we start to think about new things. We flesh out the relations we had only touched on lightly before (e....
14 Subjects: including algebra 1, algebra 2, geometry, ASVAB

...Since that time, I have been working and tutoring on the side. I recently went back to school at North Central College to get my teaching certification. I just completed my student teaching experience (teaching Algebra I and Algebra II) and will be certified June 2014.
7 Subjects: including algebra 1, algebra 2, geometry, trigonometry

...I am qualified in math by virtue of passing the Elementary Math test with 100%. I certainly am qualified to assist in all aspects of elementary subjects, although my only real interest would be in the math part. I am an actuary, and one of the topics covered in our syllabus was Finite Difference, wh...
26 Subjects: including algebra 1, algebra 2, calculus, physics

...I enjoy helping others and I always strive to help others understand the material to the best of their ability. I do not have any work experience as a tutor (not official at least). I am looking to spend my summer earning some money and helping others. I have spent 14 years in Poland. I moved to the United States when I was 14.
5 Subjects: including algebra 2, calculus, precalculus, Polish
Permutations with short monotone subsequences

Dan Romik

We consider permutations of 1, 2, ..., n² whose longest monotone subsequence is of length n and which are therefore extremal for the Erdős-Szekeres Theorem. Such permutations correspond via the Robinson-Schensted correspondence to pairs of square n × n Young tableaux. We show that all the bumping sequences are constant and that therefore these permutations have a simple description in terms of the pair of square tableaux. We deduce a limit shape result for the plot of values of the typical such permutation, which in particular implies that the first value taken by such a permutation is with high probability (1+o(1))n²/2.
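The extremal condition in the abstract can be illustrated with a short computation (my sketch, not part of the paper): for the permutation of 1, ..., n² built from n increasing blocks of length n arranged in decreasing order of values, the longest monotone subsequence has length exactly n.

```python
from bisect import bisect_left

def lis_length(seq):
    """Length of the longest increasing subsequence (patience-sorting trick)."""
    piles = []
    for x in seq:
        i = bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

def longest_monotone(seq):
    """Length of the longest monotone (increasing or decreasing) subsequence."""
    return max(lis_length(seq), lis_length([-x for x in seq]))

def extremal(n):
    """A permutation of 1..n^2 with longest monotone subsequence of length n:
    n increasing runs of length n, with later runs taking smaller values."""
    return [b * n + i + 1 for b in range(n - 1, -1, -1) for i in range(n)]
```

For n = 3 this produces 7, 8, 9, 4, 5, 6, 1, 2, 3: any increasing subsequence stays inside one run (length 3) and any decreasing subsequence picks at most one value per run (length 3).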
the definition of isosceles

World English Dictionary
isosceles (aɪˈsɒsɪˌliːz)
1. (of a triangle) having two sides of equal length
2. (of a trapezium) having the two nonparallel sides of equal length
[C16: from Late Latin, from Greek isoskelēs, from iso- + skelos leg]
Collins English Dictionary - Complete & Unabridged 10th Edition 2009 © William Collins Sons & Co. Ltd. 1979, 1986 © HarperCollins Publishers 1998, 2000, 2003, 2005, 2006, 2007, 2009

Origin: 1545–55; from Late Latin, from Greek isoskelḗs "with equal legs", equivalent to iso- + skélos "leg" + adjective suffix

American Heritage Science Dictionary
isosceles (ī-sŏs'ə-lēz')
Of or relating to a geometric figure having at least two sides of equal length.
The American Heritage® Science Dictionary Copyright © 2002. Published by Houghton Mifflin. All rights reserved.

Example sentences
Draw a circle in the center and then five lines from the center to the corners to create five isosceles triangles.
The figure shows a row of six solid white isosceles triangles.
Ask students to find two right angle isosceles triangles and predict what will be formed when they slide them together.
They will then find measures of the angles using the properties of isosceles and right triangles.
The atoms in a water molecule (two hydrogen and one oxygen) are arranged at the corners of an isosceles triangle.
When students work with tangrams, two congruent isosceles right triangles can be put together to compose a square.
Most of the shapes of elementary geometry relate to the rectangle, and to the isosceles triangles generated by its diagonals.
The nose cone triangle can either be an isosceles or equilateral triangle.
Is time quantum at the microscopic level?

Time has always baffled me. I have two questions for you. 1. What's the genesis of time? 2. Is time quantum at the microscopic level? Thank you.

SinghRP, thanks for your inquiry. Your probing into the understanding of time at the fundamental level reflects the kind of curiosity and inquisitiveness that has propelled physics throughout its development. You are in good company with the attitude reflected in your posts. I wish I could help you understand time. At the fundamental level physicists do not comprehend what accounts for our notion of time--the "passage of time." It's psychological, but also it's something that physicists have been able to account for in some sense as a parameter in their mathematical equations. With the advent of special relativity, time becomes even more mysterious in the way it is incorporated into the space-time concept. This removes us even further from the understanding of time at a fundamental level--although, in a mathematical sense, the space-time theory enhanced the advancement of physics and even gave us a deeper insight into the universe. Many of those aspects would be considered too philosophical for discussion here, although some of these discussions have been given much latitude and flexibility by the forum monitors.

Some would say that time is a value read on a clock. The problem is that clocks themselves are not time; they are physical objects that occupy space. We can put numbers on the clock and talk about the rate of rotation of the hands of the clock, but that is not time, intrinsically. Yes, physics can calibrate the clock and assign a definitional meaning to the readings on a clock, but that is not the same as providing a fundamental understanding of time. I don't think there will be much help for you here, but you might search the topic on amazon.com where you will find books like "About Time", "The Fabric of Time", "The Labyrinth of Time", and "The End of Time."
Much of the discussion found in those books is not appropriate for this forum, since the kind of probing you are doing is not found in the formal scientific journals of physics, and much of it is considered speculative by the standards of this forum. Kurt Gödel (one of the great logicians of mathematics and a colleague of Einstein's at Princeton) once presented what he felt was a logical proof that time in physics was invalid. But his arguments would be considered speculative and philosophical--not appropriate for this type of forum, where we emphasize help in understanding physics based on mainstream, universally accepted concepts with support from peer-reviewed literature. You should particularly avoid discussions here that seem to fall into the philosophical category rather than physics.
[plt-scheme] Why does this code pause at the end
From: wooks (wookiz at hotmail.com)
Date: Fri Mar 5 12:40:31 EST 2010

Running this in Advanced Student. It's a fast version of Pascal's Triangle implemented by computing each row before it returns a result. The program returns its result but a (relatively) long pause ensues before the scheme prompt appears. Wondering why.

(define (pascal-row row)
  (cons 1
        (local [(define (sum-neighbours row)
                  (if (or (empty? row) (empty? (rest row)))
                      (list 1)
                      (cons (+ (first row) (second row))
                            (sum-neighbours (rest row)))))]
          (sum-neighbours row))))

(define (pascal n k)
  (cond
    [(> k n) (pascal k n)]
    [else
     (local [(define (pascal m previous-row)
               (cond
                 [(= m n) (+ (list-ref previous-row (sub1 k))
                             (list-ref previous-row k))]
                 [else (pascal (add1 m) (pascal-row previous-row))]))]
       (pascal 0 empty))]))

(pascal 345 235)

Posted on the users mailing list.
Asking for help: Reforge + UW nerfed + Matrix Restabilizer

Posts: 3
Joined: Tue May 17, 2011 10:09 am

Hello there, fellow druids! I'm confused about how to reforge after picking up 391 Fandral's, the UW nerf, and using Matrix Restabilizer. Here's the I also have 2 372 trinkets from Nef and Halfus. As far as I know, crit procs are better than haste or mastery because bleeds applied during the proc are modified even after it fades, right? I also want to aim for hit/exp caps... So, how do you think I should reforge? I think it's a very complicated question in this case =/

You have done 6/7 in firelands hc and you ask about reforging? Anyways, I can't read Russian, but the short answer to your question is: Reforge to whatever suits your playstyle. According to what you write, that's hit/exp cap then crit. I personally prefer hit/exp cap then mastery, then crit. Plug the relative stat values you like into wowreforge.com and you're done. Maybe Floofles can give some more specific advice on how to reforge for ragnaros hc... (grats on the kill btw)

Posts: 3
Joined: Tue May 17, 2011 10:09 am

Lax wrote: You have done 6/7 in firelands hc and you ask about reforging?

Yeah, that's right) I'm very confused as it is the first time I don't know how to do it properly. I will do some calculations in Mew later, but that is going to take a looong time. The problem is the Matrix Restabilizer proc, mostly. And for ragnaros I roll with caps + haste usually.

calibro wrote: The problem is Matrix Restabilizer proc, mostly.

I just picked that up Monday as well. I didn't expect it to really matter what it procced, but now you got me curious. Just did some Mew runs with lots of different reforge strategies (both hit/exp capped and no hit cap). I had raid buffs/food/pot ticked on, 100k iterations, 10% duration randomization (300 sec base), cat simulator script (no args).
Basically, the difference between the reforge strategies was around 200 dps from the worst to the best for my character (377 ilvl). In other words, it didn't really matter. I mean ... you might gain 0.6% dps in theory. How it plays out in the real fight, RNG and all, matters a lot more though.

Posts: 457
Joined: Sat Jan 29, 2011 7:17 am

For the reforge questions, wowreforge.com is a good answer. Just yank up your character, enter the hit/exp values you want to hit, win-win. If you value Crit as your main secondary stat for H Rag specifically for the Matrix proc, you can probably edit the stat weights to reflect Crit as your secondary stat of choice, which should in turn make the program favor reforging Mastery or Haste when trying to calculate the hit/exp caps as well, as by default wowreforge uses Mastery>Crit>Haste.

Posts: 355
Joined: Wed Jun 29, 2011 4:49 am

calibro wrote: As far as I know, crit procs are better than haste or mastery because bleeds applied during the proc are modified even after it fades, right?

That's not true - the crit rate of a DoT is updated on the fly with the caster's crit chance. This could be easily verified using the (pre-nerf) moonkin tier 11 4-piece bonus. Mastery, however, is only taken when the DoT is applied, so if you are looking for this property then you should go with a mastery proc.

Posts: 116
Joined: Thu May 19, 2011 7:34 am

How you reforge for ragnaros depends on your tactic and playstyle. The way Inner Sanctum do it means I get to aoe pewpew some adds occasionally, and there are times where I multidot stuff (Scions for example) or when I'm not in range of the boss but want to still do damage to him (P4), so I went with mastery > hit > expertise > all the other junk. I was using seed and UW, I'll be using seed and FD now since I have nothing better.

Posts: 11
Joined: Sun Jul 31, 2011 4:30 am

That's not true - the crit rate of a DoT is updated on the fly
This could be easily verified using the (pre-nerf) moonkin tier 11 4-piece bonus. I was definitely under the impression that Rip and Rake 'snapshot' stats (AP, Crit%, Mastery, %dmg modifiers like Tiger's Fury) when first applied and then: * Doesn't 'update' if you change stats * Doesn't 'update' stats if you Shred-Extend * Does update stats if you refresh by 'Rip'ping again or Ferocious Biting. I went to a training dummy and waited until Essence of the Cyclone ("Twisted") procced, and if I reapplied rake it seems like I crit more (after the buff wears off). That doesn't mean much when we're talking about going from 40% to 50% crit though. Konungr linked his Rag log in the other thread: ... 933&e=5536 Here he has 4 crits in a row after his Agi & Crit buff fades (but on the same Rip) Code: Select all [22:12:13.817] Konungr gains Ancient Petrified Seed from Konungr [22:12:13.817] Konungr casts Ancient Petrified Seed [22:12:16.684] Konungr gains Twisted from Konungr [22:12:18.128] Konungr casts Rip on Ragnaros [22:12:18.357] Ragnaros afflicted by Rip from Konungr [22:12:20.325] Konungr Rip Ragnaros *22102* [22:12:22.364] Konungr Rip Ragnaros *22102* [22:12:24.393] Konungr Rip Ragnaros 10730 [22:12:26.377] Konungr Rip Ragnaros *22102* [22:12:26.723] Konungr's Twisted fades from Konungr [22:12:28.396] Konungr Rip Ragnaros *22102* [22:12:28.840] Konungr's Ancient Petrified Seed fades from Konungr [22:12:30.440] Konungr Rip Ragnaros *22102* [22:12:32.466] Konungr Rip Ragnaros *22102* [22:12:34.376] Konungr Rip Ragnaros *22103* [22:12:36.597] Konungr Rip Ragnaros *22102* [22:12:37.624] Konungr casts Ferocious Bite on Ragnaros [22:12:38.043] Konungr Ferocious Bite Ragnaros 8542 [22:12:38.377] Konungr Rip Ragnaros 8191 [22:12:40.663] Konungr Rip Ragnaros 8191 [22:12:42.493] Konungr Rip Ragnaros 8190 [22:12:44.391] Konungr Rip Ragnaros 8191 [22:12:46.297] Konungr Rip Ragnaros 5075 [22:12:48.309] Konungr Rip Ragnaros *16872* [22:12:50.341] Konungr Rip Ragnaros 8190 Posts: 24 
Joined: Sat May 28, 2011 4:30 pm

I call statistical anomaly on this... one sample is simply not enough to determine such behaviour.

Posts: 355
Joined: Wed Jun 29, 2011 4:49 am

The moonkin t11 4-piece bonus used to increase crit chance by 99% when it triggered, and it was easy to verify that this updated the crit chance of any active moonfire or insect swarm on the fly. It's possible that feral DoTs behave differently, or that crit from this set bonus is treated differently than crit from rating, but a sample size of 5-10 ticks is not nearly enough to even warrant

Posts: 11
Joined: Sun Jul 31, 2011 4:30 am

Apologies then, I know that log is definitely not strong evidence. 4 crits in a row when you have a decent chance to crit is not that unlikely; I was just looking through it for the other thread and noticed those strings of dot-crits so thought I'd bring it up.
How many ounces is 1 and one fourth cup?

United States customary units are a system of measurements commonly used in the United States. The U.S. customary system developed from English units which were in use in the British Empire before American independence. Consequently most U.S. units are virtually identical to the British imperial units. However, the British system was overhauled in 1824, changing the definitions of some units used there, so several differences exist between the two systems. The majority of U.S. customary units were redefined in terms of the meter and the kilogram with the Mendenhall Order of 1893, and in practice, for many years before. These definitions were refined by the international yard and pound agreement of 1959. The U.S. primarily uses customary units in its commercial activities, while science, medicine, government, and many sectors of industry use metric units. The SI metric system, or International System of Units, is preferred for many uses by NIST.

The system of imperial units or the imperial system (also known as British Imperial) is the system of units first defined in the British Weights and Measures Act of 1824, which was later refined and reduced. The system came into official use across the British Empire. By the late 20th century, most nations of the former empire had officially adopted the metric system as their main system of measurement, but some Imperial units are still used in the United Kingdom and Canada.
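The excerpts above never state the conversion itself, so for the record: in U.S. customary measure, 1 cup is defined as 8 fluid ounces, so 1 1/4 cups works out to 10 fluid ounces. A quick arithmetic check in Python:

```python
FL_OZ_PER_CUP = 8             # U.S. customary definition: 1 cup = 8 fluid ounces
cups = 1 + 1/4
print(cups * FL_OZ_PER_CUP)   # prints 10.0
```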
GMAT Tip of the Week: The Silent G in GMAT We’re back with another GMAT Tip of the week for Hip Hop Month – with a side note that an eye for the number line should show you that this looks to be a Hip Hop month for the ages with five Fridays! (You could probably check a calendar, too, but knowing that today is the 9th, that gives us 16, 23, and 30 as Fridays to follow before the calendar flips to another month and another theme.) This week, let’s talk about GMAT difficulty, and especially quantitative problem difficulty. Search online and you’ll probably find quite a few “GMAT-style” quant questions in forums and on blogs that are simply diabolical, requiring a dozen steps and some obscure mathematical knowledge. In most cases, those questions really aren’t GMAT-style. Check the harder questions in the Official Guide for GMAT Review or the GMAT Prep practice tests, and you’ll find that they tend to resemble this Lil Wayne lyric: Paper chasing, tell that paper “look I’m right behind ya” Real Gs move in silence like lasagna. Think about that lyric for a second. Lasagna doesn’t make noise, of course…but is that really what he means? And what does that have to do with Gs? If you’re like most who hear that line for the first few times, you file that under “I guess I’m getting too old for urban slang and metaphor” until… Oh, wow. I see what he did there. That’s clever – that line went from “weird” to “very impressive”. (The reason is below…we’ll let you make the discovery yourself if you want) If you see what Weezy did with that lyric, you’ll start to notice much more of that when you consult explanations for hard GMAT quant problems that you missed. Yes, there will be a handful of “I never would have thought to do that / would have never remembered that rule” questions out there, but not nearly as many as the “oh, wow – I can’t believe I didn’t see that at first but…yeah, they got me with something I really should seen” questions. 
Consider the example: In triangle PQS above, if PQ = 3 and PS = 4, then PR = (A) 9/4 (B) 12/5 (C) 16/5 (D) 15/4 (E) 20/3 Take a few seconds to play with the problem. You should find quite quickly that the larger triangle is a 3-4-5 triangle, making side QS 5. But from there… Have you gone down the messy quadratic path yet? Theoretically you could call, say, line SR x, and line QR 5-x, and then call line PR y and use Pythagorean Theorem to create equations for both of the smaller triangles, using x and y in both so that you can solve for both variables. But…that’s likely well more than a 2-minute process and will involve some ugly, ugly math. But remember: real Gs move in silence like lasagna. Good, difficult GMAT problems are often much more about finding creative ways to apply simple rules than they are about trying to flex your mathematical muscle. What if you were to rotate the triangle to look more like: Then it should become clearer: if you consider line QS the base, then line PR is the height. And since we know that Area=1/2(base)(height), using 3 and 4 as the base and height (the natural way to look at the triangle in its first view), the area must be 6. So if 5 is the base, then the area (6) = 1/2 * 5 * line PR. Line PR must be 12/5. And know this: this is a difficult question for most students. But it’s not difficult because it’s diabolical. It’s difficult because you have to think deeply about some fairly basic surface knowledge (really, 3-4-5 triangles and area of a triangle are all you need). The GMAT, or at least its initial G, moves in silence like lasagna. If you’re beating your head against a wall with hard math, step back and re-analyze what you have. And in practice, learn to appreciate that “oooohhhhh” moment when you realize what the “trick” is on these hard problems. 
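The area trick above generalizes: for a right triangle, the altitude to the hypotenuse equals (leg × leg) / hypotenuse. A quick numeric check of the answer (a Python sketch; the variable names are just the side labels from the problem):

```python
import math

pq, ps = 3.0, 4.0            # the two legs
qs = math.hypot(pq, ps)      # hypotenuse: 5.0 (a 3-4-5 triangle)
area = 0.5 * pq * ps         # 6.0, reading the legs as base and height
pr = 2 * area / qs           # re-read the same area with QS as the base
print(pr)                    # 2.4, i.e. 12/5 -- answer (B)

# cross-check against the "messy" route: QR via similar triangles,
# then Pythagoras in the small triangle PQR should return leg PQ
qr = pq ** 2 / qs            # 9/5
assert math.isclose(math.hypot(qr, pr), pq)
```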
In the spirit of Hip Hop Month let’s also paraphrase TLC on this topic — on hard questions, don’t go chasing waterfalls…the answer is probably going to be found in the rivers and lakes that you’re used to, but just at deeper depths than you typically go. (The Lil Wayne answer – the G in “lasagna” is silent.) Getting ready to take the GMAT soon? On March 21 we will run a free live online seminar to help you get up to speed on the new Integrated Reasoning section coming in June. And, as always, be sure to find us on Facebook and Google+, and follow us on Twitter! 3 Responses 1. Silent G was my nickname in high school. 2. Just one word WOW :) 3. eyeballing. PQ 3 PS 4 SQ 5 (triplets). SQ/2 = 5/2 = 2.4 only answer choice 12/5 gives 2.4. the rest choices increase…………so b) :)
An Introduction to OpenSSL, Part One: Cryptographic Functions
by Holt Sorenson
last updated August 22, 2001

This is the first article in a four-part series on OpenSSL, a library written in the C programming language that provides routines for cryptographic primitives utilized in implementing the Secure Sockets Layer (SSL) protocol. OpenSSL also includes routines for implementing the SSL protocol itself, and it includes an application called openssl that provides a command line interface to both sets of routines. This article introduces some cryptographic tools that the SSL protocol has borrowed from cryptographers' bags of tricks to accomplish its design goals. While readers who are already familiar with cryptographic concepts may be familiar with the content presented in this installment, the following "Brief Overview of Cryptographic Primitives" section should nevertheless be interesting reading and will certainly set up the rest of this discussion.

Brief Overview of Cryptographic Primitives

This section will begin by first introducing symmetric ciphers. Next, asymmetric ciphers will be examined. Following that, hash functions and Message Authentication Codes (MACs) will be explored. Lastly, cryptographic randomness will be explained with an ardent rant/plea to be zealous and vigilant about properly providing cryptographic randomness for cryptosystems.

Symmetric (Single key) ciphers

Symmetric, single key, or secret key ciphers utilize an algorithm whose inputs are a key and some plaintext. The resultant output is ciphertext. When one wishes to recover the plaintext, one feeds the ciphertext and the key into the algorithm. Such an algorithm has the property that if an attacker knows the ciphertext and the algorithm, neither the plaintext nor the key can be recovered. This makes the key a critical piece of information that should be protected from would-be attackers. There are two commonly-used families of symmetric ciphers.
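Before looking at the two families, the round-trip property just described (one key both encrypts and decrypts) can be sketched with a deliberately insecure toy in Python. Repeating-key XOR is for illustration only; it is trivially breakable and must never be used as a real cipher:

```python
from itertools import cycle

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy single-key 'cipher': XOR each byte against a repeating key.
    Applying it twice with the same key restores the original bytes."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret"
ciphertext = xor_cipher(key, b"attack at dawn")          # encrypt
assert xor_cipher(key, ciphertext) == b"attack at dawn"  # same key decrypts
```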
One family is called stream ciphers, as they are mathematically designed in such a way as to encipher a constant stream of information. The other family is called block ciphers because these ciphers operate on blocks of data. One can learn more about both types of ciphers by consulting [RSAFaq2]. A common block cipher implemented by OpenSSL is Triple DES (3DES) [3DESCite], which is based on the Data Encryption Standard (DES). Another is Blowfish [BFcite]. Both 3DES and Blowfish will be used in the examples later in the article. 3DES's progenitor, DES, is now obsolesced by the relentless march of technology. Its successor is called the Advanced Encryption Standard [AESCite]. The nascent AES has yet to be included in OpenSSL, however. An example of a common stream cipher implemented by OpenSSL is RC4 [RC4cite].

Asymmetric (Dual key) ciphers

Asymmetric, dual key, or public key ciphers are a bit more complex in their design and implementation. Alice and Bob are two persons who only have a public channel over which to communicate. They wish the content of their messages to be known only by each other. This requirement keeps Alice and Bob from transmitting a secret key to one another across the public channel, because if eavesdropping Eve was monitoring the channel she could use the key she learned from Bob and Alice's transmission to decrypt all transmissions enciphered with that key.

Bob has information for his friend, Alice, that he wishes no one else to know. He also wishes to receive an acknowledgement of the message that he sends. His requirement for this acknowledgment is that it have the property that only Alice could create the acknowledgment. Such a property allows Bob to authenticate that Alice created the acknowledgement message. Both Bob and Alice know that their nosey adversary, Eve, is eavesdropping on their communication. They decide to utilize asymmetric encryption because of their security requirements for this transaction.
Bob and Alice agree upon an asymmetric encryption algorithm and Alice creates a key pair that contains two keys. One is known by both Bob and Eve because Alice sends the key to Bob, and Eve eavesdrops on the transmission. This key is called the public key. The other key, known only to Alice, is called the private key. Bob feeds Alice's public key and plaintext to the asymmetric algorithm and the resultant output is ciphertext. Bob sends the ciphertext to Alice. Alice recovers the plaintext by feeding the ciphertext and the private key to the algorithm.

The algorithm is termed asymmetric because there are two keys involved: there is the public key, used to encrypt the plaintext, and the private key, used to decrypt the ciphertext. The algorithm has the property that if Eve knows the ciphertext, the algorithm, and the public key, neither the plaintext nor the public key's complement, the private key, can be recovered. This makes the private key a critical piece of information that should be protected from Eve and her cohorts.

Alice then crafts her acknowledgment message and feeds the asymmetric algorithm the private key and the acknowledgement message. The result of the operation is the signature. The signature and the message are published. Bob, who has the public key corresponding to the private key held by Alice, can verify the signature with Alice's public key. Since only Alice has the private key, only Alice could have performed the initial signing operation that yielded the published signature that is linked to the message by the private key and the algorithm. Although Eve knows the algorithm, the public key, the message, and its signature, Eve can't recover Alice's private key. Bob is assured that Alice really received the message, because only she could craft the acknowledgement that he verified she created.

A common asymmetric cipher is RSA (named for its creators, Rivest, Shamir, and Adleman) [RSACite].
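The whole exchange can be walked through with textbook RSA and deliberately tiny numbers. This Python sketch is for intuition only; real keys are hundreds of digits long and real systems add padding:

```python
# key generation (Alice): two small primes, public pair (n, e), private d
p, q = 61, 53
n = p * q                     # 3233, the public modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent
d = pow(e, -1, phi)           # 2753, the private exponent

# encryption (Bob) / decryption (Alice)
m = 65                        # a message, encoded as a number less than n
c = pow(m, e, n)              # Bob encrypts with the public key: 2790
assert pow(c, d, n) == m      # only Alice's private key recovers m

# signing (Alice) / verification (Bob)
s = pow(m, d, n)              # Alice signs with the private key
assert pow(s, e, n) == m      # anyone can verify with the public key
```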
A common asymmetric signature scheme is DSA (Digital Signature Algorithm) [DSSCite]. The Diffie-Hellman algorithm is often used in conjunction with DSS when a use of DSS also needs asymmetric encryption [DHCite]. Whether using a symmetric or asymmetric cipher, care should be taken to use keys of a sufficient length that will protect your transaction for the amount of time that its data is sensitive. In general, the longer you wish to protect the data, the longer the key should be. [KeyLengthCite] discusses this in greater detail. Also, the scenario explained above only protects against passive attackers such as Eve. A malicious attacker, Mallory, could actively attack Alice and Bob's transmissions by posing as Alice to Bob, and as Bob to Alice. Such an attack is called a man-in-the-middle attack, and more steps such as out-of-band verification are needed in a protocol to thwart such attacks.

Hash functions and Message Authentication Codes (MACs) are two cryptographic mechanisms that allow parties to verify the integrity of data that has been transmitted or has been stored for a period of time.

Hash Functions

A cryptographic hash, also known as a message digest or a modification detection code, is an algorithm that, when fed a set of data, computes a unique identifier. This identifier cannot be replicated by feeding any other data to the algorithm except that particular set of data. This property is called collision-resistant. Also, if an attacker knows the algorithm and the unique identifier, the attacker cannot recover the set of data by feeding the algorithm the unique identifier. This property is called one-way. One example of a hash function is the Secure Hash Algorithm (SHA) [SHACite]. Another hash function that is often used is Message Digest 5 (MD5) [MD5Cite]. SHA is being updated to be usable with the new blocksizes of AES [SHAupdate].
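Both properties are easy to poke at interactively. Python's standard hashlib module wraps these same digest algorithms (and, on most builds, is itself backed by OpenSSL):

```python
import hashlib

d1 = hashlib.sha1(b"attack at dawn").hexdigest()
d2 = hashlib.sha1(b"attack at dusk").hexdigest()

assert len(d1) == 40   # SHA-1 always yields 160 bits (40 hex characters)
assert d1 != d2        # a one-word change gives a completely unrelated digest
assert len(hashlib.md5(b"attack at dawn").hexdigest()) == 32   # MD5: 128 bits
```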
MD5 is faster than SHA, but a recent paper has shown that it is not as collision resistant as originally conjectured [Dobbertin1996].

Message Authentication Code (MAC)

A message authentication code, or MAC, feeds a key and plaintext to an algorithm to create the MAC. Both stream and block ciphers can be used as MACs [RSAFaq4]. Hash functions can also be used as MACs in conjunction with a shared secret, or key. Since MACs use keys as part of their algorithm, one should take the same care with these keys as one would take with keys used in any other cryptographic operation.

An example: Alice and Bob's brother, Bobcheck, have decided they want to exchange data. They want assurance that the data hasn't changed while in transit. They aren't worried about the data remaining confidential. Bobcheck and Alice have previously agreed on a shared secret, and have agreed to use the HMAC [Krawczyk1997] function. Alice takes the key and message and runs them through the HMAC algorithm. She transmits the message and the HMAC result to Bobcheck. Bobcheck runs the key and the message through the HMAC algorithm. He verifies that the result of the HMAC is the same as that which Alice transmitted. Since it is, he knows the message has traversed the hostile environment without modification. Bobcheck can trust that Alice was the person that sent the message since she is the only other person in possession of the shared secret.

Cryptographic Randomness

One of the easiest places to break a cryptographic system is to use values for keys that aren't cryptographically random. [Eastlake1994] more colorfully articulates this idea by stating: "For the present, the lack of generally available facilities for generating such unpredictable numbers is an open wound in the design of cryptographic software... the only safe strategy so far has been to force the local installation to supply a suitable routine to generate random numbers. To say the least, this is an awkward, error-prone and unpalatable solution".
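Stepping back for a moment, the Alice/Bobcheck exchange above is exactly what Python's standard hmac module implements (the HMAC construction from [Krawczyk1997]; the key and message values here are placeholders):

```python
import hashlib
import hmac

key = b"previously-agreed-shared-secret"   # placeholder value
message = b"the data being protected"

# Alice computes the MAC and transmits message + tag
tag = hmac.new(key, message, hashlib.sha1).hexdigest()

# Bobcheck recomputes and compares; compare_digest avoids timing leaks
assert hmac.compare_digest(
    tag, hmac.new(key, message, hashlib.sha1).hexdigest())

# a modified message no longer matches the tag
forged = hmac.new(key, b"tampered data", hashlib.sha1).hexdigest()
assert not hmac.compare_digest(tag, forged)
```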
To say the least, this is an awkward, error-prone and unpalatable solution". A cryptographically "random number is one that the adversary has to guess, [and for which] there is no strategy for determining [the number] that is better than brute force" [Callas1996]. When random numbers are needed, one should utilize a source of randomness that, if identically replicated by an attacker, would not generate the same numbers for your application as it does for the attacker. Sources of randomness are said to be truly random if the attacker has infinite computing resources and the difficulty of computing the random numbers that you are generating remains intractable. Sources of randomness are said to be pseudo random if they are only safe against an attacker with limited computational resources [Ellison1995]. Unfortunately many people implementing cryptography have failed to grasp the concepts elucidated above. Recently a minor change was made to OpenSSL to check the random value seeding the OpenSSL pseudo random number generator. This resulted in persons asking the OpenSSL mailing list how to skirt the check so that they could utilize OpenSSL without providing an unguessable seed. They received recommendations such as: • use a constant text string • use a DSA public key • seed the OpenSSL random number generator with bytes generated by rand(3) • read /etc/passwd • read the OpenSSL executable off the disk • hash files in the current directory • use a dummy file to trick the check The worst advice given was to rip the check code out of the OpenSSL library [Guttman2000]. This would cause the prng to be seeded with nothing, resulting in consistently predictable output. The query has been put forth enough by users of OpenSSL that it is now a FAQ item [OpenSSLFAQRNG]. One last example of poor suggestions on seeding a PRNG is to "use several files as random seed enhancers which will help to make the key more secure. 
Text files that have been compressed with a utility such as gzip are good choices." [Slacksite]. Random Seed Enhancers? Bollocks. gzip has a regular file format that allows manipulation of the file. This format is predictable. It is also trivial for an attacker to create the same seed by gzipping the same files that you have. How many files are on your system? I found a little over 45,000 on the system on which I am writing this article. 45,000 is a small enough number that a determined, if bored, attacker can gzip each file and try the result as a seed to the PRNG that generated your SSL-enabled server's private key. Even using multiple files with this method to seed OpenSSL's PRNG is silly given the number of off-the-shelf solutions to generating cryptographically random seeds referenced below. I imploringly beseech that you not subjugate yourself to such putrescent refuse! Such statements are great for fertilizing farmland occupying hundreds of acres, but don't cut the proverbial mustard as adequate recommendations for how to seed a PRNG. Remember that PRNGs are just that, pseudo. Assisting PRNGs in having a proper start is a human problem because PRNGs don't know better. Innumerable are the triage group therapy sessions cryptographers have been forced to convene because they had to console traumatized PRNGs that had been abused by being fed predictable input [Goldberg1996]. Have you ever seen a PRNG get its stomach pumped? It's not pretty. If the algorithms used by gzip really were good enough for this purpose then said algorithms would alone be used as PRNGs. Some possible userland remedies, in the form of software packages, that are thought to be acceptable for generating randomness suitable for cryptographic use on most Unices are the Entropy Gathering Daemon (EGD) [egd] and the Pseudo Random Number Generator Daemon (PRNGD) [prngd]. Kernel-based devices are available in Linux [LinuxRandom], Solaris [MaierAndi] or [SUNWski], and OpenBSD [OpenBSDRandom].
FreeBSD offers a /dev/random. Intel Pentium III CPUs or newer come with a hardware-based random number generator [IntelRNG]. Linux 2.2.18 and newer makes it accessible as /dev/intel_rng [Garzik2000] with properly configured kernels. Please, please, *PLEASE* go to the trouble of making sure that you are using sources of randomness that consensus has deemed worthy of generating key material. If you use broken randomness I can guarantee that your crypto will be broken no matter how well designed the algorithm and protocol are, or how long your keys are. If you are going to the trouble of using crypto, why compromise the system for the attacker? Securing systems is about securing systems, not pretending to secure systems. As I've brushed over some cryptographic primitives above, I've made many statements which imply that the science of cryptography is absolute and that breaking ciphers is impossible. This is not the case. Many aspects of cryptography are only conjectured, not proven, to work. Cryptography utilizes complex algorithms which use BFNs (really big numbers) to maintain the integrity of data, to keep it confidential, or to authenticate what key was used to sign a message for an LFT (a really long time). It's also important to catch the nuance of the last phrase in the previous sentence. Keys are what are used in cryptographic operations, not people or computers. You can't really tell if a person encrypted something because humans use computers to perform the operations on their behalf. To make matters worse, keys are used on systems that you can't trust, systems which are notorious for having their security (if the designers even attempted to include security) obliterated regularly. So the reality is that we presume a key was used to perform an operation; we presume that a given operation was performed; we presume that a person was somehow involved and that this key is theirs.
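The Alice and Bobcheck exchange from the MAC section above illustrates this point about keys: a successful verification only proves that some holder of the shared key produced the tag. Here is a minimal sketch of the exchange (my own illustration, not from the article — the key and message values are hypothetical, and I use HMAC-SHA-256 via Python's standard hmac module for concreteness):

```python
import hashlib
import hmac

# Shared secret previously agreed upon by Alice and Bobcheck (hypothetical value).
key = b"alice-and-bobcheck-shared-secret"
message = b"meet at the usual place at noon"

# Alice computes the MAC and transmits (message, tag) across the hostile network.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Bobcheck recomputes the MAC over what arrived and compares the two tags.
# compare_digest avoids leaking timing information during the comparison.
expected = hmac.new(key, message, hashlib.sha256).digest()
unmodified = hmac.compare_digest(tag, expected)
print(unmodified)  # True: the message was not altered in transit
```

Note that the check says nothing about which person held the key — exactly the presumption discussed above.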
Continuing to delve into all the details of this quandary is outside of the scope of this article, but a recent publication [Schneier2000] explicates these issues in easy-to-understand terms. It's even probable that your pointy-haired boss can be somewhat security literate after reading it. I will resist saying something like there is a limit that your PHB would approach in trying to understand security/cryptography, because we have been discussing concepts that are built on math, and anyone who has taken Calculus 101 has dealt with limits and knows how inexcusably gruesome a pun that would be.

Holt Sorenson works for Counterpane Internet Security where he wrangles tuples of bits so his colleagues can get their work done. He's always serious and never jokes or laughs. When he is not surgically attached to computers he likes to hang out with his family and engage in frivolous pursuits that purportedly keep him out of trouble [Iterata1999].

This article is continued in An Introduction to OpenSSL, Part Two: Cryptographic Functions Continued.

This article originally appeared on SecurityFocus.com -- reproduction in whole or in part is not allowed without express written consent.
topological space David Roberts: How about mentioning Alexander Grothendieck’s notion of tame topology? I saw in the video of Scharlau’s talk that G. wrote a lot on this, but the manuscript is lost. Do we have any idea past a vague description? (in Recoltes et Semailles or La longue marche I think.) Todd Trimble: Quite a few people have thought long and hard about Grothendieck’s speculations on tame topology in his Esquisse d’un Programme. The approach with which I am most familiar comes from model theory, and falls under the rubric of “o-minimal structures” (the “o” standing for “order”). See the book Tame topology and o-minimal structures by van den Dries. A space which belongs to an o-minimal structure is a subspace of some Euclidean space $\mathbb{R}^n$ and turns out to be indeed nice (it admits nice triangulations for instance). In a sense this is more of a “nice categories” approach than a “nice spaces” approach, because there is no known global property which would express what it means for a space to be tame. That is, there are many examples of o-minimal structures, but (it is conjectured) there is no maximal o-minimal structure, therefore no overarching meaning of what it would mean for a space to be tame. Basically, an o-minimal structure $T$ is a collection $T_n \subseteq P(\mathbb{R}^n)$ which is closed under all first-order logic operations (e.g., complements, finite intersections, direct images under projections = existentially quantified sets, equality predicates, and the binary predicate $\lt$ on $\mathbb{R}$), and which satisfies the all-important o-minimality condition: the only sets belonging to $T_1$ are finite unions of points and intervals. The elements of $T$ may be called $T$-definable sets; the archetypal example is where $T$ is the collection of semi-algebraic sets (loci of polynomial inequalities) – cf. the Tarski-Seidenberg theorem. 
The thrust of the o-minimality condition is to forbid sets like $\mathbb{N} \subseteq \mathbb{R}$ from being $T$-definable, which (following Gödel, Turing, Robinson, Matiyasevich, and others) would open the door to all sorts of pathological sets being $T$-definable as well. So you could think of o-minimality as a kind of logical “monster-barring” device, which happens to be quite effective. See van den Dries’s book for a very illuminating discussion. There are other approaches to tame topology (such as via Shiota’s $\mathcal{X}$-sets), but I am less familiar with them. Tim: As I read the entry on nice topological spaces, it really refers to ‘nice categories’ rather than ‘nice spaces’! I have always thought of spaces such as CW-complexes and polyhedra as being ‘locally nice’, but the corresponding categories are certainly not ‘nice’ in the sense of nice topological space. Perhaps we need to adjust that other entry in some way. Toby: You're right, I think I've been linking that page wrongly. (I just now did it again on homotopy type!) Perhaps we should write locally nice space? or locally nice topological space? (you pick), and I'll fix all of the links tomorrow. Tim: I suggest locally nice space?. (For some time I worked in Shape Theory where local singularities were allowed so the spaces were not locally nice!) There would need to be an entry on locally nice. I suggest various meanings are discussed briefly, e.g. locally contractible, locally Euclidean, … and so on, but each with a minimum on it as the real stuff is in CW-complex etc and these are the ‘ideas’.
Toby: I thought that nice topological space was supposed to be about special kinds of spaces, such as locally compact Hausdorff spaces, whose full subcategories of $\Sp$ are also nice. (Sort of a counterpoint to the dichotomy between nice objects and nice categories, whose theme is better fit by the example of locally Euclidean spaces). CW-complexes also apply —if you're interested in the homotopy categories. Mike: Well, that’s not what I thought. (-: I don’t really know any type of space that is nice and whose corresponding subcategory of Top is also nice. The category of locally compact Hausdorff spaces, for instance, is not really all that nice. In fact, I can’t think of anything particularly good about it. I don’t even see any reason for it to be complete or cocomplete! I think it would be better, and less confusing, to have separate pages for “nice spaces” and “nice categories of spaces,” or whatever we call them. And, as I said, I don’t see any need to invent a new term like “locally nice.” When algebraic topologists (and, by extension, people talking about $\infty$-groupoids) say “nice space” they usually mean either (1) an object of some convenient category of spaces, or (2) a CW-complex-like space, between which weak homotopy equivalences are homotopy equivalences. Actually, there is a precise term for the latter sort: an m-cofibrant space, aka a space of the (non-weak) homotopy type of a CW complex. Toby: I thought the full subcategory of locally compact Hausdorff spaces was cartesian closed? Maybe not, and it's not mentioned above. But you can see that most of the examples above list nice properties of their full subcategories. And the page begins by talking about what a lousy category $\Top$ is. So it seems clearly wrong that you can't make $\Top$ a nicer category by taking a full subcategory of nice spaces. (Not all of the examples are subcategories, of course.) Mike: It’s true that locally compact Hausdorff spaces are exponentiable in $Top$. 
However, I don’t think there’s any reason why the exponential should again be locally compact Hausdorff. I guess you are right that one could argue that compactly generated spaces themselves are “nice,” although I think the main reason they are important is that the category of compactly generated spaces is nice. I propose the following:

1. Move the current content of this page to convenient category of spaces.
2. Create m-cofibrant space (I’ll do that in a minute).
3. Update most links to point to one or the other of the above, since I think that in most places one or the other of them is what is meant.
4. At nice topological space, list many niceness properties of topological spaces. Some of them, like compact generation, will also produce a convenient category of spaces; others, like CW complexes, will be in particular m-cofibrant; and yet others, like locally contractible spaces, will do neither.

Toby: I believe that the compact Hausdorff reflection (the Stone–Čech compactification) of $Y^X$ is an exponential object. Anyway, your plan sounds fine, although nice category of spaces might be another title. (I guess that it's up to whoever gets around to writing it first.) Although I'm not sure that people really mean m-cofibrant spaces when they speak of nice topological spaces when doing homotopy theory; how do we know that they aren't referring to CW-complexes? (which is what I always assumed that it meant.) Mike: I guess nice category of spaces would fit better with the existing cumbersomely-named dichotomy between nice objects and nice categories. I should have said that when people say “nice topological space” as a means of not having to worry about weak homotopy equivalences, they might as well mean (or maybe even “should” mean) m-cofibrant space. If people do mean CW-complex for some more precise reason (such as wanting to induct up the cells), then they can say “CW complex” instead.
Re: exponentials, the Stone-Čech compactification of $Y^X$ will (as long as $Y^X$ isn’t already compact) have more points than $Y^X$; but by the isomorphism $Hom(1,Y^X)\cong Hom(X,Y)$, points of an exponential space have to be in bijection with continuous maps $X\to Y$. Toby: OK, I'll have to check how exactly they use the category of locally compact Hausdorff spaces. (One way is to get compactly generated spaces, of course, but I thought that there was more to it than that.) But anyway, I'm happy with your plan and will help you carry it out.
Overview - DOT MULTIPLICATION - SET OF DOT CARDS

This two-binder set, Dot Skip Counting - Get Ready, Get Set, Go ... Multiply and Dot Multiplication - Multiplying and Beyond by author Kathleen Strange, makes the process up to and through multiplication simple. The strategies and skills are appropriate for various ages and abilities as an introduction or review and reinforcement of multiplication.

BINDER 1

Dot Skip Counting - Get Ready, Get Set, Go ... Multiply is the first binder in the set. This 150-page binder features seven units that begin with skip-count songs covering numbers 1-12 and progress to using the skip-count system to multiply. Binder includes a set of 78 colorful “Dot Cards;” activities and worksheets; teacher ideas; “hands-on,” “real-world” applications; and a set of reproducible Dot Cards.

BINDER 2

Dot Multiplication - Multiplying and Beyond is the second binder of the set. This 150-page binder contains eight units. The first unit reviews the dot multiplication process, and progresses to multiplication and division. Binder features a set of 78 colorful Dot Cards; activities and worksheets; teacher ideas; “hands-on,” “real-world” applications; and a section with questions in test-prep format. Includes a set of reproducible bingo cards and reproducible Dot Cards.

DOT CARDS

The main focus of each binder is the dot-card concept. Each binder contains 78 colorful Dot Cards, a critical, visual component intended for demonstrations, individual, or small group use. Cards show “dots” that equal two multiplication problems, both appearing on the cards. For example, “3 x 4” and “4 x 3” are shown on the same card. Students quickly learn to apply skip counting by 3’s or 4’s to solve the problem. Cardstock cards can be laminated for durability.

COMPONENTS

Each 150-page binder includes one set of Dot Cards, reproducible activities, parent letter, teacher ideas, and “real-world” applications. Binders sold separately or as a set.
An extra set of colorful Dot Cards is sold separately.
Alien Mathematics, Numbers, and Polynomial Centric Societies

Probably due to the influence of Plato, mathematics is widely conceived of as universal. In fact, it is widely accepted that the constant π would be equally well known to a reasonably advanced alien species as it is to us, albeit likely in a different base — an idea that is a recurring theme in both fiction and serious discussion about communication with an alien species. I’m sceptical. Not because I think that the value of π is subjective, but because it is not at all clear to me that aliens would share our abstraction of numbers. Two things have brought me to this view: reading about how humans count, and reading about the development of numbers. Experiment with how long it takes to count the number of dots on a page and you will notice an odd pattern: when the number of dots increases past three or four, the amount of time it takes to count them suddenly jumps, while accuracy falls and brain activity changes. (You can try the test in this BBC article.) While the small numbers can be almost instantly determined, with the larger ones people generally have to shift their focus on to small groups and increment a number in their head. This ability to rapidly recognize how many of something there are is called subitizing, and it is observed not just in adults, but in babies before they can speak and in non-Human animals (though the threshold at which subitization stops occurring varies between species). This evidence for an intrinsic biological root (almost a numerical equivalent of Chomsky’s linguistic innateness hypothesis) casts doubt on the view that any reasoning species would develop numeracy. However, evidence against some sort of universal numeracy extends well beyond mere theoretical arguments: despite the advantage of subitizing, there are some very real counterexamples to numeracy.
One need only look at the historical development of numbers: it was no trivial task for zero and negative numbers to be accepted as numbers, and people literally died over whether irrational numbers like e and π were numbers. And even today there are societies lacking our modern conception of numbers — the Piraha tribe lack words for precise numbers greater than two, and its members have difficulty working with even modest numbers. Still, it is reasonable to be sceptical, as many I describe this view to are, regarding whether a society could become advanced without numbers. Don’t we need them for trade? For Chemistry, Physics, Engineering and much of Biology? Where would computers be without numbers? What alternative could there possibly be? It is hard, of course, for me to come up with an alternative to numbers. Even if they weren’t, to some extent, hard-wired into my brain at birth, one of the most important parts of elementary education is the development of the numerical abstraction — it is ingrained deeply into our thought process so that we use it without consciously deciding to (and while I may not like how modern math education works, that ingraining is a good thing! Most of the utility and beauty of math is only available when it becomes part of the way you think rather than just something you know). One can easily imagine aliens with their own abstraction, as foreign to our thought-process as numbers are to theirs. However, while it is impossible for me to fathom how deeply alien an alien’s numerical abstraction alternative might be, it is possible to construct something. And so we might imagine a species of photosynthetic aliens with square bodies in a square environment. We shall dub them the Squariens. The Squariens spend their lives sitting in tightly packed grids, jumping up in the air and turning or flipping to face other neighbours.
They have little concern for the amount of food they must eat, but are deeply concerned with the symmetries of the square, to us the dihedral group of order 8. And just like we may occasionally muse about other bases, they occasionally spend their time imagining the symmetries of other shapes, of triangles and pentagons, and so on: other dihedral groups. Eventually the Squariens discover group theory — they find the integers mod n in the rotational subgroups. At some point, a Squarien mathematician thinks about the symmetries of something with infinitely many sides… and when they consider the rotational subgroup, they discover a group isomorphic to the integers! Except they don’t think of them like that, don’t think of them like we do: just like we write elements of dihedral groups in terms of numbers (what else is s¹r³?) they describe their integer-isomorphic group in terms of how they think of dihedral groups… But while these aliens might find our obsession with numbers odd (and our obsession with prime numbers even odder…) and we might find their obsession with symmetries strange, we’d still be able to understand each other quite easily. However, that is a rather silly example. And I’m afraid I can’t provide a better example of an alternative to numbers. However, I believe that I can provide a serious and satisfying alternative to polynomials. It is my hope that the reader might take this as a slight piece of evidence for the existence of alternatives to numbers. When I was younger, I remember wondering why we were so concerned with polynomials. Why did we care about things of the form $a + bx + cx^2$… rather than $a + b2^x + c3^x$…? At that age, polynomials won me over when I realized that I could ‘push’ them into arbitrary shapes by using higher degree terms to control the shape farther from zero.
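That 'pushing' can be made concrete with interpolation: a polynomial of degree n can be forced through any n+1 chosen heights. A quick sketch of my own (not from the post), using the Lagrange form:

```python
def interpolating_poly(points):
    """Return the unique polynomial (in Lagrange form) through the given (x, y) points."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)  # factor is 1 at xi, 0 at every other node
            total += term
        return total
    return p

# "Push" a cubic through four arbitrarily chosen heights.
p = interpolating_poly([(0.0, 1.0), (1.0, -2.0), (2.0, 0.5), (3.0, 4.0)])
print([p(x) for x in (0.0, 1.0, 2.0, 3.0)])  # hits every target: [1.0, -2.0, 0.5, 4.0]
```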
What I had discovered was that one can find polynomials arbitrarily close to any continuous function on a closed interval (for a large variety of meanings of close, in fact). But what I didn’t realize at the time was that this was very much non-unique to polynomials — for example, it is also true for things of the form $a + b2^x + c3^x$… (this follows easily from the Stone-Weierstrass Theorem). A deeper and more satisfying answer lies in Taylor’s Theorem. If we know the derivatives of a function at a point k, the natural way to approximate it is: $f(x) \approx f(k) + f'(k)(x-k) + \frac{f''(k)}{2!}(x-k)^2 + \dots$ A polynomial! So polynomials are a natural result of the idea of a derivative. And while I have nothing more than anecdotal evidence for this, I’d suggest that, while many people may not know the formalities of calculus or the word derivative, everyone innately understands the idea. Certainly, young children understand ideas like speed. And the ‘rules of differentiation’ can be told to you by a lay person, if you ask in the right manner (sum rule: speed of a person walking on a ship; chain rule: speed of a person in a movie when you fast-forward; and so on…). But what is a derivative? One could just say that it is the rate at which something is changing and be done, but let’s go a little deeper. The definition of a derivative is usually: $\frac{dy}{dx} = \lim_{h\to 0} \frac{f(x+h)-f(x)}{h}$ It’s the ratio between the change in y and the change in x. And so if the derivative is A, we write dy/dx = A, meaning that A is the ratio between dy (the change in y) and dx (the change in x). One may also write it in the ‘differential form,’ dy = Adx, meaning that the change in y is A times the change in x. This latter form leads to a natural way to describe the derivative: it, times the change in x, is the amount one needs to add to move forward. But there’s another interesting question: how much does one need to multiply by to move forward?
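Both questions can be answered numerically side by side — a sketch of my own (the helper names are mine, not standard): subtract-and-divide gives the familiar derivative, while divide-and-raise-to-the-power-1/h gives its multiplicative cousin. A straight line has a constant additive rate; an exponential has a constant multiplicative one:

```python
def derivative(f, x, h=1e-6):
    """Ordinary derivative: how much f ADDS per unit step, (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

def mult_derivative(f, x, h=1e-6):
    """Multiplicative derivative: how much f MULTIPLIES BY per unit step,
    (f(x+h) / f(x)) ** (1/h)."""
    return (f(x + h) / f(x)) ** (1.0 / h)

line = lambda x: 7.0 + 2.0 * x      # constant additive rate of change: 2
growth = lambda x: 5.0 * 3.0 ** x   # constant multiplicative rate of change: 3

print(derivative(line, 0.0), derivative(line, 10.0))                # both ~2.0
print(mult_derivative(growth, 0.0), mult_derivative(growth, 10.0))  # both ~3.0
```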
This question gives birth to what we now call multiplicative or geometric calculus (Wikipedia deleted the main article, sadly, but this one still has useful content). $f^*(x) = \lim_{h\to 0} \left(\frac{f(x+h)}{f(x)}\right)^{1/h}$ In many circumstances, multiplicative calculus is highly natural; for example, the decay of radioactive materials or the unconstrained growth of a bacterial colony have constant multiplicative
One of the things that really excites me about the possibility of Humanity contacting an Alien species (even if they are at a similar level of technology to us and are too far away for us to ever physically interact) is that it seems very likely that they will have a profoundly and inconceivably different perspective, and by extension abstractions, than us. I want to learn alien mathematics! And, if nothing else, I think that seeing another way of thinking will give us a lot of insight into our own. And on a similar note, I think a lot of the value of thought experiments like this one is the insight they can give us into our selves… alex Says: June 10, 2011 at 19:52 | Reply nah that’s a power series not a polynomial. The reason polynomials are so fundamental is because that’s what you get from adding and multiplying stuff together. • christopherolah Says: June 10, 2011 at 22:39 | Reply Sort of. If I approximate only to finitely many terms, it is a polynomial. The power series is the limit of a sequence of approximation polynomials. Your explanation of why polynomials are so fundamental is also valid, but it leaves a number of questions. For example, if it depends on what you start with in `adding and multiplying stuff together’ (what some would call a function algebra). In this case, it is the constant functions and the identity function. George G. Says: June 11, 2011 at 15:19 | Reply “chain rule: speed of person in a movie when you fast-forward” This is an extremely good explanation of the chain rule. I didn’t know it, and I’ve always found the chain rule to be slightly unintuitive, but not anymore. After reading your post, I tried to justify to myself, why do polynomials seem more fundamental to me than uh… exponential sums, or whatever you call them. I couldn’t think of any good reason and even the argument you considered (and sorta rejected) – connection to Taylor series doesn’t sound very persuasive. 
The best explanation of why polynomials are so important I could come up with is the one alex already gave: polynomials are easy to evaluate. To find the value of a polynomial at an arbitrary point, you only need to know how to add and to multiply, and these two operations are the easiest to carry out. At the same time, to evaluate an “exponential sum” you need to know how to raise integers to arbitrary real powers, which seems more time-consuming to me. (Try comparing computing 2.71828^2 and 2^2.71828. I am pretty sure I could find the value of the first expression without the use of a computer, but I am not sure what to do with the second one.) Here’s my attempt at describing an alien civilization with a different math: imagine aliens who are extremely good at mental computations. Let’s say that they can mentally compute anything my computer can: multiply two thousand-digit numbers almost immediately, find 1,000,000! in a matter of minutes, etc. These aliens would have no particular need for calculus. Instead, they would develop what we would call numerical methods. They would have no concept of real numbers; instead, they would use rationals. They would think of functions either in terms of lists, or in terms of computational algorithms. Instead of derivatives they would have finite differences, instead of differential equations – huge systems of linear equations. Their geometry would be entirely “pixel-based”. Eventually, they would have to introduce reals though, but for them it would be a strange, paradoxical abstraction.

• christopherolah Says: June 12, 2011 at 04:37 | Reply
> This is an extremely good explanation of the chain rule. I didn’t know it, and I’ve always found the chain rule to be slightly unintuitive, but not anymore.

Thanks! I was rather proud of myself when I came up with it. :) Though, I’m sure I’m not the first person to come up with it. I’ve been spending a lot of my time going over basic math and thinking about what it really means.
Basically, I think that one of the serious problems with modern mathematical education is that we become alienated from what the symbols represent… I’m planning to do a post on my results for single variable calculus soon.

> I couldn’t think of any good reason and even the argument you considered (and sorta rejected) – connection to Taylor series doesn’t sound very persuasive.

To be clear, I think that the Taylor series argument is a great reason for us Humans to use polynomials. They’re almost intrinsic to our way of thinking. Consider the familiar formula Δd = vᵢΔt + ½aΔt². It’s a polynomial describing a physical phenomenon, and it exists because we have the ideas of velocity and constant acceleration.

> (calculatory ease of Taylor Series)

I can see where you’re coming from. It’s a… dissatisfying answer (but this may reflect more on my desire for profound reasons than on reality), and I can’t help but wonder to what extent my difficulty in calculating floating point powers is because, unlike multiplying floating points, it wasn’t drilled into my head for years in elementary school.

> … They would have no concept of real numbers, instead, they would use rationals. They would think of functions either in terms of lists, or in terms of computational algorithms. Instead of derivatives they would have finite differences, instead of differential equations – huge systems of linear equations. Their geometry would be entirely “pixel-based”. Eventually, they would have to introduce reals though, but for them it would be a strange, paradoxical abstraction.

This is a very interesting idea. I’m rather sleepy at the moment and don’t think I can give an intelligent comment on it right now, but I’m going to sleep on it and get back to you.
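George's evaluate-with-only-add-and-multiply point is usually made precise with Horner's rule: a degree-n polynomial costs just n multiplications and n additions, with no powering at all. A small sketch of mine (the helper name is hypothetical):

```python
def horner(coeffs, x):
    """Evaluate a0 + a1*x + ... + an*x**n using only additions and multiplications."""
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# 2 + 3x + x^2 at x = 10: two multiplies, two adds.
print(horner([2.0, 3.0, 1.0], 10.0))  # 132.0
```

An "exponential sum" like a + b·2^x + c·3^x admits no such add-and-multiply-only scheme for non-integer x, which is exactly George's point about evaluation cost.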
I can imagine them developing the idea of real numbers very early on if they investigate geometry (because of π), but I can see them very much thinking of them as a limit of a sequence of rational numbers (as we typically construct them, but more explicit). They keep being more and more precise in their measurement of the area of a circle with radius one… And I agree that it would likely seem very strange, as it did for us. Regarding calculus, wouldn’t they have to have an idea of an infinitesimally small difference that they were approximating? I completely agree that their approach to differential equations would focus on numerical methods, though. With geometry, I can see them using pixels in any non-trivial scenario. What’s really interesting about the species you’re describing, though, is that I can see Humanity becoming them over the course of the next few decades as we get computer implants with direct interaction with our brains… Or even if computing just becomes so ubiquitous and part of the way we live that we’re essentially always using one and they become part of our thought process. What will that do to mathematics, I wonder? Will we lose interest in our present techniques for finding precise solutions in favor of numerical methods?

□ George G. Says: June 12, 2011 at 16:26 | Reply
“I’m planning to do a post on my results for single variable calculus soon.”
Understanding what symbols and formal operations mean on a gut level is 90% of the difference between a good mathematician and a bad one. I am looking forward to your future posts.
What slightly bothers me about the “Taylor series argument”, as you named it, is that saying “polynomials are the most fundamental and important functions of all, because they are used in that one method of approximation” is a little bit like saying “the sine function is the most fundamental of all, because it is used in Fourier series” or even “the gamma function is the most important, because it is used in [obscure identity #45.]” Taylor series are overwhelmingly important, but they are not *the* method of approximating something, and not even the most convenient. They require lots of smoothness, break down at the first sign of a singularity, real or imaginary, and then there are always such non-analytic monsters as exp(-1/x^2). Fourier series are in many ways more natural and more powerful. Besides, people started working with polynomials long before they came up with Taylor series. “I can’t help but wonder to what extent my difficulty in calculating floating point powers is because, unlike multiplying floating points, it wasn’t drilled into my head for years in elementary school.” I was under the impression that all the known algorithms for raising numbers to real powers are much more convoluted than the standard multiplication algorithm, but I am not sure. I thought you would know, since you work with computers and stuff. “I can imagine them developing the idea of real numbers very early on if they investigate geometry (because of π) but I can see them very much thinking of them as a limit of a sequence of rational numbers” Maybe my aliens wouldn’t feel the need to even think of any limits; they would be only interested in guaranteeing a certain high number of correct digits of the result of a computation.
They’d say: “3.1415926, what more to ask for?” Yes, eventually they would be forced to introduce reals, if only to be able to prove the correctness of their difference schemes, but they would think of them as an abstract, theoretical construct: “It’s like a regular number, but we will pretend that the digits go on forever, and even though we don’t know what those digits are we will act like we do”. “Will we lose interest in our present techniques for finding precise solutions in favor of numerical methods?” You are aware that this is already happening, right? As computers become faster and faster, and numerical methods improve more and more, many people start to wonder if there is any point in trying to solve equations analytically, when you can open MatLab and find the numeric solution in a matter of seconds, especially in cases when you only need the precise solution in order to turn it into a column filled with numbers to start with. Mathematics has been human-oriented for 2000 years, and during the last 50 it has started to become machine-oriented. I can’t say I am not a little bit frightened. ☆ christopherolah Says: June 24, 2011 at 12:53 | Reply > What slightly bothers me about the “Taylor series argument”, as you named it, is that saying “polynomials are the most fundamental and important functions of all, because they are used in that one method of approximation” is a little bit like saying “the sine function is the most fundamental of all, because it is used in Fourier series” or even “the gamma function is the most important, because it is used in [obscure identity #45.]” Taylor series are overwhelmingly important, but they are not *the* method of approximating something, and not even the most convenient. They require lots of smoothness, break down at the first sign of a singularity, real or imaginary, and then there are always such non-analytic monsters as exp(-1/x^2). Fourier series are in many ways more natural and more powerful.
So, first of all, I should make sure that we both understand I’m not making an argument for polynomials being universal in some abstract Platonic way. I mean it in a very Human way. You’re right that there isn’t really something mathematically superior about Taylor approximations, and in fact in many ways Fourier series are superior (tolerant of discontinuity on measure-zero sets, etc.). But they aren’t part of the way people think in the way the polynomials for Taylor series are. We naturally think in terms of derivatives, which are just a factorial away from being the coefficients of the Taylor series approximation polynomial. If we naturally abstracted into sine Fourier series coefficients, I’d argue that they were deeply fundamental too. (And while they aren’t natural in the same way Taylor polynomials are, they’re definitely more natural than Legendre Polynomials, speaking of Fourier Series. It has more to do with the Fourier Transform though, and its connection to our hearing.) > I was under the impression that all the known algorithms for raising numbers to real powers are much more convoluted than the standard multiplication algorithm, but I am not sure. I thought you would know, since you work with computers and stuff. I don’t usually do anything that low level and hadn’t really thought about it. The closest I got was when I was younger and spent weeks of Summer vacation drawing a simple processor out of logic gates on paper. But I never implemented anything higher up the hyperoperation chain than multiplication, so I don’t know much about how one implements it. Some research seems to suggest that you’re right, however. >You are aware that this is already happening, right? As computers become faster and faster, and numerical methods improve more and more, many people start to wonder if there is any point in trying to solve equations analytically Computers are definitely becoming more important, but I haven’t been doing math long enough to actually see trends.
I was hoping that, as would seem natural, we were deemphasizing the importance of calculation and focusing instead on understanding. >Mathematics has been human-oriented for 2000 years, and during the last 50 it has started to become machine-oriented. I can’t say I am not a little bit frightened. I’m actually not sure how to respond to this, even after spending a number of days reflecting on it. I’m not sure that what you say is happening is what is happening, and I’m not sure that what is happening is a bad thing. Being able to calculate is essentially useless if you don’t understand what you’re calculating. (And understanding is deeply tied to knowing how to find exact solutions.) And, short of the singularity, computers aren’t going to be doing that for us. It just means that humans aren’t trying to be calculators anymore. And that doesn’t seem too bad. But there’s definitely an extent to which culture misunderstands mathematics and doesn’t see anything beyond calculating. So it could go with society abandoning real mathematics as obsolete… I don’t know. ○ George G. Says: June 24, 2011 at 13:28 | Reply I thought about these things for a little while, and it occurred to me that there is indeed something very special about the three classes of the most frequently used functions: polynomials, exponential and trigonometric functions – they are tied to the operation of differentiation, each in their own way. Polynomials form the kernel of the (iterated) differentiation operation, and this is what makes Taylor series possible, and exponentials and combinations of sines are eigenfunctions of the iterated differentiation; that’s why they are so important for differential equations and Fourier series. So, I think every alien civilization that uses our version of differentiation must also be very fond of these three classes of functions, but it may not be so if they use, for example, the “geometric calculus” you are such a big fan of, or some other modification like that.
“But they aren’t part of the way people think in the way the polynomials for Taylor series are. We naturally think in terms of derivatives which are just a factorial away from being the coefficients of Taylor series approximation polynomial.” I agree with the basic idea, but if I wanted to play Devil’s advocate I’d say: “polynomials are also one factorial away from the coefficients of [obscure formula #56], and [obscure formula #56] uses the gamma function, hence the gamma function is the most important function of all! QED.” The main reason I think that Fourier series (trigonometry based or your-orthogonal-system-of-choice based) are more natural than Taylor series is because they basically generalize the idea of coordinates to the spaces of functions. Expanding a function in a linear combination of some other functions forming an orthogonal system is basically the same as writing a vector as a linear combination of vectors forming the basis. Anonymous Says: June 28, 2011 at 14:41 | Reply I think you’re just looking at it too much from a calculus perspective. Linear algebra, ring theory, field theory, Galois theory, algebraic geometry even, all involve polynomials. Polynomials are more immediate from the basic operations of multiplication and addition than what you are proposing. Whether there are polynomial-free fields worth studying that are being neglected somehow by the idiosyncratic way in which humans view the world, who knows though. ACW Says: August 8, 2011 at 21:58 | Reply Your tale of a culture that is far more interested in the order-8 dihedral group than it is in the integers seems farfetched; I think most technological cultures could not help discovering the integers. That having been said, it is uncanny that you picked D8: the Warlpiri people of central Australia do indeed have a far more advanced nomenclature concerning D8 than they do for Z.
They have names for all the elements of D8 and can certainly compose them effortlessly in their heads, but they do not have a native word for 4. (They aren’t a very technological culture, though.) I think I’m going to let you have the pleasure of doing the research to find out why D8 is of interest to them. • christopherolah Says: August 8, 2011 at 22:17 | Reply That’s fascinating. I will indeed have to research that :) Douglas Gray Says: May 3, 2012 at 23:11 | Reply Here is a quote from an article about a mathematical savant, Daniel Tammet: “Daniel Tammet is able to see and feel numbers. In his mind’s eye, every digit from zero to 10,000 is pictured as a 3-dimensional shape with a unique color and texture. For example, he says, the number fifteen is white, yellow, lumpy and round. Synesthesia occurs when regions of the brain associated with different abilities are able to form unusual connections. In most people’s brains, the recognition of colors, the ability to manipulate numbers, or language capacity all work differently in separate parts, and the information is generally kept divided to prevent information overload. But in synesthetes, the brain communicates between the regions. Tammet doesn’t need a calculator to solve exponential math problems such as 27 to the 7th power — that’s 27 multiplied by itself seven times — he’ll come up with the answer, 10,460,353,203, in a few seconds. Tammet visualizes numbers in their unique forms and then melds them together to create a new image for the solution. When asked to multiply 53 by 131, he explains the solution in shapes and textures: “Fifty-three, which is round, very round…and larger at the bottom. Then you’ve got another number 131, which is longer a little bit like an hourglass. And there’s a space that’s created in between. That shape is the solution. 6,943!” Perhaps a better understanding of this phenomenon will shed some light on different ways of doing mathematics.
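An aside on the observation earlier in the thread that polynomials, exponentials and sines are each tied to differentiation: this can be checked numerically with finite differences. This is just an illustrative sketch (the specific functions, points and step sizes are arbitrary choices, not from the original discussion):

```python
import math

def d1(f, x, h):
    # central first difference, approximates f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h):
    # central second difference, approximates f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

k, x = 2.0, 0.7

# exp(kx) is an eigenfunction of D with eigenvalue k: D f = k f
print(d1(lambda t: math.exp(k * t), x, 1e-5), k * math.exp(k * x))  # both ~ 8.11

# sin(kx) is an eigenfunction of D^2 with eigenvalue -k^2
print(d2(lambda t: math.sin(k * t), x, 1e-4), -k ** 2 * math.sin(k * x))  # both ~ -3.94

# a cubic lies in the kernel of the 4th finite difference (exactly, for any h)
p = lambda t: t ** 3 - 2 * t + 1
h = 0.5
fourth = p(x + 2 * h) - 4 * p(x + h) + 6 * p(x) - 4 * p(x - h) + p(x - 2 * h)
print(fourth)  # ~ 0 (up to floating-point rounding)
```

The last check is the discrete analogue of "polynomials form the kernel of iterated differentiation": the fourth difference annihilates any cubic identically, not just approximately.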
General Question: I have a vector physics problem, it's for tonight's homework so could someone help me A.S.A.P? Okay, so the problem is: A boat is traveling in the ocean, 40 Newtons @ 45° North of East, the wind is blowing against it 75 Newtons @ 75° South of East, the current is moving 120 newtons due West. How many newtons and what direction and degrees is the boat moving… thanks :) Observing members: 0 Composing members: 0 9 Answers Read your book. It tells you how to solve this kind of problem. What part are you confused about? Sounds pretty straightforward to me. What, specifically, are you having trouble with? I just picked up the Vector physics for dummies book, be right with you in like ten minutes… Treat each force as a vector, breaking them each into their x (east-west) and y (north-south) coordinates. Then sum the vectors by adding up the x and y components. Once you have that, find the magnitude of the resultant vector and use the inverse tangent function to find the angle from the x-axis. Draw a diagram to show the direction and magnitude of each force. From there it is simple vector addition. We are not here to do your homework for you – if you don’t understand it, your teacher will be able to explain it perfectly well. There is no point to pretending you can do something you can’t. @Ivan There’s no point in breaking it into x and y components for this, adding head to tail with trig is adequate. Either way works equally well, I just happen to like component form better. @Ivan Fair enough. I just don’t like adding extra steps. Take each vector and take its magnitude (amount of Newtons) and then multiply by the cosine of its angle (ex: cos(45)). This is the X component. Then take its magnitude and multiply by the sine of its angle. This is the Y component. Do this for each of the 3 forces and then simply add up the X and Y components.
Now square each number (X and Y component) and add them up, then take the square root. This is the final magnitude (amount of newtons the boat is actually moving). Then take the inverse tangent of the Y value over the X value (tan^-1(Y/X)) and this is the angle the boat is actually moving in. For everyone who is telling him not to ask homework questions on here, just don’t answer if it bothers you. Also, I don’t even think fluther generally has a problem with homework questions. thank you to the people who answered, and by the way, I don’t want people to do it for me, I just need help grasping the new concept. Luckily, since someone did try to help answer my question, I youtubed the head-to-tail method and breaking down x and y components, and I understand it now, so more or less I got what I needed, and I want to let the people know who helped me thanks for their time! This question is in the General Section. Responses must be helpful and on-topic.
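The component method described in the answers can be sketched in a few lines of Python (the numbers come from the problem statement; angles are measured counterclockwise from east, so south-of-east is negative):

```python
import math

# (magnitude in newtons, direction in degrees from east, counterclockwise positive)
forces = [
    (40.0, 45.0),    # boat: 45° north of east
    (75.0, -75.0),   # wind: 75° south of east
    (120.0, 180.0),  # current: due west
]

# sum the x (east) and y (north) components of each force
x = sum(m * math.cos(math.radians(a)) for m, a in forces)
y = sum(m * math.sin(math.radians(a)) for m, a in forces)

magnitude = math.hypot(x, y)             # length of the resultant vector
angle = math.degrees(math.atan2(y, x))   # direction, from east

print(round(magnitude, 1), round(angle, 1))
# ≈ 84.7 N at about -148.6°, i.e. roughly 31° south of west
```

Note the use of `atan2` rather than a bare inverse tangent: it keeps the correct quadrant, which matters here because both components come out negative.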
the definition of isosceles

isosceles (aɪˈsɒsɪˌliːz)
Origin: 1545–55; Late Latin < Greek isoskelḗs with equal legs, equivalent to iso- + skélos leg + adj. suffix

World English Dictionary:
1. (of a triangle) having two sides of equal length
2. (of a trapezium) having the two nonparallel sides of equal length
[C16: from Late Latin, from Greek isoskelēs, from iso- + skelos leg]
Collins English Dictionary - Complete & Unabridged 10th Edition 2009 © William Collins Sons & Co. Ltd. 1979, 1986 © HarperCollins Publishers 1998, 2000, 2003, 2005, 2006, 2007, 2009

American Heritage Science Dictionary:
isosceles (ī-sŏs'ə-lēz')
Of or relating to a geometric figure having at least two sides of equal length.
The American Heritage® Science Dictionary Copyright © 2002. Published by Houghton Mifflin. All rights reserved.

Example sentences
Draw a circle in the center and then five lines from the center to the corners to create five isosceles triangles.
The figure shows a row of six solid white isosceles triangles.
Ask students to find two right angle isosceles triangles and predict what will be formed when they slide them together.
They will then find measures of the angles using the properties of isosceles and right triangles.
The atoms in a water molecule (two hydrogen and one oxygen) are arranged at the corners of an isosceles triangle.
When students work with tangrams, two congruent isosceles right triangles can be put together to compose a square.
False: All isosceles triangles are also equilateral triangles.
Most of the shapes of elementary geometry relate to the rectangle, and to the isosceles triangles generated by its diagonals.
The nose cone triangle can either be an isosceles or equilateral triangle.
Castle Point, NJ New York, NY 10016 GRE, GMAT, SAT, NYS Exams, and Math ...I specialize in tutoring math and English for success in school and on the SAT, GED, GRE, GMAT, and the NYS Regents exams. Whether we are working on high school proofs or GRE vocabulary, one of my goals for each session is to keep the student challenged,... Offering 10+ subjects including geometry
On the 12th Day of Christmas… A KitchenAid Stand Mixer Giveaway! (Winner Announced) UPDATE: The winner of the special edition KitchenAid mixer is: #154 – Miss Christine: Her comment on being Brown Eyed Baker fan on Facebook was the winner. What she’s baking this holiday season: “I am baking four recipes for holiday parties and two for the holidays! I I would LOVEE, to have that beauty in my kitchen!” Congratulations Christine! You should have already received an email from me; make sure you reply with your mailing address so I can get your new KitchenAid mixer out to you! Thanks everyone for entering, and thank you for making these 12 Days of Giveaways so much fun for me! Welcome to Day 12 of the “12 Days of Giveaways”! I’m totally bummed that today is the last day of the giveaways; I’ve had so much fun playing Santa! But, I wanted to end the twelve days with a big ol’ bang, and thought that this mixer fit the bill. I hope you all agree :) Not only is this a KitchenAid stand mixer (my kitchen couldn’t survive without one!), but it has one of those oh-so-pretty glass bowls and, more importantly, sales of this special edition model benefit Susan G. Komen Cook for the Cure®’s effort to end breast cancer. So not only do you get a shiny (and gorgeous!) new toy, but I also get to donate to a good cause. It’s a holiday win-win! Giveaway Details The winner will receive one (1) KitchenAid Artisan Susan G. Komen Stand Mixer. How to Enter To enter to win, simply leave a comment on this post and answer the question: “How many different things are you planning to bake this holiday season?” You can receive up to FOUR additional entries to win by doing the following: 1. Subscribe to Brown Eyed Baker by either RSS or email. Come back and let me know you’ve subscribed in an additional comment. 2. Follow @browneyedbaker on Twitter. Come back and let me know you’ve followed in an additional comment. 3. 
Tweet the following about the giveaway: “Day 12 of “12 Days of Giveaways” from @browneyedbaker: Win a special edition KitchenAid Artisan Stand Mixer! http://wp.me/p1rsii-3Wz”. Come back and let me know you’ve Tweeted in an additional comment. 4. Become a fan of Brown Eyed Baker on Facebook. Come back and let me know you became a fan in an additional comment. Deadline: Today (Friday), December 16, 2011 at 11:59pm EST Winner: The winner will be chosen at random using Random.org and announced here tomorrow. If the winner does not respond within 24 hours, another winner will be selected. Disclaimer: This giveaway is sponsored by Brown Eyed Baker. GOOD LUCK!! 6,006 Responses to “On the 12th Day of Christmas… A KitchenAid Stand Mixer Giveaway! (Winner Announced)” 1. At least 4 things but possibly 6… Or 7. 2. I’m now following you on twitter too! 4. I subscribe to your feed via Google Reader. 5. I follow you on twitter @sharonjo2 6. I’ll bake at least a cake, pie and cookies. At least three 8. I will be making fudge, caramel, and three types of cookies. 9. I follow you on facebook. (Sharon O.) 10. I followed you on twitter 13. I don’t plan on baking anything, my wife is! 14. I’ve got about twelve recipes lined up….happy baking! 16. I subscribe to your emails. 17. I follow you on Facebook. 18. I’ll be baking bread and orange chocolate chunk cake 19. Nothing this year. We are traveling to the west coast and cannot imagine trying to fly something in an all day trip! Generally I am the host of Christmas and bake oodles of things. A different Christmas for me 20. I’ve already baked some cookies and I’m not really sure what else I’ll be baking for Christmas. Cinnamon rolls, maybe? 22. just subscribed via e-mail! 23. I subscribe through Google. 24. Wow! I will be baking a plethora of various goodies for the holidays! 27. ZERO! I’m visiting family and not cooking or baking a thing! 31. oooh love that color, and the cause! 
baked 3 things today, at least a few more on the list for tomorrow! 33. I just became a fan on FB! 34. I have baked 12 different things today and that was one day so I’m really not sure how many more I will get to… 35. I will be baking as often as I can! 36. Hi BrownEyedBaker, Love your site and recipes! I’m a fan on facebook so I can keep up with new goodies to experiment with! I’ve never considered using a stand-up mixer, because they are pricey! But here is a great opportunity to possibly own one, that will accompany me in making whipped shortbreads, butter tarts, and classic egg nog for christmas! 37. So many things! Including christmas cookies, cheesecake, and persimmon pudding! 39. Planning to bake at least 5 different things and hopefully not gain 5 pounds in the process of “taste testing” 45. Definitely two (cookies and brownies), but probably more now that I’m on break for three weeks! 47. I have no idea how many things I will bake, but it will be a lot! 48. I subscribed to your emails! 49. Where do I start ?! Cookies for sure and want to try a Pavlova if I win a stand mixer ! 50. Cookie bake with friends tomorrow, and more baking on Sunday. Maybe a good 10 different things! 51. I am subscribed to your emails! 53. Just subscribed by email! 54. Just followed you on twitter! 58. I follow you on facebook. 60. Subscribed via email too ! 62. I’m Baking Pannettone, almond biscottis and sugar cookies !! Yummm 63. subscribed to the RSS feed as well 65. this is soo pretty!! I plan on making 6 diff. items give or take. I love muffins! 67. Just liked you on facebook! 68. i’ll be making 4 different kinds of cookies 69. Following you on twitter. 71. also gave you a shout out on Twitter (tweeted the contest) 72. i bake cookies all the time! om nom nom!! 74. I am now a new fan of your FB page! Love the pic of your dog! 75. This coming week, I plan to make a different type of cookie on each day of it! That way my holiday cookie plates will be plentiful! 77. 
i tweeted about your giveaway. 78. One kaluha cake and two batches of cookies. 79. I have done my tweeting, entered, liked, etc. Hope to win… it is my birthday today 83. 3….your recipe for chocolate dipped macaroons (a favorite!), monster cookies, and peanut butter blossoms with dark chocolate kisses. Thanks for your lovely blog and happy holidays! 84. Definitely cooking some pies and cookies.. not sure how many yet! 85. Already baked 6 things, still have 10 or so left!
Complex Maths - Measurement May 20th 2008, 04:30 PM #1 May 2008 Complex Maths - Measurement As part of my Residential Drafting course I'm required to complete Complex Mathematical concepts, which I'm really struggling with. Your answer is really appreciated: Q: A mass rests on an area of 80mm2 and exerts a force of 320 kg m/s2. Express the pressure exerted on this area in kN/cm2. $P = \frac{F}{A} = \frac{320~N}{80~mm^2} = 4~\frac{N}{mm^2}$ Now to change the unit: $\left ( \frac{4~N}{1~mm^2} \right ) \cdot \left ( \frac{1~kN}{1000~N} \right ) \cdot \left ( \frac{10~mm}{1~cm} \right )^2$ (Note that the last factor is squared!) Fantastic ... thanks for your quick reply! May 20th 2008, 04:41 PM #2 May 20th 2008, 06:10 PM #3 May 2008
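The unit conversion worked out above can also be done mechanically by carrying the conversion factors as plain numbers. A small sketch of the thread's arithmetic:

```python
force_n = 320.0   # force in N (kg·m/s²)
area_mm2 = 80.0   # area in mm²

# pressure in the original units
p_n_per_mm2 = force_n / area_mm2          # 4 N/mm²

# unit factors: 1 kN = 1000 N, and 1 cm = 10 mm so 1 cm² = 100 mm²
p_kn_per_cm2 = p_n_per_mm2 * (1 / 1000) * 10 ** 2

print(p_kn_per_cm2)  # 0.4, i.e. 0.4 kN/cm²
```

The key point, as the answer notes, is that the length factor gets squared because the area unit is squared.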
On the 4th Day of Christmas… A HUGE Cookbook Giveaway! (winner announced) UPDATE: The winner of the set of 10 cookbooks is: #4,156 – Danielle M: “Baking Illustrated is my favorite this year! This giveaway is awesome! I am also a cookbook addict!” Congratulations, Danielle! Be sure to reply to the email you’ve been sent, and I’ll get your cookbooks shipped out to you! I am a certified cookbook addict; I can’t help myself when it comes to buying new one. I eat up the “best of” lists, the James Beard and IACP winners, and am always adding to my wish list. As a result, I have more than 130 cookbooks lining my shelves. Each has a special place there, but some (both old and new) are hands-down favorites. It was an excruciating task, but I narrowed down my collection to 10 of my most favorite cookbooks, and am giving away copies of all 10 to one lucky reader! Continue reading below for details on how to enter! One (1) winner will receive one (1) copy of each of the following cookbooks: To enter to win, simply leave a comment on this post and answer the question: “What’s your favorite cookbook from 2012?” You can receive up to FIVE additional entries to win by doing the following: 1. Subscribe to Brown Eyed Baker by either RSS or email. Come back and let me know you’ve subscribed in an additional comment. 2. Follow @thebrowneyedbaker on Instagram. Come back and let me know you’ve followed in an additional comment. 3. Follow @browneyedbaker on Twitter. Come back and let me know you’ve followed in an additional comment. 4. Become a fan of Brown Eyed Baker on Facebook. Come back and let me know you became a fan in an additional comment. 5. Follow Brown Eyed Baker on Pinterest. Come back and let me know you became a fan in an additional comment. Deadline: Friday, December 7, 2012 at 11:59pm EST. Winner: The winner will be chosen at random using Random.org and announced at the top of this post. If the winner does not respond within 48 hours, another winner will be selected. 
Disclaimer: This giveaway is sponsored by Brown Eyed Baker. Good Luck!! 5,100 Responses to “On the 4th Day of Christmas… A HUGE Cookbook Giveaway! (winner announced)” 2. I LOVE cookbooks! I read them like novels!! 3. I follow you on Pinterest! 4. I follow you on Facebook! 5. I am an email fan of yours! 6. My favorite cookbook of 2012 is: Regali golosi by Sigrid Verbert! It has been translated in French and Polish. It’s amazing the good gifts ideas it contains. There are also tips on how to package cookies and jars in an original way! And the recipes I’ve tried are delicious, the procedure is super simple! 7. hedy goldsmith’s baking out loud! 8. I also follow you on Pinterest! 9. My favourite cookbook of 2012 would be Smitten Kitchen 10. I love the recipes that I’ve tried from the Feed Zone Cookbook! 11. I follow you on pinterest 12. I subscribe to your blog! 13. My favorite is Martha Stewart’s Baking Handbook. I subscribe by email. 14. I am a big fan of Ina and the Barefoot Contessa. She and her husband are so adorable on TV! 15. My favorite cookbook this year is The Pioneer Woman Cooks. 16. i love martha stewart bakings cookbook 17. I follow You on Pinterest. 18. haven’t purchased a cook book this year, unfortunately:( 19. I subscribe to our emails. 20. I subscribe to The Brown Eyed Baker by email. 21. I subscribe to your Emails. 24. Jerusalem! or smitten kitchen. both so beautiful! 25. i am also a subscriber via emails 26. Thanks for giving a chance to win… 27. I’m following you on Twitter 28. I’m following you on Pinterest 29. I like your page on Facebook 31. My favorite cookbook of 2012 is The Pioneer Woman Cooks. 32. I subscribe to your emails. 33. I love all cookbooks, so will choose the one I’m currently looking at. “Cooking Italian with the Cake Boss by Buddy valastro 35. I think the only new cookbook I have is the newer Pioneer Woman one. I honestly have not even made anything out of it, but the photos are beautiful. 
I am a cookbook addict though, and would love to have these! 36. That’s a tough one..I think my count is up to about 20 this year 40. I follow you on Facebook. 43. I follow you on Pinterest. 44. I follow you on pinterest 45. I subscribe to your emails 47. My favourite has been the Jamie Oliver 15 minute meals book, I’ve already cooked so many recipes from it. 48. And I subscribe to your blog 49. I subscribe to you in Google Reader! 50. I follow you on Pinterest! 51. I didn’t buy any ‘new’ cookbooks this year unless you consider that I got ‘My Bread’ for my husband which he LOVES!!!!!!! I’m the type to buy one, make 1 or 2 recipes hen it sits on the shelf looking pretty 52. My favourite cookbook from 2012 is Nigellissima by Nigella Lawson 53. This is a great giveaway because I haven’t read any new cookbooks this year! I read a ton of baking and cooking blogs – I think it’s time to try a proper cookbook! 54. I’m already following you on instagram 56. It’s funny…I used to buy cookbooks like like crazy until I found cooking blogs. Yours was that first hen I made your coconut chocolate chunk blondies which are always a hits. The baked cookbook sounds fantastic. I would love to win that. 57. The ‘Back in the Day’ cookbook! 58. Favorite cookbook is Joy the Baker. I had my mom go to her book signing and mail me the book. Now my mom is hooked on her! 61. Aaaand a pintrest subscriber! 62. I’ve liked you on facebook 63. I haven’t bought a 2012 cookbook but I’ve had my eye on Ina Garten’s. 64. I subscribe via email. I also follow you on Pinterest and twitter. 65. I’ve subscribed to your e-mail feed 66. Joy the Baker is always a fun read! 67. I’m following you on Pinterest 68. I think my favorite cookbook of the year is The Smitten Kitchen Cookbook by Deb Perelman, which I got a few months ago, but I’m still awaiting two on my holiday wishlist- Bouchon Bakery by Thomas Keller and Pure Vanilla by Shauna Sever. So many great new books to chose from! 
I would love to add your top 10 to my collection… thank you, Michelle, for your generosity in running this 70. I’m subscribed via RSS feed! 71. I follow you on Pinterest! 73. I’m a collector of cookbooks too, but mine pales in comparison to your collection. My favorite cookbook this year is Desserts in Jars. So fun and lots of cute/clever dessert ideas. This is an awesome giveaway…I WANT!! 74. No question my favorite cookbook of this year is the Smitten Kitchen cookbook. Great recipes, great pictures. 75. Fav cookbook is Grandma’s Secret 52 Sunday Recipes 78. I’m really loving the Picky Palate cookbook! 81. I follow you on instagram 82. To be honest most of my recipes these days come from blogs but I would love a great new cookbook! 87. I follow you on Pinterest 88. My latest book is “Chocolates and Confections” by Peter P. Greweling. Really like it too. 90. I subscribe to your email 92. I love the cookbook Well Fed. 94. Now that I think about it, I don’t think I bought or received a new cookbook this year! I have gotten a lot of cooking magazines, but I’d love this amazing cookbook package! 95. I subscribe to Brown Eyed Baker by email 96. I subscribe to your RSS feed. 98. I haven’t picked up a cookbook in a while- my latest resource has been blogs like this one. Keep up the good work! I really want to get a good look at David Lebovitz’s Perfect Scoop, though. 99. I subscribe to you via email! 100. Discovered The Georgian Feast (Darra Goldstein) after a vacation there.
{"url":"http://www.browneyedbaker.com/2012/12/06/on-the-4th-day-of-christmas-a-huge-cookbook-giveaway/comment-page-4/","timestamp":"2014-04-17T21:29:31Z","content_type":null,"content_length":"124172","record_id":"<urn:uuid:f006ac76-965e-4974-9f8d-3852adad70e2>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
June 26 - June 30, 2006
Visualizing Functions Summary
Monday - Friday, June 26 - June 30, 2006

Matt Bracher
Matthew Carpenter
Douglas Lutz
Gregory Monson
Craig Morgan
Rebecca Neighborgall
Monte Saxby
Mario Shaunette

Monday, June 26: We had Rita Kabasakalian, Harvey Keynes, and Gonzalo Riera from the DIPD program join us today. After introductions, we dove into an activity involving graph complexity. We answered open-ended questions related to three applications of graphs: networks, routes and scheduling. This gave us enough mathematical background to define a notion of graph complexity. Specifically, participants were asked to define a function that assigns nonnegative numbers to graphs; higher numbers are interpreted as more complicated graphs. At the end of the two hours, four sample graphs were presented. Each small group was asked to come up with functions that could be used to explain why one of the example graphs is the most complicated of the four.

Group A: Monte, Doug
Group B: Harvey, Rita, Becca, Beverly
Group C: Craig, Matt C., Greg
Group D: Mario, Matt B.

Tuesday, June 27: During the 90 minutes, we continued our investigation of graph complexity from yesterday and each small group presented its ideas to the rest. The complexity measures that came up were the diameter, number of distinct vertex degrees, and measures related to the number of edges in the graph. In the last 30 minutes, we discussed the reason why we pursued the graph complexity activity, briefly looked at the main ideas from the "Habits of Mind" article by Cuoco, Goldenberg and Mark, and described the purpose of the working group - to come up with some sort of product related to functions that could be used in a classroom. Each participant was asked to cull the PCMI web site for ideas from previous years and to come to the next meeting ready to put forth a few ideas, no matter how well formed.

Thursday, June 29: We spent the first hour brainstorming about possible ideas for projects.
After the first hour, small groups of individuals formed to further explore different ideas before committing to any project plans. Beverly, Doug, Mario, Monte worked with probes. Matt C. surfed the web to look for ideas. Craig, Greg, Matt B., Becca, and Darryl discussed functions on things other than numbers.

Friday, June 30: Darryl had organized the brainstorming from Thursday into three big ideas:
• know what functions are
• modeling with functions
• relate symbolic expression for function to graph

The group then expressed which idea they felt most attached to. Some found it difficult to commit exclusively to one. Based on this, three groups - Mario and Greg; Matt C and Doug; Matt B, Becca, Craig and Monte - were formed to come up with possible projects they could pursue. Towards the end, we met as a whole group to debrief.

© 2001 - 2013 Park City Mathematics Institute
IAS/Park City Mathematics Institute is an outreach program of the School of Mathematics at the Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540
Send questions or comments to: Suzanne Alejandre and Jim King
With program support provided by Math for America
This material is based upon work supported by the National Science Foundation under Grant No. 0314808. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
{"url":"http://mathforum.org/pcmi/hstp/sum2006/wg/functions/journal/week1.html","timestamp":"2014-04-19T17:39:45Z","content_type":null,"content_length":"5050","record_id":"<urn:uuid:c2fbe51e-4143-44a7-b570-0e7563189de6>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
[R] help comparing two median with R

Frank E Harrell Jr  f.harrell at vanderbilt.edu
Wed Apr 18 18:48:01 CEST 2007

Cody_Hamilton at Edwards.com wrote:
> Has anyone proposed using a bootstrap for Pedro's problem?
> What about taking a bootstrap sample from x, a bootstrap sample from y, take
> the difference in the medians for these two bootstrap samples, repeat the
> process 1,000 times and calculate the 95th percentiles of the 1,000
> computed differences? You would get a CI on the difference between the
> medians for these two groups, with which you could determine whether the
> difference was greater/less than zero. Too crude?
> Regards,
> -Cody

As hinted at by Brian Ripley, the following code will approximate that. It
gets the nonparametric confidence interval for the median and solves for the
variance that would give the same confidence interval width if normality of
the median held.

g <- function(y) {
  y <- sort(y[!is.na(y)])
  n <- length(y)
  if(n < 4) return(c(median=median(y), q1=NA, q3=NA, variance=NA))
  qu <- quantile(y, c(.5,.25,.75))
  names(qu) <- NULL
  r <- pmin(qbinom(c(.025,.975), n, .5) + 1, n)  ## Exact 0.95 C.L.
  w <- y[r[2]] - y[r[1]]                         ## Width of C.L.
  var.med <- ((w/1.96)^2)/4                      ## Approximate variance of median
  c(median=qu[1], q1=qu[2], q3=qu[3], variance=var.med)
}

Run g separately by group, add the two variances, and take the square root to
approximate the standard error of the difference in medians and get a
confidence interval.

> Frank E Harrell Jr <f.harrell at vanderbilt.edu>
> Sent by: r-help-bounces at stat.math.ethz.ch
> 04/18/2007 05:02 AM
>
> To: Thomas Lumley <tlumley at u.washington.edu>
> cc: r-help at stat.math.ethz.ch
> Subject: Re: [R] help comparing two median with R

> Thomas Lumley wrote:
>> On Tue, 17 Apr 2007, Frank E Harrell Jr wrote:
>>> The points that Thomas and Brian have made are certainly correct, if
>>> one is truly interested in testing for differences in medians or
>>> means. But the Wilcoxon test provides a valid test of x > y more
>>> generally. The test is consonant with the Hodges-Lehmann estimator:
>>> the median of all possible differences between an X and a Y.
>>
>> Yes, but there is no ordering of distributions (taken one at a time)
>> that agrees with the Wilcoxon two-sample test, only orderings of pairs
>> of distributions.
>>
>> The Wilcoxon test provides a test of x>y if it is known a priori that
>> the two distributions are stochastically ordered, but not under weaker
>> assumptions. Otherwise you can get x>y>z>x. This is in contrast to the
>> t-test, which orders distributions (by their mean) whether or not they
>> are stochastically ordered.
>>
>> Now, it is not unreasonable to say that the problems are unlikely to
>> occur very often and aren't worth worrying too much about. It does imply
>> that it cannot possibly be true that there is any summary of a single
>> distribution that the Wilcoxon test tests for (and the same is true for
>> other two-sample rank tests, eg the logrank test).
>>
>> I know Frank knows this, because I gave a talk on it at Vanderbilt, but
>> most people don't know it. (I thought for a long time that the Wilcoxon
>> rank-sum test was a test for the median pairwise mean, which is actually
>> the R-estimator corresponding to the *one*-sample Wilcoxon test).
>>
>> -thomas

> Thanks for your note Thomas. I do feel that the problems you have
> rightly listed occur infrequently and that often I only care about two
> groups. Rank tests generally are good at relatives, not absolutes. We
> have an efficient test (Wilcoxon) for relative shift but for estimating
> an absolute one-sample quantity (e.g., median) the nonparametric
> estimator is not very efficient. Ironically there is an exact
> nonparametric confidence interval for the median (unrelated to Wilcoxon)
> but none exists for the mean.
> Cheers,
> Frank
> --
> Frank E Harrell Jr   Professor and Chair   School of Medicine
>                      Department of Biostatistics   Vanderbilt University
>
> ______________________________________________
> R-help at stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

Frank E Harrell Jr   Professor and Chair   School of Medicine
                     Department of Biostatistics   Vanderbilt University

More information about the R-help mailing list
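An aside for later readers of this thread: the percentile-bootstrap procedure Cody describes at the top — resample each group, take the difference of medians, repeat, and read off percentiles — can be sketched in a few lines. The snippet below is in Python rather than R, purely for illustration, and the two samples are made up:

```python
import random
import statistics

def bootstrap_median_diff_ci(x, y, reps=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for median(x) - median(y)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(reps):
        bx = [rng.choice(x) for _ in x]   # resample x with replacement
        by = [rng.choice(y) for _ in y]   # resample y with replacement
        diffs.append(statistics.median(bx) - statistics.median(by))
    diffs.sort()
    lo = diffs[int(reps * alpha / 2)]         # 2.5th percentile
    hi = diffs[int(reps * (1 - alpha / 2)) - 1]  # 97.5th percentile
    return lo, hi

# Two invented samples whose medians clearly differ:
x = [5.1, 6.2, 5.8, 6.0, 5.5, 6.4, 5.9, 6.1]
y = [1.0, 1.4, 0.9, 1.2, 1.1, 1.3, 0.8, 1.5]
lo, hi = bootstrap_median_diff_ci(x, y)
```

If the resulting interval excludes zero, the medians differ in Cody's "crude" sense; the thread's caveats about what rank-based and median-based comparisons actually test still apply.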
{"url":"https://stat.ethz.ch/pipermail/r-help/2007-April/129910.html","timestamp":"2014-04-20T04:14:40Z","content_type":null,"content_length":"10229","record_id":"<urn:uuid:2b6eb029-eb98-41ac-aae9-9e507c401d63>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating the sum of numbers in a user defined range.

I have the error of "not a statement" for the line in which I applied the formula for summation. The formula given is (n-m+1)(n+m)/2, where m is the starting number and n is the last number.

For multiplication, you need to use the * character.
So, for a variable x, multiplying it by 2 would be 2 * x, not 2x.

James Boswell wrote: For multiplication, you need to use the * character.
So, for a variable x, multiplying it by 2 would be 2 * x, not 2x.

Oh gosh, I totally missed the * after trying to debug my code for quite some time.. Thanks a million.
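As a footnote to the thread, the closed form (n-m+1)(n+m)/2 is easy to sanity-check against a brute-force loop (sketched in Python here, although the forum question was about Java):

```python
def range_sum(m, n):
    """Sum of the integers m, m+1, ..., n via the closed form (n-m+1)(n+m)/2.

    Note the explicit * characters -- writing the product without them is
    exactly the "not a statement" error from the thread.
    """
    return (n - m + 1) * (n + m) // 2

# Check the formula against a brute-force sum for a few ranges:
for m, n in [(1, 10), (5, 5), (3, 100)]:
    assert range_sum(m, n) == sum(range(m, n + 1))
```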
{"url":"http://www.coderanch.com/t/595718/java/java/Calculating-sum-numbers-user-defined","timestamp":"2014-04-20T14:03:09Z","content_type":null,"content_length":"23552","record_id":"<urn:uuid:e580100f-940e-4b1d-a86f-0629824d7985>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
February 2003

We will build a sequence of cards from our collection. We will maintain a "position" S[t] (after t steps), namely a vector of N integers from {0,+1}. Initially S[0]=[0,0,...,0].

At each step, use the "position" to select the next card in the sequence, as follows. Convert the "position" to a "tag" by changing each +1 to -1 and changing each 0 to +1. So a position of [0,1,1,0] determines a tag of [+1,-1,-1,+1]. For our next card in the sequence, select the unique card with this tag. (Our first selected card will have tag [+1,+1,...,+1], obtained from our initial position of [0,0,...,0].)

When we add the "value" of the chosen card to the "position", we obtain a new "position" whose entries are still in {0,+1}. To see this: if the i^th entry in the old position was 0, then that entry in the tag of the chosen card will have +1, and its value will be either +1 or 0; adding that to the entry in the old position, the entry in the new position will be +1 or 0. Similarly, if the i^th entry in the old position was +1, then the tag of the chosen card will have -1, and its value will be either -1 or 0; adding that to the entry in the old position, the entry in the new position will be 0 or +1.

Let us continue selecting more such cards through the same procedure: at each step, the position is always in {0,1}^N, so it can always be mapped into a card. We add the resultant card to the sequence and continue. If we reach all-zeros, we're done. If we go through all 2^N sets and don't reach zero, we must have reached the same position twice (there are only 2^N possible positions, so, barring all-zeros, we have more sets than positions). Suppose we reached the same position both in step X and in step Y. Then the difference set {X+1,X+2,...,Y-1,Y} is the desired set, since 0 = S[Y] - S[X] = V[X+1] + V[X+2] + ... + V[Y-1] + V[Y].
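To make the walk above concrete, here is a hypothetical rendering in Python. It assumes the cards are supplied as a mapping from each tag (a tuple over {+1, -1}) to that card's value vector, where each entry of the value is either 0 or equal to the corresponding tag entry, exactly as described above; the card data below are randomly generated just for illustration:

```python
import random

def find_zero_sum_cards(cards):
    """Walk positions in {0,1}^N until one repeats; the cards chosen
    between the two visits then sum to the zero vector, as argued above."""
    n = len(next(iter(cards)))
    pos = (0,) * n
    seen = {pos: 0}            # position -> step at which it was first reached
    chosen = []                # chosen[t] is the card value selected at step t+1
    while True:
        tag = tuple(-1 if s == 1 else +1 for s in pos)  # 0 -> +1, +1 -> -1
        value = cards[tag]
        chosen.append(value)
        pos = tuple(s + v for s, v in zip(pos, value))  # stays in {0,1}^N
        if pos in seen:
            return chosen[seen[pos]:]  # cards strictly between the two visits
        seen[pos] = len(chosen)

# Build a random instance for N = 3: one card per tag, each value entry
# being either 0 or the corresponding tag entry.
rng = random.Random(1)
N = 3
cards = {}
for i in range(2 ** N):
    tag = tuple(-1 if (i >> j) & 1 else +1 for j in range(N))
    cards[tag] = tuple(rng.choice((0, t)) for t in tag)

subset = find_zero_sum_cards(cards)
totals = [sum(v[j] for v in subset) for j in range(N)]  # componentwise sum
```

By the pigeonhole argument in the solution, the loop terminates within 2^N + 1 steps and the returned subset's componentwise sum is zero.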
We can, therefore, stop the process as soon as we hit the same position twice, and we don't need to finish building all 2^N.

What if we're not able to put the right card into the set, because we've already used it before? Since the mapping from positions to cards is one-to-one, the only way for us to need the same card twice is if we've reached the same position twice. However, as we've already seen, if that happens we merely need to take the subtraction set in order to find the solution.

If you have any problems you think we might enjoy, please send them in. All replies should be sent to: webmster@us.ibm.com
{"url":"http://domino.research.ibm.com/Comm/wwwr_ponder.nsf/Solutions/February2003.html","timestamp":"2014-04-20T13:45:43Z","content_type":null,"content_length":"13631","record_id":"<urn:uuid:6987cf0d-c5ac-4495-9dda-81263e756e4a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
Scatterplots and Regressions (page 2 of 4)

You may be asked about "correlation". Correlation can be used in at least two different ways: to refer to how well an equation matches the scatterplot, or to refer to the way in which the dots line up. If you're asked about "positive" or "negative" correlation, they're using the second definition, and they're asking if the dots line up with a positive or a negative slope, respectively. If you can't plausibly put a line through the dots, if the dots are just an amorphous cloud of specks, then there is probably no correlation.

• Tell whether the data graphed in the following scatterplots appear to have positive, negative, or no correlation.

(Four scatterplots: Plot A, Plot B, Plot C, Plot D)

Plot A: Low x-values correspond to high y-values, and high x-values correspond to low y-values. If I put a line through the dots, it would have a negative slope. This scatterplot shows a negative correlation.

Plot B: Low x-values correspond to low y-values, and high x-values correspond to high y-values. If I put a line through the dots, it would have a positive slope. This scatterplot shows a positive correlation.

Plot C: There doesn't seem to be any trend to the dots; they're just all over the place. This scatterplot shows no correlation.

Copyright © Elizabeth Stapel 2005-2011 All Rights Reserved

Plot D: I might think that this plot shows a correlation, because I can clearly put a line through the dots. But the line would be horizontal, thus having a slope value of zero. These dots actually show that whatever is being measured on the x-axis has no bearing on whatever is being measured on the y-axis, because the value of x has no effect on the value of y. So even though I could draw a line through these points, this scatterplot still shows no correlation.

You may also be asked about "outliers", which are the dots that don't seem to fit with the rest of the dots.
(There are more technical definitions of "outliers", but they will have to wait until you take statistics classes.) Maybe you dropped the crucible in chem lab, or maybe you should never have left your idiot lab partner alone with the Bunsen burner in the middle of the experiment. Whatever the cause, having outliers means you have points that don't line up with everything else.

• Identify any points that appear to be outliers.

Most of the points seem to line up in a fairly straight line, but the dot at (6, 7) is way off to the side from the general trend-line of the points.

The outlier is the point at (6, 7).

Usually you'll be working with scatterplots where the dots line up in some sort of vaguely straight row. But you shouldn't expect everything to line up nice and neat, especially in "real life" (like, for instance, in a physics lab). And sometimes you'll need to pick a different sort of equation as a model, because the dots line up, but not in a straight line.

• Tell which sort of equation you think would best model the data in the following scatterplots, and why.

Graph A: The dots look like they line up fairly straight, so a linear model would probably work well.

Graph B: The dots here do line up, but as more of a curvy line. A quadratic model might work better.

Graph C: The dots are very close to the x-axis, and then they shoot up, so an exponential or power-function model might work better here.
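Although this lesson is about judging plots by eye, the same calls can be double-checked numerically with the correlation coefficient. A small sketch (the sample points are invented to mimic the kinds of plots described above):

```python
import math

def pearson_r(points):
    """Pearson correlation coefficient of a list of (x, y) points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points)
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    return sxy / math.sqrt(sxx * syy)

rising  = [(1, 2), (2, 3), (3, 5), (4, 6), (5, 8)]  # like Plot B: positive
falling = [(1, 8), (2, 6), (3, 5), (4, 3), (5, 2)]  # like Plot A: negative
flat    = [(1, 4), (2, 4), (3, 4), (4, 4), (5, 4)]  # like Plot D: y never changes

r_pos = pearson_r(rising)    # close to +1
r_neg = pearson_r(falling)   # close to -1
# For the flat data, syy is zero, so r is undefined (division by zero) --
# which matches the lesson's point that x has no effect on y there.
```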
Available from http://www.purplemath.com/modules/scattreg2.htm. Accessed
{"url":"http://www.purplemath.com/modules/scattreg2.htm","timestamp":"2014-04-17T21:30:31Z","content_type":null,"content_length":"29589","record_id":"<urn:uuid:3aa6c937-ab34-4886-b3e7-dd7f5ab7d825>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
When quoting this document, please refer to the following URN: urn:nbn:de:0030-drops-25043
URL: http://drops.dagstuhl.de/opus/volltexte/2010/2504/

Dell, Holger ; van Melkebeek, Dieter

Satisfiability Allows No Nontrivial Sparsification Unless The Polynomial-Time Hierarchy Collapses

Consider the following two-player communication process to decide a language $L$: The first player holds the entire input $x$ but is polynomially bounded; the second player is computationally unbounded but does not know any part of $x$; their goal is to cooperatively decide whether $x$ belongs to $L$ at small cost, where the cost measure is the number of bits of communication from the first player to the second player. For any integer $d \geq 3$ and positive real $\epsilon$ we show that if satisfiability for $n$-variable $d$-CNF formulas has a protocol of cost $O(n^{d-\epsilon})$ then coNP is in NP/poly, which implies that the polynomial-time hierarchy collapses to its third level. The result even holds when the first player is conondeterministic, and is tight as there exists a trivial protocol for $\epsilon = 0$. Under the hypothesis that coNP is not in NP/poly, our result implies tight lower bounds for parameters of interest in several areas, namely sparsification, kernelization in parameterized complexity, lossy compression, and probabilistically checkable proofs. By reduction, similar results hold for other NP-complete problems. For the vertex cover problem on $n$-vertex $d$-uniform hypergraphs, the above statement holds for any integer $d \geq 2$. The case $d=2$ implies that no NP-hard vertex deletion problem based on a graph property that is inherited by subgraphs can have kernels consisting of $O(k^{2-\epsilon})$ edges unless coNP is in NP/poly, where $k$ denotes the size of the deletion set. Kernels consisting of $O(k^2)$ edges are known for several problems in the class, including vertex cover, feedback vertex set, and bounded-degree deletion.
BibTeX - Entry

  author    = {Holger Dell and Dieter van Melkebeek},
  title     = {Satisfiability Allows No Nontrivial Sparsification Unless The Polynomial-Time Hierarchy Collapses},
  booktitle = {Parameterized complexity and approximation algorithms},
  year      = {2010},
  editor    = {Erik D. Demaine and MohammadTaghi Hajiaghayi and D{\'a}niel Marx},
  number    = {09511},
  series    = {Dagstuhl Seminar Proceedings},
  ISSN      = {1862-4405},
  publisher = {Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Germany},
  address   = {Dagstuhl, Germany},
  URL       = {http://drops.dagstuhl.de/opus/volltexte/2010/2504},
  annote    = {Keywords: Sparsification, Kernelization, Parameterized Complexity, Probabilistically Checkable Proofs, Satisfiability, Vertex Cover}
}

Keywords: Sparsification, Kernelization, Parameterized Complexity, Probabilistically Checkable Proofs, Satisfiability, Vertex Cover
Seminar: 09511 - Parameterized complexity and approximation algorithms
Issue Date: 2010
Date of publication: 11.03.2010
{"url":"http://drops.dagstuhl.de/opus/frontdoor.php?source_opus=2504","timestamp":"2014-04-19T11:59:20Z","content_type":null,"content_length":"6096","record_id":"<urn:uuid:fd388c4d-926d-43c7-a912-0c43f4fdc346>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00310-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

how many different strings can be made from the word PEPPERCORN when all letters are used and such strings do not contain the substring CON?

im interested in seeing how this one is done!

No of different strings that can be made from the word PEPPERCORN are \[10!/(3!*2!)\] as there are 10 letters with one letter being repeated thrice and one letter being repeated twice. No of strings with substring CON are \[8!/(3!*2!)\] as there are 8 letters (consider the whole of CON as one letter or unit) and P repeated thrice and E repeated twice. Hence no of different strings without the substring CON are \[10!/(3!*2!) - 8!/(3!*2!)\]

where did you get the (3!*2!)?

okay i get it :) thanks... 3! and 2! is from the letters which are repeated, right

yup...10! is assuming all the letters are different, but our word has three P's which when interchanged do not change the arrangement but have been included in the 10! as different arrangements...now the no of times each unique arrangement is repeated is equal to the no of times the 3 P's have been interchanged or permuted among themselves. That is 3!, hence divide by 3!. Likewise 2! for the E's.

and why must it be 8!, i know 8 is from the prob of CON in a string with length 10, and we subtract it..hm..why?
Imagine gluing all the letters of CON togetehr as single unit. Now we have the letters "P, E, P, P, E, R, CON" we should not count CON as three letters but one as we need permutations where CON is clubbed hence 8! CON has to exist as substring which implies u cant treat C, O, N as individual letters any more. There are fixed with respect to each other only the clubbed substring can be shifted here and there with other letters Best Response You've already chosen the best response. yeah, i found 2 letters which are repeated twice, R and E, then 1 letter which is repeated thrice, it's P so 8!/(3!2!2!) or 8!/(3!2!) ? Best Response You've already chosen the best response. yup I have overlooked R's solution is \[10!(3!*2!*2!) - 8!/(3!*2!*2!)\] Best Response You've already chosen the best response. okay thanks :) Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50d90498e4b0d6c1d54268ee","timestamp":"2014-04-18T16:04:06Z","content_type":null,"content_length":"52598","record_id":"<urn:uuid:60401295-b677-4c7f-98b7-2e1f6cc9ee43>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
SQL Aggregate Functions SQL Aggregate functions return a single value, using values in a table column. In this chapter we are going to introduce a new table called Sales, which will have the following columns and data: OrderID OrderDate OrderPrice OrderQuantity CustomerName 1 12/22/2005 160 2 Smith 2 08/10/2005 190 2 Johnson 3 07/13/2005 500 5 Baldwin 4 07/15/2005 420 2 Smith 5 12/22/2005 1000 4 Wood 6 10/2/2005 820 4 Smith 7 11/03/2005 2000 2 Baldwin The SQL COUNT function returns the number of rows in a table satisfying the criteria specified in the WHERE clause. If we want to count how many orders has made a customer with CustomerName of Smith, we will use the following SQL COUNT expression: SELECT COUNT (*) FROM Sales WHERE CustomerName = 'Smith' Let’s examine the SQL statement above. The COUNT keyword is followed by brackets surrounding the * character. You can replace the * with any of the table’s columns, and your statement will return the same result as long as the WHERE condition is the same. The result of the above SQL statement will be the number 3, because the customer Smith has made 3 orders in total. If you don’t specify a WHERE clause when using COUNT, your statement will simply return the total number of rows in the table, which in our case is 7: SELECT COUNT(*) FROM Sales How can we get the number of unique customers that have ordered from our store? We need to use the DISTINCT keyword along with the COUNT function to accomplish that: SELECT COUNT (DISTINCT CustomerName) FROM Sales The SQL SUM function is used to select the sum of values from numeric column. Using the Sales table, we can get the sum of all orders with the following SQL SUM statement: SELECT SUM(OrderPrice) FROM Sales As with the COUNT function we put the table column that we want to sum, within brackets after the SUM keyword. The result of the above SQL statement is the number 4990. 
If we want to know how many items have we sold in total (the sum of OrderQuantity), we need to use this SQL statement: SELECT SUM(OrderQuantity) FROM Sales The SQL AVG function retrieves the average value for a numeric column. If we need the average number of items per order, we can retrieve it like this: SELECT AVG(OrderQuantity) FROM Sales Of course you can use AVG function with the WHERE clause, thus restricting the data you operate on: SELECT AVG(OrderQuantity) FROM Sales WHERE OrderPrice > 200 The above SQL expression will return the average OrderQuantity for all orders with OrderPrice greater than 200, which is 17/5. The SQL MIN function selects the smallest number from a numeric column. In order to find out what was the minimum price paid for any of the orders in the Sales table, we use the following SQL SELECT MIN(OrderPrice) FROM Sales The SQL MAX function retrieves the maximum numeric value from a numeric column. The MAX SQL statement below returns the highest OrderPrice from the Sales table: SELECT MAX(OrderPrice) FROM Sales
{"url":"http://www.sql-tutorial.com/sql-aggregate-functions-sql-tutorial/","timestamp":"2014-04-21T09:13:19Z","content_type":null,"content_length":"9362","record_id":"<urn:uuid:5148523b-affc-44c6-bc68-bae17e6e5571>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
Philosophical Logic and 1. Irvine, Andrew D. 1996. "Philosophy of Logic." In Routledge History of Philosophy. Volume Ix: Philosophy of Science, Logic and Mathematics in the Twentieth Century, edited by Kearney, Richard, 9-49. New York: Routledge. 2. Putnam, Hilary. 1971. Philosophy of Logic. New York: Harper & Row. 3. Quine, Willard van Orman. 1970. Philosophy of Logic. Harvard: Harvard University Press. 4. Fisher, Jennifer. 2008. On the Philosophy of Logic. Belmont: Thomson Wadsworth. 5. Haack, Susan. 1978. Philosophy of Logics. Cambridge: Cambridge University Press. Contents: Preface XI, Notation and abbreviations XV; 1. 'Philosophy of logics' 1; 2. Validity 11; 3. Sentence connectives 28; 4. Quantifiers 39; 5. Singular terms; 56; 6. Sentences, statements, propositions 74; 7. Theories of truth 86; 8. Paradoxes 135; 9. Logic and logics152; 10. Modal logic 170; 11. Many-valued logic 204; 12. Some metaphysical and epistemological questions about logic 221; Glossary 243; Advice on reading 253, Bibliography 255; Index 267. "My concern, in this book, is with the philosophy, rather than the history, of logic. But my strategy has been devised with an eye to the history of the interplay of formal and philosophical issues which I have just sketched. 
I begin with a consideration of some problems raised by the standard logical apparatus - the interpretation of sentence connectives, sentence letters, quantifiers, variables, individual constants, the concepts of validity, truth, logical truth; I turn, from chapter 9 onwards, to a consideration of the way some of these problems motivate formal innovations, 'extended' and 'deviant' logics, and to the ways in which these new formalisms lead, in turn, to a reevaluation of the philosophical issues; and I conclude, in the final chapter, with some questions - and rather fewer answers - about the metaphysical and epistemological status of logic, the relations between formal and natural languages, and the relevance of logic to And two recurring themes of the book also reflect this historical perspective. What seem to me to be the vital philosophical issues in logic are focussed by consideration (i) of the plurality of logical systems and (ii) of the ways in which formal calculi bear on the assessment of informal argument. More specifically, I shall be urging that, in view of the existence of alternative logics, prudence demands a reasonably radical stance on the question of the epistemological status of logic, and that the interpretation of formal results is a delicate task in which judicious attention to the purposes of formalisation is highly desirable. I have tried to produce a book which will be useful as an introduction to the philosophical problems which logic raises, which will be intelligible to students with a grasp of elementary formal logic and some acquaintance with philosophical issues, but no previous knowledge of the philosophy of logic. But I haven't offered simple answers, or even simple questions; for the interesting issues in philosophy of logic are complex and difficult. I have tried instead to begin at the beginning, to explain technicalities, and to illustrate highly general problems with specific case studies. 
To this end I have supplied, for those new to the subject, a glossary of possibly unfamiliar terms used in the text, and some advice on finding one's way about the literature; while, for those anxious to go further, I have included a generous (but I hope not intimidating) bibliography." (from the Preface). 6. Grayling, Anthony. 1997. An Introduction to Philosophical Logic. Oxford: Blackwell. Third revised edition (First edition 1982; second edition 1990). Contents: Preface V, 1. Philosophical logic, the philosophy of logic, philosophy and logic 1; 2. The proposition 12; 3. Necessity, analiticity, and the a priori 33; 4. Existence, presuppositions and descriptions 88; 5. Truth: the pragmatic, coherence and correspondence theories 122; 6. Truth: semantics, deflation, indefinability and evaluation 147; 7. Meaning, reference, verification and use 188; 8. Truth, meaning, realism and anti-realism 234; 9. Realism, anti-realism, idealism, relativism 285; Bibliography 324; Index 336. "The topics to be discussed are: the proposition, analyticity, necessity, existence, identity, truth, meaning and reference. These, at least, are the topics mentioned in chapter headings. In fact the list is more extensive, for in the course of these chapters there are also discussions of possible worlds, realisms of related sorts, anti-realism, and other questions. It is not possible to give an overview of philosophical logic without ranging widely in this way, but it will be clear that because each topic invites, and indeed commands, whole volumes to itself, the discussions I give do not pretend to be more than prefaces to the detailed treatments found in the original literature. These topics are collected under the unifying label 'philosophical logic' for three principal reasons. It marks their interrelatedness, for a good understanding of any of them requires an understanding of the others. It marks their central importance in all serious philosophical discussion. 
And it reflects the influence of developments in logic since the late nineteenth century, which have afforded an access of power in dealing with many philosophical problems afresh, not only because we have become technically better equipped for the task, but also because developments in logical machinery have promoted and facilitated a certain methodological style which has proved extraordinarily fruitful in philosophy. That methodological style is analysis. The invention of symbolic calculi would not have impelled philosophical developments by itself had it not been for the fact, quickly spotted by Frege and Russell, that they immediately prompt a range of philosophical questions, centrally among them questions about the nature of meaning and truth - which is in short to say, language; and language vitally interests philosophers because it provides our route to a philosophical understanding of thought and the world. The greatest single impetus to current preoccupations with philosophical logic comes indeed from interest in language, to understand which we need progress in this area. (pp. 1-2). 7. Hausman, Alan, Kahane, Howard, and Tidman, Paul. 2009. Logic and Philosophy. A Modern Introduction. Boston: Wadsworth. Eleventh edition (First edition 1969). 8. Engel, Pascal. 1992. The Norm of Truth. An Introduction to the Philosophy of Logic. Toronto: Toronto University Press. Contents: Acknowledgements VIII; List of logical symbols XII; Introduction 1; Part 1. Elementary structures 13 1. Propositions 15; 2. The meaning of propositional connectives 35; 3. Subject and predicate 56; 4. Varieties of quantification 68; Part 2. Truth and meaning 93 5. Theories of truth 95; 6. Truth, meaning and realism 118; Part 3. Limits of extensionality 143 7. Modalities, possibles and essences 145; 8. Reference and propositional attitudes 161; 9. Identity 183; 10. Vagueness 199; Part 4. The domain of logic 217 11. The province of logic 219; 12. Logical necessity 254; 13. 
Logic and rationality 291; Conclusion 321; Notes 324; Bibliography 356; Glossary-Index 371; Name Index 379. "This book is an introduction to the philosophy of logic. But 'philosophy of logic' is an umbrella term which covers a variety of different questions and styles of enquiry. I do not think that there is a single, well established, conception of the subject, and the one offered in this book does not pretend to represent them all. Although I shall not attempt to give a precise definition, it will be useful to indicate where my own treatment and choice of topics differs from other approaches. By 'logic' I shall mean, in the usual sense, the theory of inferences that are valid in virtue of their form. It is in general admitted that this definition applies only to deductive logic, and that the theory of inductive inferences does not belong to 'formal logic' in the ordinary sense. (...) Our present use of the term 'philosophical logic' is mostly post-Fregean and post-Russellian. Frege called 'logic' not only his own formal system, but also his reflections about the nature of his formalism and about meaning and truth in general. Although Frege himself does not use the term 'philosophical logic', it is clear that these reflections are close to our contemporary understanding of that term. His insistence on the fact that 'logic' in the wide sense is concerned with language in general and should be kept separate from both psychology and the theory of knowledge justifies Dummett's claim that Frege's inquiries belong also to the philosophy of language and that this discipline holds for him the position of a primary philosophy. Russell proposed explicitly the term philosophical logic for a general enquiry into the nature of 'logical forms'. By this he did not mean only a study of the structure of logical languages, but also of the logical structures of natural languages, which would have both epistemological and ontological consequences.' 
Our present conceptions of philosophical logic bear strongly their Fregean and Russellian heritages. Philosophical logic is taken to be continuous with the philosophy of language, and to use logic as a tool for the analysis of thought. But there are two main versions of what philosophical logic is, which differ in the respective weight or authority that is granted to logical analysis. One of them assigns precise limits to this authority, and can be called informal philosophical logic, whereas the other aims at contorting and extending this authority, and can be called formal philosophical logic." (from the Introduction). 9. Wolfram, Sybil. 1989. Philosophical Logic. An Introduction. London: Routledge. Contents: Preface XIII; 1. Introduction 1. 2. Reference and truth 26; 3. Necessary truth and the analytic-synthetic distinction 80; 4. Aspects of truth 129; 5. Negation 162; 6. Existence and identity 191; 7. Aspects of meaning 229; Appendix: Examination questions 252; Bibliography of works referred to 263; Glossary 270; Index 278. "Logic may be said to be the study of correct and incorrect reasoning. This includes the study of what makes arguments consistent or inconsistent, valid or invalid, sound or unsound (on these terms see 1.2.1). It has two branches, known as formal (or symbolic) logic and philosophical logic. One of the branches of logic, formal logic, codifies arguments and supplies tests of consistency and validity, starting from axioms, that is, from definitions and rules for assessing the consistency and validity of arguments.' At the present time there are two main systems of formal logic, usually known as the propositional calculus and the predicate calculus. The propositional calculus concerns relations of what it terms 'propositions' to each other. 
The predicate calculus codifies inferences which may be drawn on account of certain features of the content of propositions. The other branch of logic, philosophical logic, which is my concern here, is very much more difficult to delimit and define. It can be said to study arguments, meaning, truth. Its subject matter is closely related to that of formal logic but its objects are different. Rather than setting out to codify valid arguments and to supply axioms and notations allowing the assessment of increasingly complex arguments, it examines the bricks and mortar from which such systems are built. Although it aims, among other things, to illuminate or sometimes question the formalization of arguments into systems with axioms which have been effected, it is not restricted to a study of arguments which formal logic has codified." (pp. 1-2). 10. Lambert, Karel, and Fraassen, Bas C. van. 1972. Derivation and Counterexample. An Introduction to Philosophical Logic. Encino: Dickenson. "Since there are already many elementary logic texts in existence, and since logic is taught today at many levels, we shall explain, first, the specific purposes to which we think this text is suited, and second, how this text differs from other similar texts. In many philosophy departments today a distinction is drawn between the following topics in undergraduate logic teaching: (a) general introduction, (b) techniques of deductive logic, (c) metalogic, (d) philosophical uses of logic. In addition there are texts and courses devoted to advanced work in mathematical logic for students wishing to specialize. We conceive the present text to be usable in the teaching of (b)-(d), to students who either have had a general introduction to logic or who are allowed (and this is frequent enough) to begin symbolic logic without such an introduction.
Topics that we would normally expect to have been covered on the introductory level include the nature of arguments and validity, the use/mention distinction, the nature of definition, and perhaps the use of Venn diagrams and truth-tables. A good example of a book designed especially for this general introductory level is Wesley Salmon's Logic (Prentice-Hall, 1963). After the introductory level, the instructor generally has a choice (or the student is offered a choice) whether to emphasize the philosophical side or the mathematical side of logic. Here our text is designed specifically for those whose interest is in philosophical aspects and uses of logic. With this aim in mind, we have introduced a number of innovations into the exposition, but at the same time have made sure that the standard body of elementary symbolic logic is covered. Our main innovations, however, are in the third part, which covers the logic of singular terms. Here we extend the language of classical logic by admitting singular terms, and extend our rules so as to license inferences involving such terms. The resulting extensions of classical logic are called free logic and free description theory. We take care to discuss explicitly the philosophical basis of such notions as possible worlds, domains of discourse, existence, reference and description, utilized in the first three parts, and to compare our approach with historical precedents. This is done, to some extent, as these notions are introduced, and also to some extent in Parts Four and Five. Although there are today many good treatments of metalogic available, they are generally aimed at more advanced levels of instruction. We have aimed to make our presentation of metalogic more elementary than is usual. First of all, as soon as the student is able to use deductive techniques, he is also in a position to prove the admissibility of further deductive rules. 
By placing such admissibility proofs in Parts One and Two, a certain amount of proof theory is taught along with the deductive techniques. Part Four is devoted to semantics, that is, to a scrutiny of the adequacy of the logical system developed in the first three parts. Since the book is aimed specifically at the philosophy student, we treat only the finite cases; we believe that in this way the student will be able to master the main theoretical concepts and methods without the use of sophisticated mathematical techniques. It must be noted that here the previous parallel development of the tableau rules greatly simplifies the presentation. In Part Five, we discuss the philosophical basis of the logic of existence and description theory, with special reference to the question of extensionality. In addition, we discuss the philosophical uses of free logic in connection with set theory, intentional discourse, thought and perception, modal concepts, and the concept of truth. The term "philosophical logic" is used increasingly to designate a specific discipline (indeed, the newly created Journal of Philosophical Logic will be entirely devoted to it), and we hope that Part Five will provide a useful introduction to some of its main areas of research." (from the Preface IX-XI). 11. Burgess, John P. 2009. Philosophical Logic. Princeton: Princeton University Press. 12. Read, Stephen. 1995. Thinking About Logic. An Introduction to the Philosophy of Logic. New York: Oxford University Press. Contents: Introduction 1; 1. Truth, pure and simple: language and the world 5; 2. The power of logic: logical consequence 35; 3. To think but of an If: theories of conditionals 64; 4. The incredulous stare: possible worlds 96; 5. Plato's beard: on what there is and what there isn't 121; 6. Well, I'll be hanged! The semantic paradoxes 148; 7. Bald men forever: the sorites paradoxes 173; 8. Whose line is it anyway?
The constructivist challenge 203; Select bibliography 241; Glossary 248; Index 253. "This book is an introduction to the philosophy of logic. We often see an area of philosophy marked out as the philosophy of logic and language; and there are indeed close connections between logical themes and themes in the analysis of language. But they are also quite distinct. In the philosophy of language the focus is on meaning and reference, on what are known as the semantic connections between language and the world. In contrast, the central topic of the philosophy of logic is inference, that is, logical consequence, or what follows correctly from what. What conclusions may legitimately be inferred from what sets of premisses? One answer to this question makes play with the notion of truth-preservation: valid arguments are those in which truth is preserved, where the truth of the premisses guarantees the truth of the conclusion. Since truth itself is arguably the third member of a closely knit trio comprising meaning, reference, and truth, the connection with philosophy of language is immediately secured. (...) It is with these issues of truth and correct inference that we are to engage in this book; and central to that engagement, we will find, is paradox. Paradox is the philosophers' enchantment, their fetish. It fascinates them, as a light does a moth. But at the same time, it cannot be endured. Every force available must be brought to bear to remove it. The philosopher is the shaman, whose task is to save us and rid us of the evil demon. Paradox can arise in many places, but here we concentrate on two in particular, one set united by semantic issues, the other by a fuzziness inherent in certain concepts. In both cases the puzzle arises because natural, simple, and what seem clearly reasonable assumptions lead one very quickly to contradiction, confusion, and embarrassment. 
There is something awful and fascinating about their transparency, there is an enjoyment in surveying their variety, the rich diversity of examples. But their real philosophical value lies in the purging of the unfounded and uncritical assumptions which led to them. They demand resolution, and in their resolution we learn more about the nature of truth, the nature of consequence, and the nature of reality, than any extended survey of basic principles can give. Only when those seemingly innocent principles meet the challenge of paradox and come under a gaze tutored by realization of what will follow, do we really see the troubles that lie latent within them. We start, therefore, at the heart of philosophy of logic, with the concept of truth, examining those basic principles which seem compelling in how language measures up to the world. But I eschew a simple catalogue of positions held by the great and the good. That could be very dull, and perhaps not really instructive either. Rather, I try to weave a narrative, to show how natural conceptions arise, how they may be articulated, and how they can come unstuck. I hope that the puzzles themselves will capture the readers' imaginations, and tempt them onwards to further, more detailed reading, as indicated in the summary to each chapter. The idea is to paint a continuous picture of a network of ideas treated in their own right and in their own intimate relationships, largely divorced from historical or technical detail." pp. 1-3 (from the Introduction). 13. Sainsbury, Mark. 2001. Logical Forms. An Introduction to Philosophical Logic. Oxford: Blackwell. Second revised edition (First edition 1991). Contents: Preface to the first edition VI; Preface to the second edition VII; Introduction 1; 1. Validity 5; 2. Truth functionality 54; 3. Conditionals and probabilities 122; 4. Quantification 153; 5. Necessity 257; 6. The project of formalization 339; Glossary 392; List of symbols 403; Bibliography 406; Index 419. 
"This book is an introduction to philosophical logic. It is primarily intended for people who have some acquaintance with deductive methods in elementary formal logic, but who have yet to study associated philosophical problems. However, I do not presuppose knowledge of deductive methods, so the book could be used as a way of embarking on philosophical logic from scratch. Russell coined the phrase 'philosophical logic' to describe a programme in philosophy: that of tackling philosophical problems by formalizing problematic sentences in what appeared to Russell to be the language of logic: the formal language of Principia Mathematica. My use of the term 'philosophical logic' is close to Russell's. Most of this book is devoted to discussions of problems of formalizing English in formal logical languages. I take validity to be the central concept in logic. In the first chapter I raise the question of why logicians study this property in connection with artificial languages, which no one speaks, rather than in connection with some natural language like English. In chapters 2-5 I indicate some of the possibilities and problems for formalizing English in three artificial logical languages: that of propositional logic (chapter 2), of first order quantificational logic (chapter 4) and of modal logic (chapter 5). The final chapter takes up the purely philosophical discussion, and, using what has been learned on the way, addresses such questions as whether there was any point in those efforts at formalizing, what can be meant by the logical form of an English sentence, what is the domain of logic, and what is a logical constant. In this approach, one inevitably encounters not only questions in the philosophy of logic, but also questions in the philosophy of language, as when one considers how best to formalize English sentences containing empty names, or definite descriptions, or adverbs, or verbs of propositional attitude." (pp. 1-2). 14. 
Englebretsen, George, and Sayward, Charles. 2011. Philosophical Logic. An Introduction to Advanced Topics. New York: Continuum. Contents: List of Symbols X; 1. Introduction 1; 2. Sentential Logic 13; 3. Quantificational Logic 52; 4. Sentential Modal Logic 74; 5. Quantification and Modality 93; 6. Set Theory 103; 7. Incompleteness 130; 8. An Introduction to Term Logic 139; 9. The Elements of a Modal Term Logic 166; References 176; Rules, Axioms, and Principles 177; Glossary 184; Index 195-198. "Post-Fregean mathematical logic began with a concern for foundational issues in mathematics. However, by the 1930s philosophers had not only contributed to the building and refinement of various formal systems, but they had also begun an exploitation of them for primarily philosophical ends. While many schools of philosophy today eschew any kind of technical, logical work, an ability to use (or at least a familiarity with) the tools provided by formal logic systems is still taken as essential by most of those who consider themselves analytic philosophers. Moreover, recent years have witnessed a growing interest in formal logic among philosophers who stand on friendly terms with computer theory, cognitive psychology, game theory, linguistics, economics, law, and so on. At the same time, techniques developed in formal logic continue to shed light on both traditional and contemporary issues in epistemology, metaphysics, philosophy of mind, philosophy of science, philosophy of language, and so forth.
In what follows, students who have already learned something of classical mathematical logic are introduced to some other ways of doing formal logic: classical logic rests on the concepts of truth and falsity, whereas constructivist logic accounts for inference in terms of defense and refutation; classical logic usually makes use of a semantic theory based on models, whereas the alternative introduced here is based on the idea of truth sets; classical logic tends to interpret quantification objectually, whereas this alternative allows for a substitutional interpretation of quantifiers. As well, a radically different approach, fundamentally different from any version of mathematical logic, is also introduced. It is one that harkens back to the earliest stages in the history of formal logic but is equipped with the resources demanded of any formal logic today." (pp. 1-2).
Copyright © University of Cambridge. All rights reserved. 'Odd One Out' printed from http://nrich.maths.org/

Why do this problem?

This problem helps to develop the skill of working with large sets of numbers and getting a 'feel' for their properties. Being able to spot anomalous data quickly is very useful in industrial and research contexts, and this problem could be considered numerical detective work.

Possible approach

This activity could be used with small groups of students working on computers or with the whole class working together, perhaps with printed copies of a dataset. Give students plenty of time to study the data and discuss in small groups anything they notice. When they think they have identified an odd one out for most or all of the six processes, students could explain to one another how they think the processes work and how sure they are that they have correctly identified the odd one out. They could then test their identification of the processes with new data.

Key questions

Are there any patterns to the data? Is there anything that doesn't fit with the patterns you see? Can we ever be sure that our explanation of the processes is correct? At what point do we accept that our explanation is correct?

Possible extension

There is scope for lots of statistical calculation to justify the decisions students make for the odd ones out, by calculating the probability of those errors arising by chance, and working out how likely it is that the data was generated by processes other than those they have assumed.

Possible support

The problem Data Matching gives an opportunity to look at and compare data sets.
Math Help

Here is an easy one for someone to nail. Need to solve: ln x = -79.93

I know x = e^-76.93, but how do I get to the answer 4.93 x 10^-34? I have an "ln" button on my calc.; the shift function of this is "e^x". Please help.
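As a side note, one way to see how a value like e^-76.93 turns into scientific notation is to let a short script do the conversion. This is only an illustrative sketch: it uses the -76.93 exponent quoted mid-post, and the digits it produces depend on which of the post's exponents you start from, so they need not match the quoted 4.93 x 10^-34.

```python
import math

x = math.exp(-76.93)                # e^-76.93
print(f"x = {x:.2e}")               # x = 3.89e-34

# The same conversion by hand: the base-10 exponent is the natural-log
# exponent divided by ln(10).
exp10 = -76.93 / math.log(10)
print(f"base-10 exponent = {exp10:.2f}")   # -33.41, i.e. x = 10^0.59 * 10^-34
```

On a calculator, the equivalent keystrokes are shift + "ln" (the e^x function) applied to the exponent.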
Chapter 25

2. Which of the following is not true of capital investments?
3. Which of the following is a method of analyzing capital investment proposals that ignores present value?
4. Decisions to install new equipment, purchase other businesses, and purchase a new building are examples of
5. The expected average rate of return for a proposed investment of $600,000 in a fixed asset, with a useful life of four years, straight-line depreciation, no residual value, and an expected total net income of $216,000 for the 4 years, is:
6. An anticipated purchase of equipment for $400,000, with a useful life of 8 years and no residual value, is expected to yield the following annual net incomes and net cash flows:
7. Which of the following is a present value method of analyzing capital investment proposals?
8. Using the following partial table of present value of $1 at compound interest, determine the present value of $25,000 to be received four years hence, with earnings at the rate of 10% a year:
9. The management of Arnold Corporation is considering the purchase of a new machine costing $430,000. The company's desired rate of return is 10%. The present value factors for $1 at compound interest of 10% for 1 through 5 years are 0.909, 0.826, 0.751, 0.683, 0.621, respectively. In addition to the foregoing information, use the following data in determining the acceptability in this situation:
10. All of the following qualitative considerations may impact upon capital investments analysis except:
11. Which of the following provisions of the Internal Revenue Code can be used to reduce the amount of the income tax expense arising from capital investment projects?
12. Assume in analyzing alternative proposals that Proposal A has a useful life of five years and Proposal B has a useful life of eight years. What is one widely used method that makes the proposals comparable?
13. All of the following are factors that may complicate capital investment analysis except:
15. Which of the following factors does not have an impact on the outcome of a capital investment decision?
16. Which of the following is not true of capital investments?
17. The process by which management plans, evaluates, and controls long-term investment decisions involving fixed assets is called:
18. Decisions to install new equipment, replace old equipment, and purchase a new building are examples of
19. Which of the following are two methods of analyzing capital investment proposals that both ignore present value?
20. The expected average rate of return for a proposed investment of $44,000 in a fixed asset, using straight-line depreciation, with a useful life of 4 years, no residual value, and an expected total net income of $12,320 is:
21. An anticipated purchase of equipment for $500,000, with a useful life of 8 years and no residual value, is expected to yield the following annual net incomes and net cash flows:
22. Below is a table for the present value of $1 at compound interest.
23. Below is a table for the present value of $1 at compound interest.
24. The management of Arnold Corporation is considering the purchase of a new machine costing $420,000. The company's desired rate of return is 10%. The present value factors for $1 at compound interest of 10% for 1 through 5 years are 0.909, 0.826, 0.751, 0.683, 0.621, respectively. In addition to the foregoing information, use the following data in determining the acceptability in this situation:
25. All of the following qualitative considerations may impact upon capital investments analysis except:
26. All of the following qualitative considerations may impact upon capital investments analysis except:
28. All of the following are factors that may complicate capital investment analysis except:
29. Capital rationing uses the following measures to determine the funding of projects except:
30. In capital rationing, alternative proposals that survive initial and secondary screening are normally evaluated in terms of:
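For a question like number 5 above, the average-rate-of-return arithmetic can be sketched in a few lines. The formula used here is the standard textbook one (average annual net income divided by average investment, with average investment taken as half the cost for straight-line depreciation to a zero residual value); the 18% result is my own computation under that formula, not an answer key.

```python
cost = 600_000             # initial investment in the fixed asset
residual = 0               # no residual value
years = 4                  # useful life
total_net_income = 216_000 # expected total net income over the 4 years

avg_annual_income = total_net_income / years   # 54,000 per year
avg_investment = (cost + residual) / 2         # 300,000 average book value
rate = avg_annual_income / avg_investment
print(f"average rate of return = {rate:.0%}")  # 18%
```

The same arithmetic applied to question 20 ($44,000 cost, $12,320 total income over 4 years) would follow identically.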
College Level Math Placement (CLMP)

College Level Mathematics Placement

Students who have placed into college-level mathematics, either by exemption or by satisfactory scores on the TSI Mathematics exam, and who desire to take a course that has College Algebra, Trigonometry or Pre-Calculus as a pre-requisite (for which they do not have credit) may attempt the Accuplacer College Level Mathematics exam.

• MATH 109 "Plane Trigonometry"
• MATH 111 "Math for Business II"
• MATH 120 "Calculus I"

If you would like to complete the College Level Math Placement (CLMP), please choose one of the following options:
Addition and Subtraction of Decimals

To add or subtract decimals, simply line them up so their decimal points are in the same place, and then add or subtract as usual. Sometimes one of the numbers will have more decimal places than the other. Because adding zeros to the end of a decimal does not change its value, we can just add zeros to the end of the shorter number until the two numbers have the same number of decimal places. For example, to subtract 65.23 from 987.462:

  987.462
 - 65.230
 --------
  922.232

To add 56.999 to 193.1:

  193.100
 + 56.999
 --------
  250.099

Multiplication of Decimals

To multiply two decimals, first count the total number of digits to the right of the decimal place in each number, and add these two totals together. Then remove the decimal points and multiply the two new whole numbers together. Take this result, and count from the right the total number of places calculated in the first step. Then insert a decimal point to the left of this number. For example, to multiply 3.4 and 2.01:

• Step 1. There is 1 digit to the right of the decimal point in 3.4, and 2 digits to the right of the decimal point in 2.01. This is a total of 3.
• Step 2. Eliminate the decimal points and multiply 34 by 201. This equals 6834.
• Step 3. Count 3 places from the right and insert a decimal point. This yields 6.834.

3.4 × 2.01 = 6.834

Division of Decimals

To understand how to divide two numbers when one contains a decimal, we must first remember that adding zeros to the end of a decimal does not change the number. Therefore, we can add as many zeros as we want to either of our decimals. Second, we note that if we move the decimal point the same number of places to the right (or to the left) in both numbers, it does not change the answer. To divide two numbers, then, we first add zeros to the end of either number--these must be added to the right of the decimal point--until both numbers have the same number of digits to the right of the decimal point.
For example, to divide 31.8 by 2.65, we add a zero to 31.8 so we are dividing 31.80 by 2.65. Next, we move the decimal point to the right until both numbers are whole numbers; moving the decimal point changes the value of the numbers, but it doesn't change the ratio between the two numbers, which is what division measures. Be very careful to move the decimals the same distance for each number. In this case, we move the decimal point to the right 2 places so we are dividing 3,180 by 265. Finally, we carry out the long division. 3,180/265 = 12.
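The worked examples in this section can be cross-checked with Python's `decimal` module, which does exact base-10 arithmetic (unlike binary floats), so the textbook answers come back digit for digit:

```python
from decimal import Decimal

# Addition and subtraction: line the decimal points up.
print(Decimal("987.462") - Decimal("65.23"))   # 922.232
print(Decimal("193.1") + Decimal("56.999"))    # 250.099

# Multiplication: 1 + 2 = 3 digits end up to the right of the point.
print(Decimal("3.4") * Decimal("2.01"))        # 6.834

# Division: shifting both decimal points 2 places gives 3180 / 265.
print(Decimal("31.8") / Decimal("2.65"))       # 12
```

Note that the strings matter: `Decimal("3.4")` is exactly 3.4, whereas the float literal `3.4` is only a binary approximation.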
non-conservative changes in amino acid sequences

John Clement clement at mail.utexas.edu
Thu Oct 2 13:09:44 EST 1997

I was wondering if there are methods available for determining if there are branches on an evolutionary tree where amino acid sequences have undergone significant amounts of non-conservative change in their side-chain chemistry. I would also like to know if my own naive method is valid.

PROTDIST in PHYLIP allows distance estimation based on Felsenstein's CATEGORIES model. This model allows you to set a parameter such that changes among chemical classes of amino acids are either highly unlikely (as the value approaches zero) or are not treated differently from conservative changes (when the value = 1.0). It seems to me that if you estimated the distances for a set of sequences using both extremes of this parameter, and fit those distances to a tree, branches on which a considerable amount of non-conservative change had occurred would have a much higher length under, say, value = 0.1 than under value = 1.0, relative to other branches of the tree.

In other words, if you have three branches of a tree, a, b, and c, estimate the distances using both values, and fit them to the tree. If l(a,0.1)/l(a,1.0) is much greater than l(b,0.1)/l(b,1.0) and l(c,0.1)/l(c,1.0), then that is evidence for a greater amount of non-conservative change on a than on b and c. Or perhaps one could weight the branch lengths by the proportion of change each represents in the tree, i.e., instead of l(a,0.1), use l(a,0.1) / [l(a,0.1) + l(b,0.1) + l(c,0.1)], and so forth.

Is this a legitimate approach?

John Clement
Botany Department
University of Texas
Austin, TX 78713
clement at mail.utexas.edu

More information about the Mol-evol mailing list
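The ratio comparison described in the post can be sketched in a few lines. The branch lengths below are invented placeholders (not PROTDIST output), and 0.1/1.0 stand for the two settings of the CATEGORIES parameter; this only illustrates the comparison itself:

```python
# Hypothetical branch lengths fitted from distances estimated under two
# settings of the CATEGORIES parameter (these numbers are made up).
lengths_01 = {"a": 0.42, "b": 0.10, "c": 0.12}   # parameter = 0.1
lengths_10 = {"a": 0.15, "b": 0.09, "c": 0.11}   # parameter = 1.0

# Ratio l(x, 0.1) / l(x, 1.0): branches with much non-conservative change
# should stand out with a large ratio.
ratios = {x: lengths_01[x] / lengths_10[x] for x in lengths_01}
for branch, r in sorted(ratios.items(), key=lambda kv: -kv[1]):
    print(branch, round(r, 2))
# Here branch "a" comes out around 2.8, versus roughly 1.1 for "b" and "c",
# which under the post's reasoning would flag "a" for disproportionate
# non-conservative change.
```

The weighting variant would simply divide each l(x, 0.1) by the sum of all branch lengths at that parameter value before comparing.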
H98 module Array exports non-H98 instance Functor ((->) a)

Reported by: duncan
Owned by:
Priority: normal
Milestone: 6.10.1
Component: libraries/haskell98
Version: 6.8.2
Keywords:
Cc:
Operating System: Unknown/Multiple
Architecture: Unknown/Multiple
Type of failure:
Difficulty: Unknown
Test Case:
Blocked By:
Blocking:
Related Tickets:

Here's a subtle problem for which we have global class instances to thank. Take a look at how many modules in base and the other core libraries re-export the class instances defined in Control.Monad.Instances. A lot do. Indeed Data.Array, and thus the H98 Array module, does. That's very bad. It makes it easy to accidentally write non-portable non-H98 code.

This bit us in Cabal. We try to keep Cabal working with ghc, nhc98, hugs etc. Malcolm discovered that we were relying on the instance Functor ((->) a) but were not importing it, for nhc98 at least. Nothing in Cabal imports Control.Monad.Instances so I was wondering how we were coming to end up with it. Turns out we're getting it at least via Data.Array.

Tracking down the source of instances is quite tricky. I wonder if there is anything we can do to make it easier? I was using ghc --show-iface on all the imports to try and find it. In this case it was easier because all I had to look for was Control.Monad.Instances as an orphan module.

We should audit which modules are importing Control.Monad.Instances and see if they're essential or just convenience. The point of Control.Monad.Instances being in a separate module was that it'd not be in scope by default. That's defeated if other standard modules use it. For example, does Control.Applicative really need it? In fact Control.Applicative is probably the root offender here; it's the one that causes Data.Array to re-export the unwanted instances. Do we need a Control.Applicative.Instances perhaps, for the Applicative ((->) a) instance?
Attachments (1)
Change History (10)

Here's a test for this bug:

module ShouldCompile where

-- import all Haskell 98 modules
import Array
import Char
import Complex
import CPUTime
import Directory
import IO
import Ix
import List
import Locale
import Maybe
import Monad
import Numeric
import Random
import Ratio
import System
import Time

-- This will fail if any of the Haskell 98 modules indirectly import
-- Control.Monad.Instances
instance Functor ((->) r) where
  fmap = (.)

This can be used to show that Array is the only H98 module where this bites. The problem is that it defines Foldable and Traversable instances for the Array type, and thus needs those modules and Control.Applicative, which they import. (Another vector is Control.Monad.Fix, imported by Control.Arrow, but that doesn't seem to lead to H98 modules.) I believe that these instances are a Good Thing, and we want them everywhere, except in H98 modules. I think the fix is to have an extra version of Data.Array, without the Foldable and Traversable instances (which are orphans anyway), and have Array import that.

To clarify, it's the Functor and Monad instances in Control.Monad.Instances that are the Good Thing.

Replying to duncan:

    Tracking down the source of instances is quite tricky. I wonder if there is anything we can do to make it easier? I was using ghc --show-iface on all the imports to try and find it.

use ghci and :info?

GHCi, version 6.9.20080217: http://www.haskell.org/ghc/ :? for help
Loading package base ... linking ... done.
Prelude> :i Functor
class Functor f where
  fmap :: (a -> b) -> f a -> f b
        -- Defined in GHC.Base
instance Functor Maybe -- Defined in Data.Maybe
instance Functor [] -- Defined in GHC.Base
instance Functor IO -- Defined in GHC.IOBase
Prelude> :m +Data.Array
Prelude Data.Array> :i Functor
class Functor f where
  fmap :: (a -> b) -> f a -> f b
        -- Defined in GHC.Base
instance (Ix i) => Functor (Array i) -- Defined in GHC.Arr
instance Functor ((->) r) -- Defined in Control.Monad.Instances
instance Functor ((,) a) -- Defined in Control.Monad.Instances
instance Functor (Either a) -- Defined in Control.Monad.Instances
instance Functor Maybe -- Defined in Data.Maybe
instance Functor [] -- Defined in GHC.Base
instance Functor IO -- Defined in GHC.IOBase
Prelude Data.Array>

One problem with this is that instances aren't managed properly in ghci sessions:

Prelude Data.Array> :m -Data.Array
Prelude> :i Functor
class Functor f where
  fmap :: (a -> b) -> f a -> f b
        -- Defined in GHC.Base
instance Functor ((->) r) -- Defined in Control.Monad.Instances
instance Functor ((,) a) -- Defined in Control.Monad.Instances
instance Functor (Either a) -- Defined in Control.Monad.Instances
instance Functor Maybe -- Defined in Data.Maybe
instance Functor [] -- Defined in GHC.Base
instance Functor IO -- Defined in GHC.IOBase

I thought the latter was a known bug, but I can't seem to find the ticket..

patch: Don't import Control.Applicative just to get <$>; use fmap.

No good: Data.Foldable and Data.Traversable also import Control.Applicative.
• Difficulty set to Unknown
• Milestone set to 6.8.3
• Milestone changed from 6.8.3 to 6.10.1
• Resolution set to fixed
• Status changed from new to closed

Fixed by elimination of orphan instances in Data.Array and then the patch

Sun Aug 17 01:23:48 BST 2008  Ross Paterson <ross@soi.city.ac.uk>
  * delete imports of Data.Foldable and Data.Traversable (fixes #2176)

• Architecture changed from Unknown to Unknown/Multiple
• Operating System changed from Unknown to Unknown/Multiple
NWT Literacy Council - Adult Literacy - What's the Problem? For Adults Who Like A Challenge

Solving Word Problems - Page 11

PROBLEMS WITH WORD CLUES

Tell how to work each problem by writing A, S, M, or D:

1. How do you find how much you save if you know how much you earn and how much you spend?
2. How do you find the weight of all the boys on a skidoo if you know the weight of each boy?
3. How do you find your share of the cost of a lunch, if you know the total cost and the number of people who went to the lunch?
4. How do you find the number of dollars you can save in several weeks, if you know how many dollars you can save in one week?
5. How do you find the number of tickets you have sold if you know how many tickets you had to sell and how many tickets you have left?
6. If you know how much it costs to stay at Sandy Creek Camp for one week, how can you find how much it will cost to stay there for several weeks?
7. How do you find the cost of one ticket to the movies if you know the cost of several tickets?
8. How do you find the age of one of your friends if you know how many years older s/he is than you are?
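The clue-to-operation idea can also be written out as arithmetic. The letter assignments below are one reader's interpretation of the first few clues, not the Council's official answer key:

```python
# One possible reading of the first few problems, written as arithmetic.
# The A/S/M/D letters are this reader's interpretation, not an official key.
def savings(earned, spent):          # 1. S (subtract)
    return earned - spent

def total_weight(per_boy, boys):     # 2. M (multiply)
    return per_boy * boys

def share(total_cost, people):       # 3. D (divide)
    return total_cost / people

def saved(per_week, weeks):          # 4. M (multiply)
    return per_week * weeks

def tickets_sold(had, left):         # 5. S (subtract)
    return had - left

print(savings(500, 320))   # 180
print(share(60, 4))        # 15.0
```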
Quantum Binary Search via Adaptive Learning

We use a Bayesian approach to optimally solve problems in noisy binary search. We deal with two variants:

1. Each comparison can be erroneous with some probability 1 - p.
2. At each stage k comparisons can be performed in parallel and a noisy answer is returned.

We present a (classical) algorithm which optimally solves both variants together, up to an additive term of O(log log n), and prove matching information-theoretic lower bounds. We use the algorithm with the results of Farhi et al. (FGGS99), presenting a quantum search algorithm in an ordered list of expected complexity less than log(n)/3, and some improved quantum lower bounds on noisy search, and search with an error probability.

Joint work with Michael Ben-Or.
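The classical Bayesian idea in the abstract can be sketched in a few lines: keep a posterior over positions, query the posterior median, and reweight by the likelihood of the (possibly wrong) answer. This is a sketch of the general idea only, not the talk's exact algorithm:

```python
import random

def noisy_bsearch(n, target, p=0.8, queries=200, rng=random):
    """Bayesian noisy binary search over positions 0..n-1.

    Maintain a posterior over positions, always ask "is the target <= m?"
    at the posterior median m, and apply a Bayes update: the answer is
    correct with probability p. A sketch of the general idea, not the
    talk's exact algorithm.
    """
    post = [1.0 / n] * n
    for _ in range(queries):
        acc, m = 0.0, 0
        for i, w in enumerate(post):        # posterior median
            acc += w
            if acc >= 0.5:
                m = i
                break
        m = min(m, n - 2)                   # keep the query informative
        truth = target <= m
        answer = truth if rng.random() < p else not truth
        post = [w * (p if (i <= m) == answer else 1.0 - p)
                for i, w in enumerate(post)]
        s = sum(post)
        post = [w / s for w in post]
    return max(range(n), key=lambda i: post[i])

print(noisy_bsearch(100, 37, p=1.0, queries=50, rng=random.Random(0)))  # 37 (noiseless check)
```

With p = 1 the updates zero out inconsistent positions and the procedure reduces to ordinary binary search; with p < 1 the posterior still concentrates on the target, just more slowly, which is the tradeoff the lower bounds quantify.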
Posts by Chandra (Total # Posts: 11)

O Behavior
How successful do you think Helen Bowers's new plan will be?

We used the solution NiSO4·6H2O. I have to find the mass of solute dissolved in a 25 ml solution, and the molar concentration.

Please check for any errors and correct them. Any suggestions would be great. Cancer is known as an unbiased killer that knows no race, age or sex of its victims. The disease, lung cancer, is the number one cause of cancer deaths in the United States. Lung cancer takes mil...

I need help explaining why the answers I have are correct. I am stumped; please help me! Read the following sentences, and focus on the grammar area specified in the left column. Then, complete Appendix V by entering the correct sentence choice (a or b) for each number as well ...

How do you simplify i38?

When did Mahatma Gandhi die?

32 divides n-6 and 64 does not divide n-6. Find the highest power of 2 dividing n+2 and n+6.

How do you solve (3^x)(3x^x+1) = 9?

Math - Please help me!!!
I'm desperate for help! I need help: Emmett has a 15-year $115,000 mortgage at 7.1%. His monthly payment is $1,040.09. Find the principal after the first payment. I know to multiply 115,000 by 7.1 divided into 12 months, then subtract that answer from $1,040.09. When I multiply 115,000 by 7.1 divided in ...

Writing -- Desperately Need Help!!
This is an assignment due Wednesday. I NEED HELP!! You purchase an Aqua Delux Aquarium, Model D978, from Fish Emporium, 899 Wilon Lane, Forest River, ND 58233, for $325 in cash at a July clearance sale. A sign by the cash register says that all sales are final and no returns or exc...
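The mortgage question above is standard amortization arithmetic: compute one month's interest, subtract it from the payment, and reduce the principal by what remains.

```python
principal = 115_000.00
annual_rate = 0.071
payment = 1_040.09

interest = principal * annual_rate / 12          # first month's interest
principal_paid = payment - interest              # portion that reduces the loan
balance = principal - principal_paid

print(f"interest:       ${interest:,.2f}")        # $680.42
print(f"principal paid: ${principal_paid:,.2f}")  # $359.67
print(f"new balance:    ${balance:,.2f}")         # $114,640.33
```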
The parallel generation of combinations.

Title: The parallel generation of combinations.
Author: Elhage, Hassan.

Abstract: In this thesis we consider the problem of the generation of combinations of m elements chosen from a set of n elements. We provide a survey of the sequential and parallel algorithms for the generation of the (m,n)-combinations. The major achievement of this thesis is a parallel algorithm for generating all combinations without repetitions of m elements from an arbitrary set of n elements in lexicographic ascending order. The algorithm uses a linear systolic array of m processors, each having constant size memory (except processor m, which has O(n) memory), and each being responsible for producing one element of a given combination. There is a constant delay between successive combinations, leading to an O(C(m,n)) time solution, where C(m,n) is the total number of (m,n)-combinations. The algorithm is cost-optimal, assuming the time to output the combination is counted, and does not deal with very large integers. This algorithm is an improvement over all previous works because it generates combinations of a set of arbitrary elements $\{p_1, p_2, \ldots, p_n\}$, on which an order relation is defined such that $p_1 < p_2 < \cdots < p_n$. This property is important since it allows us to distinguish between algorithms that generate combinations of any ordered set, and algorithms that can only generate combinations of the set $\{1, 2, \ldots, n\}$. Last, we modify this algorithm to generate combinations with repetitions in lexicographic order chosen from a set of arbitrary elements.

Date: 1995
URI: http://hdl.handle.net/10393/10041
Files in this item
This item appears in the following Collection(s)
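The problem the thesis solves can be illustrated with the standard sequential next-combination step; this is only a simple sketch of the lexicographic ordering, not the thesis's systolic-array algorithm:

```python
def combinations_lex(elems, m):
    """Yield all m-combinations of an ordered list in lexicographic order,
    using the standard iterative next-combination step. A sequential sketch
    of the problem; the thesis's own algorithm runs on a systolic array of
    m processors with constant delay between combinations."""
    n = len(elems)
    idx = list(range(m))                 # indices of the current combination
    while True:
        yield [elems[i] for i in idx]
        j = m - 1                        # rightmost index that can advance
        while j >= 0 and idx[j] == n - m + j:
            j -= 1
        if j < 0:
            return
        idx[j] += 1
        for k in range(j + 1, m):        # reset the tail
            idx[k] = idx[k - 1] + 1

for combo in combinations_lex(["a", "b", "c", "d"], 2):
    print("".join(combo))
# ab, ac, ad, bc, bd, cd: six combinations, in lexicographic order
```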
Lomita SAT Math Tutor

Find a Lomita SAT Math Tutor

...I now teach at a high school in San Pedro. I teach AP physics, conceptual physics, algebra 2, and honours trigonometry/precalculus. I hope to leave the same kind of impact on my students as my high school physics and maths teachers left on me. Calculus is the study of rates-of-change.
11 Subjects: including SAT math, calculus, precalculus, trigonometry

...I can improve your math skills in just a very short time. I live in Palos Verdes and am available day, evenings, or weekends. Please let me know so that I can help your child.
11 Subjects: including SAT math, geometry, algebra 1, algebra 2

...I have also tutored in Geometry and Algebra as a private tutor. In high school I took AP Biology, AP Physics B, AP Psychology, AP Calculus AB, AP English Language and Composition, and AP Microeconomics. I graduated from Palos Verdes Peninsula High School with a 4.5 weighted GPA.
7 Subjects: including SAT math, geometry, biology, algebra 2

...This gives me the information I need to assess your situation. I'm pretty good at sensing where students' problems are. At the end of the session and periodically thereafter, I'll give you (and your parents if appropriate) an idea of how things stand and where to go next.
45 Subjects: including SAT math, English, chemistry, physics

...I have a Master of Arts in Applied Linguistics and a departmental TESOL certificate from Biola University, from which I graduated summa cum laude. I was also the University's Writing Center coordinator, working mainly with ESL/EFL students to improve their English writing skills. I am qualified and able to teach SAT Math, which I have tutored in for the past several years.
22 Subjects: including SAT math, English, ACT Reading, ACT Math
D. J. Bernstein
Authenticators and signatures

The elliptic-curve discrete-logarithm problem

An attacker, given Alice's (28-byte or 56-byte) nistp224 public key, can try to figure out Alice's secret key. The fastest known method to compute Alice's secret key takes, on average, more than 2^111 elliptic-curve additions. (The method may succeed sooner: it has approximately 1% chance of success after 10% of the time, for example.)

As of October 2001, a general-purpose CPU costing $100 can perform nearly one million elliptic-curve additions per second. One billion CPUs, costing 100 billion dollars (never mind the wiring costs), can perform 2^111 elliptic-curve additions in roughly 100 billion years. Special-purpose CPUs should be much faster. Furthermore, there are constant improvements in chip-building technology. But the cryptographic community would be astounded to see this computation actually carried out.

This computation is roughly 2^60 (a quintillion: a billion billion) times more difficult than the ECC2K-108 computation. The ECC2K-108 computation was a 109-bit characteristic-2 elliptic-curve discrete-logarithm computation. A team led by Robert Harley carried out the ECC2K-108 computation in time equivalent to a few hundred years on one CPU.

The elliptic-curve Diffie-Hellman problem

An attacker, given Alice's public key and Bob's public key, can try to figure out the secret shared between Alice and Bob. There is no method known to do this more quickly than computing one of the two secret keys.

The elliptic-curve hash Diffie-Hellman problem

The precise security property needed for a typical application of nistp224 is the following: an attacker, given Alice's public key and Bob's public key, has negligible probability of distinguishing SHA-256(s) from an independent uniform random 256-bit string. Here s is the secret shared between Alice and Bob. There is no known distinguishing algorithm faster than computing one of the two secret keys.

Are there better attacks?
There are no known proofs of security for short-key cryptographic systems. It is always possible that a faster attack will be found. In 1985, when Miller and Koblitz proposed elliptic-curve cryptography, it was already known how to compute 224-bit elliptic-curve discrete logarithms in time comparable to 2^111 elliptic-curve additions. Since then, there have been successful attacks on special classes of elliptic curves, but nothing relevant to random curves such as NIST P-224.
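The 100-billion-year figure above is easy to check with back-of-the-envelope arithmetic, using the page's own assumptions (10^9 CPUs, each performing about 10^6 elliptic-curve additions per second):

```python
additions = 2 ** 111
cpus = 10 ** 9                    # one billion $100 CPUs
rate = 10 ** 6                    # ~one million EC additions per second each
seconds_per_year = 365.25 * 24 * 3600

years = additions / (cpus * rate) / seconds_per_year
print(f"{years:.2e} years")       # ~8.2e+10, i.e. roughly 100 billion years
```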
The College Mathematics Journal - November 2012

November 2012 Contents

In the November issue, Mark Frantz gives us A Different Angle on Perspective, Timothy Jones proves the irrationality of π, David Seppala-Holtzman finds An Optimal Basketball Free Throw, and Tom Brown and Brian Pasko find the probability of Winning a Racketball Match. There are also four Proofs Without Words, two Classroom Capsules, a Student Research Project, Media Highlights, and Problems and Solutions.

A Different Angle on Perspective
Mark Frantz
When a plane figure is photographed from different viewpoints, lengths and angles appear distorted. Hence it is often assumed that lengths, angles, protractors, and compasses have no place in projective geometry. Here we describe a sense in which certain angles are preserved by projective transformations. These angles can be constructed with compass and straightedge on existing projective images, giving insights into photography and perspective drawing.

Euler's Identity, Leibniz Tables, and the Irrationality of Pi
Timothy W. Jones
Using techniques that show that e and π are transcendental, we give a short, elementary proof that π is irrational based on Euler's identity. The proof involves evaluations of a polynomial using repeated applications of Leibniz' formula as organized in a Leibniz table.

Sometimes Newton's Method Always Cycles
Joe Latulippe and Jennifer Switkes
Are there functions for which Newton's method cycles for all non-trivial initial guesses? We construct and solve a differential equation whose solution is a real-valued function that two-cycles under Newton iteration. Higher-order cycles of Newton's method iterates are explored in the complex plane using complex powers of x. We find a class of complex powers that cycle for all non-trivial initial guesses and present the results analytically and graphically.

Proof Without Words: An Alternating Series
Roger Nelsen
A visual proof that 1 - (1/2) + (1/4) - (1/8) + . . . converges to 2/3.
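The cycling behavior in the Latulippe-Switkes abstract is easy to reproduce numerically. A classic example (not necessarily the function constructed in the paper) is f(x) = |x|^(1/2), for which the Newton map is x - f(x)/f'(x) = -x, so every nonzero starting value falls into a 2-cycle:

```python
import math

def f(x):
    return math.sqrt(abs(x))

def fprime(x):
    # derivative of |x|^(1/2) for x != 0
    return math.copysign(0.5 / math.sqrt(abs(x)), x)

def newton_step(x):
    return x - f(x) / fprime(x)

x0 = 1.7
x1 = newton_step(x0)   # ≈ -1.7
x2 = newton_step(x1)   # ≈  1.7 again: a 2-cycle
print(x0, x1, x2)
```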
The numerical range of the Luoshu is a piece of cake - almost
Dietrich Trenkler and Götz Trenkler
The numerical range, easy to understand but often tedious to compute, provides useful information about a matrix. Here we describe the numerical range of a 3 x 3 magic square, in particular, one of the most famous of those squares, the Luoshu:

4 9 2
3 5 7
8 1 6

whose numerical range is a piece of cake - almost.

Proof Without Words: The Sine is Subadditive on [0, π]
Xingya Fa
A visual proof that the sine is subadditive on [0, π].

A Fifth Way to Skin a Definite Integral
Satyanand Singh
We use a novel approach to evaluate the indefinite integral of 1/(1+x^4) and use this to evaluate the improper integral of this integrand from 0 to ∞. Our method has advantages over other methods in ease of implementation and accessibility.

Better than Optimal by Taking a Limit?
David Betounes
Designing an optimal Norman window is a standard calculus exercise. How much more difficult is its generalization to deploying multiple semicircles along the head (or along head and sill, or head and jambs)? What if we use shapes besides semi-circles? As the number of copies of the shape increases and the optimal Norman windows approach a rectangle, what proportions arise? How does the perimeter of the limiting rectangle compare to the limit of the perimeters? These questions provide challenging optimization problems for students, and the graphical depiction of these window sequences illustrates the concept of limit more vividly than sequences of numbers.

Proof Without Words: Ptolemy's Theorem
William Derrick and James Hirstein
A visual proof of Ptolemy's theorem.

An Optimal Basketball Free Throw
David Seppala-Holtzman
A basketball player attempting a free throw has two parameters under his or her control: the angle of elevation and the force with which the ball is thrown.
We compute upper and lower bounds for the initial velocity for suitable values of the angle of elevation, generating a subset of the configuration space of all successful free throws. A computer-assisted search of this configuration space yields a free throw shot most forgiving of error, hence optimal.

Winning a Racketball Match
Tom Brown and Brian Pasko
We find the probability of winning a best-of-three racquetball match given the probabilities that each player wins a point while serving.

Idempotents à la Mod
Thomas Q. Sibley
An idempotent satisfies the equation x^2 = x. In ordinary arithmetic, this is so easy to solve it's boring. We delight the mathematical palette here, topping idempotents off with modular arithmetic and a series of exercises determining for which n there are more than two idempotents (mod n) and exactly how many there are.

Rational Exponentials and Continued Fractions
Jeff Denny
Using continued fraction expansions, we can approximate constants, such as π and e, with 2 raised to the power x^(1/x), x a suitable rational. We review continued fractions and give an algorithm for producing these approximations.

Geometry of Sum-Difference Numbers
Paul Yiu
We relate the factorization of an integer N in two ways as N = xy = wz with x + y = w - z to the inscribed and escribed circles of a Pythagorean triangle.

REFEREES IN 2012
GEORGE PÓLYA AWARDS FOR 2011
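The idempotent exercise in Sibley's abstract is easy to explore by brute force. The count turns out to be 2^k, where k is the number of distinct prime factors of n, a consequence of the Chinese remainder theorem:

```python
def idempotents(n):
    """All x in 0..n-1 with x^2 ≡ x (mod n)."""
    return [x for x in range(n) if (x * x) % n == x]

for n in (10, 12, 30):
    print(n, idempotents(n))
# 10 [0, 1, 5, 6]
# 12 [0, 1, 4, 9]
# 30 [0, 1, 6, 10, 15, 16, 21, 25]
```

Note 10 and 12 each have two prime factors (four idempotents), while 30 has three (eight idempotents).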
AN ERROR ESTIMATE FOR GAUSS-JACOBI QUADRATURE FORMULA WITH THE HERMITE WEIGHT $w(x)=\exp(-x^2)$

Radwan Al-Jarrah
Department of Mathematical Sciences, University of Petroleum and Minerals, Dhahran, Saudi Arabia

Abstract: The purpose of this paper is to give an estimate of the error in approximating the integral $\int\limits_{-\infty}^\infty f(x)\exp(-x^2)\,dx$ by the Gauss-Jacobi quadrature formula $Q_n(w;f)$, assuming that $f$ is an entire function satisfying a certain growth condition which depends on the Hermite weight function $w(x)=\exp(-x^2)$.

Full text of the article:
Electronic fulltext finalized on: 3 Nov 2001. This page was last modified: 16 Nov 2001.

© 2001 Mathematical Institute of the Serbian Academy of Science and Arts
© 2001 ELibM for the EMIS Electronic Edition
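For concreteness, the kind of quadrature in question can be illustrated with the classical two-point Gauss-Hermite rule (nodes ±1/√2, weights √π/2), which is exact for polynomials up to degree 3 against the weight exp(-x^2). This is a standard textbook rule, shown only as an illustration; it is not the paper's estimate:

```python
import math

# Two-point Gauss-Hermite rule for the integral of f(x) exp(-x^2) over R.
nodes = [-1 / math.sqrt(2), 1 / math.sqrt(2)]
weights = [math.sqrt(math.pi) / 2, math.sqrt(math.pi) / 2]

def quad(f):
    return sum(w * f(x) for w, x in zip(weights, nodes))

# Exact values: ∫ exp(-x^2) dx = √π and ∫ x^2 exp(-x^2) dx = √π/2.
print(quad(lambda x: 1.0), math.sqrt(math.pi))
print(quad(lambda x: x * x), math.sqrt(math.pi) / 2)
```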
Comment on Ok, for fun, and because I like mathematical proofs (I made it to the USA Mathematical Olympiad twice), here's the proof I use. In pseudo-code, the n-choose-r function looks like:

nCr (n, r) {
    res = 1;
    for i in 1 .. r {
        res *= n - (i - 1);
        res /= i;
    }
    return res;
}

The question is, why does res remain an integer? It is not difficult to show that res after res /= 1 is an integer, but how can we prove that by the time we get to res /= 7, res is a multiple of 7? The proof is that by the time we have gotten to res /= X, we have multiplied res's original value, 1, by X consecutive integers, and in every series of X consecutive integers, there will be one number divisible by X:

X = 2, series = (y, y+1); one is div. by 2
X = 3, series = (y, y+1, y+2); one is div. by 3

In nCr(17,7), n is of the series (17, 16, 15, 14, 13, 12, 11) and i is of the series (1, 2, 3, 4, 5, 6, 7):

(17/1) (16/2)
(17/1) (16/2) (15/3)
(17/1) (16/4) (15/3) (14/2)
(17/1) (16/4) (15/(3*5)) (14/2) (13/1)
(17/1) (16/4) (15/(3*5)) (14/2) (13/1) (12/6)
(17/1) (16/4) (15/(3*5)) (14/(2*7)) (13/1) (12/6) (11/1)

We're allowed to move the denominators around like I did, because we've already showed the product will already be an integer. The product looks like:

   r
  ___
  | |   n - (i-1)
  | |   ---------
  i=1       i

Perl and Regex Hacker
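The incremental scheme in the proof translates directly into code (shown in Python here); the assertion checks the divisibility that the proof establishes at every step:

```python
def nCr(n, r):
    res = 1
    for i in range(1, r + 1):
        res *= n - (i - 1)
        assert res % i == 0   # the divisibility the proof establishes
        res //= i
    return res

print(nCr(17, 7))  # 19448
```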
Find the value of v in the equation: 7/2 + v = 15/4
Subtract 5/11 from 16/33.
Tommy bought 13/20 lb of yogurt. He ate 3/10 lb of it. How much is left?
Every month, Brian spends $1/2 on food, and Jake, $5/4. What is the difference between their spendings on food?
Laura uses 16/15 yd of fabric to make an outfit. Brian uses 8/5 yd to make the same outfit. How many yards of fabric does Brian use in excess of that used by Laura?
11/5 cups of cake recipe requires 13/6 cups of cocoa and the rest flour. How much flour is required in the recipe?
Francis drives his car from Chicago to Detroit and then to Cleveland. He reached Cleveland in 51/12 hours. If he took 10/3 hours to travel from Chicago to Detroit, then how much time did he take to travel from Detroit to Cleveland?
Simplify the expression: 8/7 - 17/28
Nancy bought 15/16 of a yard of a fabric. She used only 1/4 of a yard of the fabric. How much fabric is left?
Sam walks 2/3 miles and Jake walks 6/7 miles in one hour. Find the distance between Jake and Sam at the end of one hour if both Jake and Sam start walking at the same time.
Tim was recording the growth of a plant. Its height after the first week was 1/2 in. and at the end of the second week was 3/4 in. How much did the plant grow by the end of the second week when compared to the first week?
Find the difference between the unlike terms: 11/6 and 5/3
Subtract 2/5 from 13/10.
Lydia uses 9/16 yd of fabric to make an outfit. Jake uses 5/4 yd to make the same outfit. How many yards of fabric does Jake use in excess of that used by Lydia?
Rose bought 11/12 of a yard of a fabric. She used only 1/3 of a yard of the fabric. How much fabric is left?
Find the difference between the unlike terms: 11/4 and 5/2
Simplify the expression: 3/5 - 4/25
Subtract 5/16 from 15/32.
Ethan bought 13/20 lb of yogurt. He ate 3/10 lb of it. How much is left?
Every month, Andrew spends $3/4 on food, and Brad, $11/8. What is the difference between their spendings on food?
Nancy uses 14/9 yd of fabric to make an outfit. Andrew uses 7/3 yd to make the same outfit. How many yards of fabric does Andrew use in excess of that used by Nancy?
9/4 cups of cake recipe requires 11/5 cups of cocoa and the rest flour. How much flour is required in the recipe?
Victor drives his car from Chicago to Detroit and then to Cleveland. He reached Cleveland in 67/16 hours. If he took 13/4 hours to travel from Chicago to Detroit, then how much time did he take to travel from Detroit to Cleveland?
Brian walks 2/3 miles and Gary walks 6/7 miles in one hour. Find the distance between Gary and Brian at the end of one hour if both Gary and Brian start walking at the same time.
Find the value of v in the equation: 5/2 + v = 13/4
Find: 7/6 - 1/3
Find: 11/6 - 3/2
Find: 3/4 - 1/8
1/2 - 1/6 = ?
3/5 - 4/10
Lauren bought 11/12 of a yard of a fabric. She used only 1/3 of a yard of the fabric. How much fabric is left?
Gary was recording the growth of a plant. Its height after the first week was 4/5 in. and at the end of the second week was 9/10 in. How much did the plant grow by the end of the second week when compared to the first week?
Find: 1/2 - 1/10
Simplify the expression: 6/5 - 7/10
Simplify the expression: 5/6 - 1/3
Find: 5/8 - 1/2
Find: 2/5 - 3/10
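Problems like these are convenient to check with exact rational arithmetic. For instance, the yogurt problem (13/20 - 3/10) and the recipe problem (11/5 - 13/6):

```python
from fractions import Fraction

yogurt_left = Fraction(13, 20) - Fraction(3, 10)
flour = Fraction(11, 5) - Fraction(13, 6)

print(yogurt_left)  # 7/20
print(flour)        # 1/30
```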
Physics Forums - View Single Post - Please help with calculation involve solid angle.

I'll take a guess at this. It seems like the same idea as "geometric form factors" in radiative heat transfer. The telescope is "seeing" a piece of space that you can take as a flat disk (diameter 0.116 degrees). But the surface of Mars that is radiating is not a flat disk, it is a hemisphere. The "brightness" as seen by the telescope will be maximum in the center and less at the edges. That will give you another factor of ##\sin\theta## or ##\cos\theta## in the integral for Mars. Probably the "correction factor" of ##\pi/4## is a well known result, since most astronomical objects are approximately spherical.
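One way to see where a factor like π/4 could come from (my own numerical check, not necessarily the poster's intended derivation): if the apparent brightness across the projected disk falls off as cos θ = √(1 - r²), the average along a diameter is π/4, while the area average over the whole disk is 2/3:

```python
import math

N = 100_000
r = [(k + 0.5) / N for k in range(N)]            # midpoints on [0, 1]
cos_theta = [math.sqrt(1 - x * x) for x in r]    # brightness factor at radius r

line_avg = sum(cos_theta) / N                              # average along a diameter
area_avg = sum(2 * x * c for x, c in zip(r, cos_theta)) / N  # area average (weight 2r dr)

print(line_avg, math.pi / 4)  # both ≈ 0.7854
print(area_avg, 2 / 3)        # both ≈ 0.6667
```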
Physics Tutors West Bloomfield, MI 48322
Master Certified Coach for Exam Prep, Mathematics, & Physics
Students and Parents, It is a pleasure to make your acquaintance, and I am elated that you have found interest in my profile. I am the recipient of a Master of Science and a Master of Arts in Applied Mathematics with a focus in computation and theory. I am...
Offering 10+ subjects including physics
between 2 dates
does anyone know how to search and display the total between 2 dates without using lots of code
thanks, but not sure if that is what I need. I need to be able to input 2 dates, for example 21/02/13 and 10/03/13, then look for a match in my payslip.txt and calculate the total wages between them. I think I have to try and convert the date in some way. I know how to search one date and keep a running total.
Topic archived. No new replies allowed.
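The thread is archived without an answer, but the usual approach is: parse each date into a comparable form, then sum the amounts whose date falls inside [start, end]. A sketch of that logic (written in Python for brevity; the file layout "date amount" per line and the dd/mm/yy format are my assumptions, since the thread never shows payslip.txt — the same structure carries over to C++):

```python
from datetime import datetime

def total_wages(lines, start, end, fmt="%d/%m/%y"):
    """Sum wages for entries whose date lies between start and end inclusive.
    Each line is assumed to look like '21/02/13 123.45' (date, then amount)."""
    lo = datetime.strptime(start, fmt)
    hi = datetime.strptime(end, fmt)
    total = 0.0
    for line in lines:
        date_str, amount = line.split()
        if lo <= datetime.strptime(date_str, fmt) <= hi:
            total += float(amount)   # running total, only for matches
    return total

records = ["20/02/13 100.00", "21/02/13 250.50", "05/03/13 300.00", "11/03/13 80.00"]
print(total_wages(records, "21/02/13", "10/03/13"))  # 550.5
```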
Ordered pair of function

August 7th 2012, 04:57 AM #1
Ordered pair of function

August 7th 2012, 05:03 AM #2 MHF Contributor
Re: Ordered pair of function
Are you comfortable with the definitions of function, domain and range? Can you write them from memory, and can you give counterexamples, i.e., something that is not a function, a set that is not the domain of a given function and a set that is not the range of a given function? If you are not comfortable with definitions and examples, you should go back to the textbook or lecture notes and read them again.

August 7th 2012, 05:08 AM #3
Re: Ordered pair of function
Percentage of a Number?

Re: Percentage of a Number?
You have the number and you want to reduce it by 45%. First you calculate what 45% of the number is and then you would subtract it from the number.
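The reduce-by-45% recipe above, as a one-function check (the starting value 200 is just an example of mine):

```python
def reduce_by_percent(x, pct):
    # "calculate what 45% of the number is and then subtract it"
    return x - x * pct / 100

print(reduce_by_percent(200, 45))  # 110.0
```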
constrained partial derivative identity

March 6th 2011, 03:08 AM #1
If $f(x,y,z)=0$ then
$(\frac{\partial x}{\partial y})_z (\frac{\partial y}{\partial z})_x (\frac{\partial z}{\partial x})_y = -1$
I tried expressing $\frac{\partial f}{\partial x}$ with z constant, but this didn't work. I'm lost. Help much appreciated.

March 6th 2011, 03:22 AM #2 MHF Contributor
It should! With z constant and x a function of y, we can think of f as a function of y only:
$\frac{df}{dy}= \frac{\partial f}{\partial x}\frac{\partial x}{\partial y}+ \frac{\partial f}{\partial y}= 0$
$\left(\frac{\partial x}{\partial y}\right)_z= -\frac{\frac{\partial f}{\partial y}}{\frac{\partial f}{\partial x}}$
Now do the same for the other two.

March 6th 2011, 03:31 AM #3
Why have you written $\frac{df}{dy}$? Should it not be $\frac{\partial f}{\partial y}$, as $z=g(x,y)$? This is where my confusion arises.
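The formula $(\partial x/\partial y)_z = -f_y/f_x$ and its cyclic cousins make the identity easy to check numerically. A sketch with my own example function (not from the thread): $f(x,y,z)=x^2+2y^2+3z^2-36$, which vanishes at $(1,2,3)$:

```python
# Numerical check of the triple product rule for
# f(x, y, z) = x^2 + 2y^2 + 3z^2 - 36, using (dx/dy)_z = -f_y/f_x
# and cyclic permutations.
def fx(x, y, z): return 2 * x   # partial derivatives of f
def fy(x, y, z): return 4 * y
def fz(x, y, z): return 6 * z

x, y, z = 1.0, 2.0, 3.0         # a point on the surface f = 0
dxdy = -fy(x, y, z) / fx(x, y, z)
dydz = -fz(x, y, z) / fy(x, y, z)
dzdx = -fx(x, y, z) / fz(x, y, z)
print(round(dxdy * dydz * dzdx, 12))  # -1.0
```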
Number of labeled regular graphs on n vertices

What is known about the number of labeled regular graphs on n vertices? The sequence does not appear to be in the OEIS.
co.combinatorics graph-theory

Welcome to MO! Please include more details regarding what you want to know (exact numbers for small values, bounds for somewhat larger ones, asymptotics [for which range of parameters])? Also, searching the internet one finds various information readily. I assume you already know quite a bit about the problem so it could be better to mention this so that people do not tell you things you already know. – quid Jun 29 '13 at 21:33

Do you mean the number of labeled $k$-regular graphs on $n$ vertices for any $k$? If so, I think you'd find more results by specifying a particular $k$. For example, $2$-regular labeled graphs seem to be counted by A001205. Also note that $3$-regular graphs are called trivalent. That helped me find A006607. – Vince Vatter Jun 30 '13 at 9:49

Enumerating labelled regular (simple) graphs of fixed degree $k$ on $n$ vertices is a notoriously intractable problem. Equivalently, one is enumerating traceless $n\times n$ symmetric $(0,1)$-matrices with row and column sums $k$. The paper of Musiker and Odama at mtholyoke.edu/courses/gcobb/REU_MCMC/papers.html deals with the situation when the matrices are not symmetric nor traceless, but their technique can be adapted to the present situation to give an answer of sorts, but not a very interesting one. – Richard Stanley Jun 30 '13 at 23:25

1 Answer

For fixed $k$, the problem of counting $k$-regular labeled graphs is not intractable at all; the counting sequence is P-recursive, so in principle the sequence is (up to a constant factor) as easy to compute as it could be. (But the complexity grows quickly with $k$.) For $k\le 5$, see

I. P. Goulden and D. M. Jackson, Labelled graphs with small vertex degrees and P-recursiveness. SIAM J. Algebraic Discrete Methods 7 (1986), 60-66.

and for P-recursiveness in the general case see my paper

Ira M. Gessel, Symmetric functions and P-recursiveness. J. Combin. Theory Ser. A 53 (1990), 257-285.
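For very small $n$ one can simply brute-force the count over all edge subsets and compare against the OEIS entries mentioned in the comments. This is my own quick check, not an efficient method (the loop is exponential in $\binom{n}{2}$):

```python
from itertools import combinations

def count_regular(n, k):
    """Brute-force count of labeled k-regular simple graphs on n vertices."""
    all_edges = list(combinations(range(n), 2))
    count = 0
    for m in range(2 ** len(all_edges)):     # every subset of possible edges
        deg = [0] * n
        for i, (u, v) in enumerate(all_edges):
            if m >> i & 1:
                deg[u] += 1
                deg[v] += 1
        count += all(d == k for d in deg)    # keep it iff every degree is k
    return count

print([count_regular(n, 2) for n in range(3, 7)])  # [1, 3, 12, 70]
```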
Trigonometry/For Enthusiasts/Regular Polygons
From Wikibooks, open books for an open world

A regular polygon is a polygon with all its sides the same length and all its angles equal. A polygon can have any number (three or more) of sides, so there are infinitely many different regular polygons.

Interior Angles of Regular Polygons

To demonstrate that a square can be drawn so that each of its four corners lies on the circumference of a single circle: Draw a square and then draw its diagonals, calling the point at which they cross the center of the square. The center of the square is (by symmetry) the same distance from each corner. Consequently a circle whose center is coincident with the center of the square can be drawn through the corners of the square.

A similar argument can be used to find the interior angles of any regular polygon. Consider a polygon of n sides. It will have n corners, through which a circle can be drawn. Draw a line from each corner to the center of the circle so that n equal apex-angled triangles meet at the center; each such triangle must have an apex angle of 2π/n radians. Each such triangle is isosceles, so its other angles are equal and sum to π - 2π/n radians, that is, each other angle is (π - 2π/n)/2 radians. Each corner angle of the polygon is split in two to form one of these other angles, so each corner of the polygon has 2*(π - 2π/n)/2 radians, that is, π - 2π/n radians.

This formula predicts that a square, where the number of sides n is 4, will have interior angles of π - 2π/4 = π - π/2 = π/2 radians, which agrees with the calculation above. Likewise, an equilateral triangle with 3 equal sides will have interior angles of π - 2π/3 = π/3 radians. A hexagon will have interior angles of π - 2π/6 = 4π/6 = 2π/3 radians, which is twice that of an equilateral triangle: thus the hexagon is divided into equilateral triangles by the splitting process described above.
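The interior-angle formula π − 2π/n can be evaluated directly for the three cases worked out in the text:

```python
import math

def interior_angle(n):
    # Interior angle of a regular n-gon, from the derivation above:
    # each corner measures pi - 2*pi/n radians.
    return math.pi - 2 * math.pi / n

for n in (3, 4, 6):
    print(n, interior_angle(n), math.degrees(interior_angle(n)))
# n=3 gives pi/3 (60 degrees), n=4 gives pi/2 (90), n=6 gives 2*pi/3 (120)
```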
A New Approach for Capacity Analysis of Large Dimensional Multi-Antenna Channels
Results 1 - 10 of 13

- and France Télécom Labs, 1979
Cited by 28 (15 self)
Abstract—In this paper, the capacity-achieving input covariance matrices for coherent block-fading correlated multiple input multiple output (MIMO) Rician channels are determined. In contrast with the Rayleigh and uncorrelated Rician cases, no closed-form expressions for the eigenvectors of the optimum input covariance matrix are available. Classically, both the eigenvectors and eigenvalues are computed numerically and the corresponding optimization algorithms remain computationally very demanding. In the asymptotic regime where the number of transmit and receive antennas converge to infinity at the same rate, new results related to the accuracy of the approximation of the average mutual information are provided. Based on the accuracy of this approximation, an attractive optimization algorithm is proposed and analyzed. This algorithm is shown to yield an effective way to compute the capacity achieving matrix for the average mutual information and numerical simulation results show that, even for a moderate number of transmit and receive antennas, the new approach provides the same results as direct maximization approaches of the average mutual information. Index Terms—Multiple input multiple output (MIMO) Rician channels, ergodic capacity, large random matrices, capacity achieving covariance matrices, iterative waterfilling.

2008
Cited by 24 (3 self)
Abstract. Linear statistics of eigenvalues in many familiar classes of random matrices are known to obey gaussian central limit theorems. The proofs of such results are usually rather difficult, involving hard computations specific to the model in question. In this article we attempt to formulate a unified technique for deriving such results via relatively soft arguments. In the process, we introduce a notion of 'second order Poincaré inequalities': just as ordinary Poincaré inequalities give variance bounds, second order Poincaré inequalities give central limit theorems. The proof of the main result employs Stein's method of normal approximation. A number of examples are worked out; some of them are new. One of the new results is a CLT for the spectrum of gaussian Toeplitz matrices.

2007
Cited by 11 (6 self)
Abstract. Consider an N × n random matrix Y_n = (Y^n_ij) where the entries are given by Y^n_ij = (σ_ij(n)/√n) X^n_ij, the X^n_ij being centered, independent and identically distributed random variables with unit variance and (σ_ij(n); 1 ≤ i ≤ N, 1 ≤ j ≤ n) being an array of numbers we shall refer to as a variance profile. We study in this article the fluctuations of the random variable log det(Y_n Y_n* + ρI_N) where Y* is the Hermitian adjoint of Y and ρ > 0 is an additional parameter. We prove that when centered and properly rescaled, this random variable satisfies a Central Limit Theorem (CLT) and has a Gaussian limit whose parameters are identified. A complete description of the scaling parameter is given; in particular it is shown that an additional term appears in this parameter in the case where the 4th moment of the X_ij's differs from the 4th moment of a Gaussian random variable. Such a CLT is of interest in the field of wireless communications. Key words and phrases: Random Matrix, empirical distribution of the eigenvalues, Stieltjes

2008
Cited by 5 (5 self)
This paper is devoted to the performance study of the Linear Minimum Mean Squared Error estimator for multidimensional signals in the large dimension regime. Such an estimator is frequently encountered in wireless communications and in array processing, and the Signal to Interference and Noise Ratio (SINR) at its output is a popular performance index. The SINR can be modeled as a random quadratic form which can be studied with the help of large random matrix theory, if one assumes that the dimension of the received and transmitted signals go to infinity at the same pace. This paper considers the asymptotic behavior of the SINR for a wide class of multidimensional signal models that includes general multiantenna as well as spread spectrum transmission models. The expression of the deterministic approximation of the SINR in the large dimension regime is recalled and the SINR fluctuations around this deterministic approximation are studied. These fluctuations are shown to converge in distribution to the Gaussian law in the large dimension regime, and their variance is shown to decrease as the inverse of the signal dimension.

Cited by 1 (0 self)
This paper is devoted to the study of the ergodic capacity of frequency selective MIMO systems equipped with a MMSE receiver when the channel state information is available at the receiver side and when the second order statistics of the channel taps are known at the transmitter side. As the expression of this capacity is rather complicated and difficult to analyse, it is studied in the case where the number of transmit and receive antennas converge to +∞ at the same rate. In this asymptotic regime, the main results of the paper are related to the design of an optimal precoder in the case where the transmit antennas are correlated. It is shown that the left singular eigenvectors of the optimum precoder coincide with the eigenvectors of the mean of the channel taps transmit covariance matrices, and its singular values are solution of a certain maximization problem.

2008
In this contribution, the capacity-achieving input covariance matrices for coherent block-fading correlated MIMO Rician channels are determined. In contrast with the Rayleigh and uncorrelated Rician cases, no closed-form expressions for the eigenvectors of the optimum input covariance matrix are available. Classically, both the eigenvectors and eigenvalues are computed by numerical techniques. As the corresponding optimization algorithms are not very attractive, an approximation of the average mutual information is evaluated in this paper in the asymptotic regime where the number of transmit and receive antennas converge to +∞ at the same rate. New results related to the accuracy of the corresponding large system approximation are provided. An attractive optimization algorithm of this approximation is proposed and we establish that it yields an effective way to compute the capacity achieving covariance matrix for the average mutual information. Finally, numerical simulation results show that, even for a moderate number of transmit and receive antennas, the new approach provides the same results as direct maximization approaches of the average mutual information, while being much more computationally attractive.

journal homepage: www.elsevier.com/locate/peva
Using cross-system diversity in heterogeneous networks:

810
Linear receivers are considered as an attractive low-complexity alternative to optimal processing for multi-antenna MIMO communications. In this paper we characterize the performance of MIMO linear receivers in two different asymptotic regimes. For fixed number of antennas, we investigate the Diversity-Multiplexing Tradeoff (DMT), which captures the outage probability (decoding block-error probability) in the limit of high SNR. For fixed SNR, we characterize the outage probability for a large (but finite) number of antennas. As far as the DMT is concerned, we report a negative result: we show that both linear Zero-Forcing (ZF) and linear Minimum Mean-Square Error (MMSE) receivers achieve the same DMT, which is largely suboptimal even though outer coding and decoding is performed across the antennas. We also provide an approximate quantitative analysis of the different behavior of the MMSE and ZF receivers at finite rate and non-asymptotic SNR, and show that while the ZF receiver achieves poor diversity at any finite rate, the MMSE receiver error curve slope flattens out progressively, as the coding rate increases. When SNR is fixed and the number of antennas grows large, we show that the mutual information at the output of a MMSE or ZF linear receiver has fluctuations that converge in distribution to a Gaussian random variable, whose mean and variance can be characterized in closed form. This analysis extends to the linear receiver case a result that was previously obtained for the optimal receiver. Simulations reveal that the asymptotic analysis captures accurately the outage behavior of systems even with a

712
Precise characterization of the mutual information of MIMO systems is required to assess the throughput of wireless communication channels in the presence of Rician fading and spatial correlation. Here, we present an asymptotic approach allowing to approximate the distribution of the mutual information as a Gaussian distribution in order to provide both the average achievable rate and the outage probability. More precisely, the mean and variance of the mutual information of the separately-correlated Rician fading MIMO channel are derived when the number of transmit and receive antennas grows asymptotically large and their ratio approaches a finite constant. The derivation is based on the replica method, an asymptotic technique widely used in theoretical physics and, more recently, in the performance analysis of communication (CDMA and MIMO) systems. The replica method allows to analyze very difficult system cases in a comparatively simple way though some authors pointed out that its assumptions are not always rigorous. Being aware of this, we underline the key assumptions made in this setting, quite similar to the assumptions made in the technical literature using the replica method in their asymptotic analyses. As far as concerns the convergence of the mutual information to the Gaussian distribution, it is shown that it holds under some mild technical conditions, which are tantamount to assuming that the spatial correlation structure has no asymptotically dominant eigenmodes. The accuracy of the asymptotic approach is assessed by providing a sizeable number of numerical results.

2011
"Contributions à l'estimation aveugle et semi-aveugle et analyse de performances" (Contributions to blind and semi-blind estimation and performance analysis)
Measuring Mountains
A geographical surveyor places a marker at a position such that the angle between the horizontal and the top of a mountain is 45°. After driving 1 km away from the mountain, the angle between the horizontal and the top of the mountain is 30°. Use the given information to find the height of the mountain.
Problem ID: 132 (Nov 2003) Difficulty: 2 Star
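A worked solution (my own, not part of the puzzle page): let h be the height and d the horizontal distance at the marker. Then tan 45° = h/d gives h = d, and tan 30° = h/(d + 1) = h/(h + 1), which solves to h = tan 30° / (1 − tan 30°) = (√3 + 1)/2 ≈ 1.366 km:

```python
import math

# At the marker: tan(45°) = h/d  ->  h = d.
# 1 km farther out: tan(30°) = h/(d + 1) = h/(h + 1).
# Solving h = tan(30°) * (h + 1) for h:
t = math.tan(math.radians(30))
h = t / (1 - t)
print(h)                       # ~1.366 km
print((math.sqrt(3) + 1) / 2)  # closed form for comparison
```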
Faster velocity than the speed of light?
Thank you all for the replies. I feel that I understand the basic concept now. By the way, can anyone recommend a good introductory physics book explaining time dilation and related topics?
It's not an "introductory physics" book, but I really like the presentation of SR in "A first course in general relativity" by Schutz. It doesn't go very deep into it, but it covers all the basics very well. I don't like that book's presentation of GR however. It's considered the easiest intro to GR, but it's easy because it does everything it can to avoid explaining differential geometry.
Your question in the OP seems to have been answered, but I'll offer my thoughts anyway, since you got the answer in bits and pieces. Perhaps this will make it easier for the next person who asks this.
I think the easiest way to prove that the right-hand side of the velocity addition formula is in the interval (-1,1) for all u,v<1 is to define ##\theta(r)=\tanh^{-1}(r)## for all ##r\in\mathbb R##, and use the identity $$\frac{\tanh x+\tanh y}{1+\tanh x\tanh y}=\tanh(x+y)$$ and the fact that ##|\tanh x|<1## for all ##x\in \mathbb R##. $$|w| =\left|\frac{\tanh\theta(u)+\tanh\theta(v)}{1 +\tanh\theta(u)\tanh\theta(v)}\right| =\left|\tanh(\theta(u)+\theta(v))\right|<1.$$
Limits with inequalities

October 9th 2011, 12:22 PM #1
Hi, I am just wondering if I have the correct understanding of the questions below (in image). For question 1, I got that the highest value of the function is 19. I got this by subbing in 7.5, because |x - 7| < 0.5 means it is looking 0.5 units within x = 7. So the highest value has to be when x is either 6.5 or 7.5. It turns out f(7.5) = 19 was the highest value. Therefore, the answer is 19, correct? I am wondering because the interval does not include 7.5, so the highest value of the function has to be the limit of f(x) as x approaches 7.5, which is 19. For question two: the 0.3 units of 22. The 22 is the y-value of the function, correct?

October 9th 2011, 01:01 PM #2
Re: Limits with inequalities
If $|x-7|<0.5$ that means $x\in(6.5,7.5).$ Now on an open interval $\left| {\frac{{x + 2}}{{x - 8}}} \right|$ cannot have a maximum. BUT you are right, 19 is an upper bound.

October 9th 2011, 01:18 PM #3
Re: Limits with inequalities
Just as I was thinking a few minutes ago. So the lower bound is 17/3 while the upper bound is 19. The highest value between these is no value at all, because you can always get closer to 19 while not touching 19. Thanks for the aid. With question 2 though: the y-value (the limit) is 22, correct?

October 9th 2011, 02:24 PM #4
Re: Limits with inequalities
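A quick numerical illustration of the reply's point (my own check, not from the thread): sampling |(x+2)/(x−8)| strictly inside (6.5, 7.5) shows the values squeezed between 17/3 and 19 without either bound ever being attained:

```python
# Sample |(x+2)/(x-8)| on the open interval (6.5, 7.5) from |x-7| < 0.5.
f = lambda x: abs((x + 2) / (x - 8))
xs = [6.5 + i * 1e-4 for i in range(1, 10000)]   # interior points only
vals = [f(x) for x in xs]
print(min(vals), max(vals))  # min just above 17/3 ≈ 5.667, max just under 19
print(f(6.5), f(7.5))        # the endpoint values themselves: 17/3 and 19
```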
1/3 in 1/5 in and 1/4 added is

you have to make common denominators, then you add

take the L.C.M. of the denominators of the 3 fractions... here, the L.C.M. of 3, 5 and 4 is 60... then, you make all the denominators 60 and so multiply the numerators with the required number... for instance in 1/3, 1 is multiplied by 20, since 20 multiplied by 3 gives you 60... do the same for the rest of them and add all the numerators...
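Following that LCM recipe step by step (and cross-checking with exact rational arithmetic) gives 47/60:

```python
from fractions import Fraction
from math import lcm

# Common denominator = lcm(3, 5, 4) = 60; scale each numerator, then add.
den = lcm(3, 5, 4)
num = sum(den // d for d in (3, 5, 4))   # 20 + 12 + 15
print(num, "/", den)                     # 47 / 60
print(Fraction(1, 3) + Fraction(1, 5) + Fraction(1, 4))  # 47/60
```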
Posts by Posts by rani Total # Posts: 36 How do I find the image of O (0,0) after two reflections, first across x=a and then across y=b? an object at rest explodes into three pieces of equal mass. one moves east at 20 m/s a second moves southeast at 30 m/s what is the velocity of the third piece?? A hobby shop has 4 times as manymodel airplanes as model helicopters.The total number of model airplanesand helicopters is 35. How manymodel airplanes does the hobby shophave? beth travels 244 miles everyweek. she says she will travel 2,196 miles if she travels the same number of miles for the next 9 weeks. chemistry (buffer) If 5.00 mL of 1.60 nitric acid is added to a solution of 2.00 mmol thiocyanic acid (HSCN pKa=4.00) and 4.500 mmol calcium thiocyanide, assume a soluble salt- what is teh pOG of the resultant complex solution (of total volume .350L). I worked the problem out and ended up gettin... a body cools from 80 degrees to 50 degrees in 5 minuts. calculate what time it will take to cool from 60 degrees to 30 degrees? I want to have the details of the experiment title: Determine the equilibrium constant of a reversible reaction involving the iodine and the tri iodate ion reaction and also determine the gibbs free energy of the reaction. Sorry the "Total Items $$ Total" was supposed to be 3 columns. Okay, I have figured this out. Total Items $$ Total Books 175-v 5 5(175-v) Videos v 6 6v Total 175 --- $910 5(175-v) + 6v = 910 875- 5v + 6v = 910 1v (or just v) = 910-875 v = 35, or 35 videos 35 videos at $6 each = $200 sold $910-200 = $710 left for books $710/$5 (books) = 14... Used books are being sold at $5 and used videos are being sold at $6. There are 175 total items sold and a total of $910 collected. How many videos were sold? Reasoning. If you live 71mi from a river, does it make sense to say you live about 80 mi from the river? 
Explain unscramble ecrfenine MATH grade 12 Determine the values of a M and N , so that the polynomial 2x^3+mx^2+nx-3 and X^3-3mx^2+2nx+4 are both divisible by x-2. I am so lost, i do not know how to start this problem! please help i have a test soon! advance functions Determine the values of a M and N , so that the polynomial 2x^3+mx^2+nx-3 and X^3-3mx^2+2nx+4 are both divisible by x-2. I am so lost, i do not know how to start this problem! please help i have a test soon! A gaseous reaction occurs at a constant pressure of 50.0 atm and releases 55.6 kJ of heat. Before the reaction, the volume of the system was 7.60 L. After the reaction, the volume of the system was 2.00 L. Calculate the total internal energy change, Delta U, in kilojoules. In lab we did thermochemistry, where we are supposed to find the heats of solution, heat change of calorimeter and molar heat solution of the salt. I calculated that the calorimeter constant for the water we had was 883.194 Joules. We took 1/10 of a mole of NH4Cl and tried to ... if cot theta =7/8 , evaluate (1+sin^2theta)(1-sin^2theta)/(1+cos^2theta)(1+cos-^2theta) the answer should be 49/64 but i couldnt get that answer. can any one help? if cot theta =7/8 , evaluate (1+sin^2theta)(1-sin^2theta)/(1+cost^2theta)(1+co-^2theta) the answer should be 49/64 but i couldnt get that answer. can any one help? Four billion years ago the Sun s radiative output was 30% less than it is today. (i) If we assume the radius of the sun is the same, and that the Earth s atmosphere was the same as it is now (that is, the atmosphere absorbs 10% of the incoming solar radiation and 80%... PROBLEMS: 1. In a measurement of precipitation (rainfall), a raingauge with an orifice (collector) area of 30.50 in2 collects 1.30 litres of water over a period of 26 minutes and 45 seconds.[10] Calculate: - The depth (amount) of rain that fell (in mm). - The intensity (mm h-1... A)The depth (amount) of rain that fell (in mm). b. 
The intensity (mm h^-1) at which the rain fell. c. The volume (m^3) of water that would have fallen on an area of 7.00 acres. d. The discharge (m^3 s^-1) that would occur if all the water ran off the area in part c in 3.00 hours ...
- Science, please help: If a population consists of 10,000 individuals at time t = 0 years (P0), and the annual growth rate (excess of births over deaths) is 3% (GR), what will the population be after 1, 15 and 100 years (n)? Calculate the "doubling time" for this growth rate. Given this grow...
- Math - please elaborate: Can someone please further explain my question? It's just a bit farther down. The subject is "Math - help really needed". I think I almost got the answer, but I'm not sure where to go from there. I really appreciate all the help so far as well; thank you. :)
- Math - help really needed: I haven't learned what cot and csc are. Can I still solve this with just tan, cos, and sin?
- Math - help really needed: But if I have identical denominators, why can't I just add the 1 to the tan^2 x?
- Math - help really needed: I'm really sorry. I don't want to sound stupid, but when I add tan^2 x / tan^2 x to 1/tan^2 x I get (1 + tan^2 x)/tan^2 x, and then the tans cancel out and I'm left with just one. Obviously, that's not correct, but I can't find my mistake.
- Math - help really needed: Okay, I'll try it out, thank you.
- Math - help really needed: Wait, for the first one, if sin^2 + cos^2 = 1, how did you even cancel some out?
- Math - help really needed: Oh, OK, I see. Thank you all very much. I think I get it now, so hopefully I can actually get them on my own. :)
- Math - help really needed: I'm sorry to double post; I don't want to seem impatient, but I really need help with this. Prove each identity:
  1. 1 + 1/tan^2 x = 1/sin^2 x
  2. 1/cos x - cos x = sin x tan x
  3. 1/sin^2 x + 1/cos^2 x = 1/(sin^2 x cos^2 x)
  4. 1/(1 - cos x) + 1/(1 + cos x) = 2/sin^2 x
  5. (1 - cos^2 x)(1 + 1/tan^2 x) = 1
  I haven't even gotten 'round to any of the questions because the first one is just so hard. I'm not really sure I...
- 15 percent of 50
- Acids and alkalis: Please help me. Please tell me some bases (alkalis) other than cleaning products.
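The first identity the student is stuck on follows in two steps from the Pythagorean identity sin^2 x + cos^2 x = 1:

```latex
1+\frac{1}{\tan^2 x}
  = 1+\frac{\cos^2 x}{\sin^2 x}
  = \frac{\sin^2 x+\cos^2 x}{\sin^2 x}
  = \frac{1}{\sin^2 x}.
```

The same recipe (rewrite everything in terms of sin and cos, then apply sin^2 x + cos^2 x = 1) disposes of the other four.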
Proof that a finitely generated projective module over a von Neumann regular ring is free

I'm searching for a proof that a finitely generated projective module over a von Neumann regular ring, such that all the localizations have the same rank, is free. I know that this result is true, because a friend of mine proved it when trying to make an intuitionistic proof of Serre's conjecture; however, he did not publish it, and he lost the proof, which must be located among the piles of paper in his room. Thanks in advance.

Comments:
- Look in Ken Goodearl's book on von Neumann regular rings. – Nik Weaver, May 24 '13
- Is it true though? A product of fields is a commutative VNR ring, but you can easily find a principal ideal which is not free. – Fred.Fred, May 24 '13
- Oops, I will delete my wrong answer. Thanks to Fred.Fred and Torsten! – Fred Rohrer, May 24 '13
- Oops, I forgot to put that the rank is constant at each localization. – user17868, Jun 3 '13
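Fred.Fred's comment can be spelled out with the smallest example (a standard one, not taken from the thread). Take a product of two fields:

```latex
R=k\times k,\qquad e=(1,0),\qquad P=Re=k\times\{0\}.
```

Then P is projective, since R = Re ⊕ R(1-e), but not free: its localizations at the two maximal ideals have ranks 1 and 0, while a nonzero free module has the same rank at every localization. This is exactly why the constant-rank hypothesis, restored in the last comment, is needed.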
Summary: Real Scaled Matching
Amihood Amir (Bar-Ilan University and Georgia Tech), Ayelet Butman (Bar-Ilan University), Moshe Lewenstein (Bar-Ilan University)

Scaled matching refers to the problem of finding all locations in the text where the pattern, proportionally enlarged according to an arbitrary integral scale, appears. Scaled matching is an important problem that was originally inspired by problems in ... However, in real life, a more natural model of scaled matching is the Real Scaled Matching model. Real scaled matching is an extended version of the scaled matching problem allowing arbitrary real-sized scales, approximated by some function, e.g. ... It has been shown that the scaled matching problem can be solved in linear time. However, even though there has been follow-up work on the problem, it remained an open question whether real scaled matching could be solved faster than the simple solution of O(nm) time, where n is the text size and m is the pattern size. Using a new approach we show how to solve the real scaled matching problem in linear time.
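To make the problem statement concrete, here is a naive one-dimensional sketch (my own illustration, not the paper's algorithm): a pattern occurs at integral scale k if the text contains the pattern with every symbol repeated k times. This is the simple O(nm)-style baseline the summary refers to.

```python
def scaled_occurrences(text, pattern):
    """Report every (position, integer scale k) at which the pattern,
    with each symbol repeated k times, occurs in the text."""
    hits = []
    n, m = len(text), len(pattern)
    for k in range(1, n // m + 1):
        scaled = "".join(ch * k for ch in pattern)   # pattern enlarged by k
        for i in range(n - len(scaled) + 1):
            if text.startswith(scaled, i):
                hits.append((i, k))
    return hits

print(scaled_occurrences("aaabbb", "ab"))   # [(2, 1), (1, 2), (0, 3)]
```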
Is the trivial solution the only solution?

Let n be a positive integer, and let c_1, c_2, ..., c_n be (unknown) real numbers. Consider the system
$$c_1+c_2+ \dots +c_n=0,$$
$$c_1^2+c_2^2+ \dots +c_n^2=0,$$
$$c_1^3+c_2^3+ \dots +c_n^3=0,$$
$$\vdots$$
$$c_1^n+c_2^n+ \dots +c_n^n=0.$$
The question is: is the trivial solution the only solution to the system?

Tags: linear-algebra, ra.rings-and-algebras

Comments:
- en.wikipedia.org/wiki/Vandermonde_matrix. Voting to close as not research-level (for which see the FAQ). – Steve Huntsman, Aug 23 '12
- The problem is more interesting over the complex numbers, for there sums of squares obey different constraints than for real numbers. – Gerhard "Has Solution For Many Equations" Paseman, Aug 23 '12
- I didn't note the comment. How do you use the Vandermonde determinant to show that all $c_j$ are zero? – Pietro Majer, Aug 23 '12

[Closed as off topic by Steve Huntsman, Andres Caicedo, Federico Poloni, Benjamin Steinberg, algori, Aug 23 '12.]

Answer (accepted):

Yes, assuming we are in a field. A first consequence is that all the symmetric functions of $(c_1,\dots,c_n)$ are zero (see, e.g., Newton's identities). But this can be written as an identity of polynomials
$$x^n=\prod_{j=1}^n(x-c_j),$$
which obviously implies $c_j=0$ for all $j$.

- Or, alternatively, from $\prod_{j=1}^n c_j=0$ one gets that at least one $c_j$ vanishes; and iterating, that they all vanish. – Pietro Majer, Aug 23 '12
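The step from vanishing power sums to vanishing symmetric functions can be made concrete with Newton's identities, k*e_k = sum_{i=1}^{k} (-1)^(i-1) e_{k-i} p_i, which are valid over any field of characteristic zero (so dividing by k is safe). A small sketch:

```python
from fractions import Fraction

def elementary_from_power_sums(p):
    """Recover e_1..e_n from the power sums p_1..p_n via Newton's identities
    (characteristic zero, so the division by k below is legitimate)."""
    p = [Fraction(x) for x in p]
    e = [Fraction(1)]                      # e_0 = 1
    for k in range(1, len(p) + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e.append(s / k)
    return e[1:]

n = 5
es = elementary_from_power_sums([0] * n)   # p_1 = ... = p_n = 0
print(es)   # every elementary symmetric function vanishes, so x^n = prod (x - c_j)
```

A quick sanity check: for the roots 1 and 2 the power sums are p_1 = 3, p_2 = 5, and the routine returns e_1 = 3, e_2 = 2, matching (x-1)(x-2) = x^2 - 3x + 2.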
Approximation algorithms for geometric TSP

1. Journal of Machine Learning Research, 2006 (cited by 6): "Given a matrix of values in which the rows correspond to objects and the columns correspond to features of the objects, rearrangement clustering is the problem of rearranging the rows of the matrix such that the sum of the similarities between adjacent rows is maximized. Referred to by various names and reinvented several times, this clustering technique has been extensively used in many fields over the last three decades. In this paper, we point out two critical pitfalls that have been previously overlooked. The first pitfall is deleterious when rearrangement clustering is applied to objects that form natural clusters. The second concerns a similarity metric that is commonly used. We present an algorithm that overcomes these pitfalls. This algorithm is based on a variation of the Traveling Salesman Problem. It offers an extra benefit as it automatically determines cluster boundaries. Using this algorithm, we optimally solve four benchmark problems and a 2,467-gene expression data clustering problem. As expected, our new algorithm identifies better clusters than those found by previous approaches in all five cases. Overall, our results demonstrate the benefits of rectifying the pitfalls and exemplify the usefulness of this clustering technique. Our code is available at our websites."

2. "The Traveling Salesman Problem (TSP) is perhaps the most studied discrete optimization problem. Its popularity is due to the facts that TSP is easy to formulate, difficult to solve, ..."

3. 2012: "We propose a framework of lower bounds for the asymmetric traveling salesman problem (TSP) based on approximating the dynamic programming formulation with different basis vector sets. We discuss how several well-known TSP lower bounds correspond to intuitive basis vector choices and give an economic interpretation wherein the salesman must pay tolls as he travels between cities. We then introduce an exact reformulation that generates a family of successively tighter lower bounds, all solvable in polynomial time. We show that the base member of this family yields a bound greater than or equal to the well-known Held-Karp bound, obtained by solving the linear programming relaxation of the TSP's integer programming arc-based formulation."
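The dynamic programming formulation mentioned in the last abstract is the classical Held-Karp recursion. As a concrete reference point, here is a minimal exact-TSP sketch of it (O(n^2 * 2^n), so only for small instances):

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP tour cost via the Held-Karp dynamic program.
    dist[i][j] = cost of travelling i -> j; the tour starts and ends at city 0."""
    n = len(dist)
    # C[(S, j)] = cheapest path starting at 0, visiting every city in the
    # frozenset S exactly once, and ending at j (0 not in S, j in S).
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in map(frozenset, combinations(range(1, n), size)):
            for j in S:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j] for k in S - {j})
    full = frozenset(range(1, n))
    return min(C[(full, j)] + dist[j][0] for j in full)

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(held_karp(dist))   # 21, the tour 0 -> 2 -> 3 -> 1 -> 0
```

The lower-bound framework in the abstract approximates exactly this state space, since the exponential table C is what makes the exact recursion intractable for large n.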
1. Find the value of v in the equation: 7/2 + v = 15/4.
2. Subtract 5/11 from 16/33.
3. Tommy bought 13/20 lb of yogurt. He ate 3/10 lb of it. How much is left?
4. Every month, Brian spends $1/2 on food, and Jake, $5/4. What is the difference between their spendings on food?
5. Laura uses 16/15 yd of fabric to make an outfit. Brian uses 8/5 yd to make the same outfit. How many yards of fabric does Brian use in excess of that used by Laura?
6. 11/5 cups of cake recipe requires 13/6 cups of cocoa and the rest flour. How much flour is required in the recipe?
7. Francis drives his car from Chicago to Detroit and then to Cleveland. He reached Cleveland in 51/12 hours. If he took 10/3 hours to travel from Chicago to Detroit, then how much time did he take to travel from Detroit to Cleveland?
8. Simplify the expression: 8/7 - 17/28.
9. Nancy bought 15/16 of a yard of a fabric. She used only 1/4 of a yard of the fabric. How much fabric is left?
10. Sam walks 2/3 mile and Jake walks 6/7 mile in one hour. Find the distance between Jake and Sam at the end of one hour if both Jake and Sam start walking at the same time.
11. Tim was recording the growth of a plant. Its height after the first week was 1/2 in. and at the end of the second week was 3/4 in. How much did the plant grow by the end of the second week when compared to the first week?
12. Find the difference between the unlike terms 11/6 and 5/3.
13. Subtract 2/5 from 13/10.
14. Lydia uses 9/16 yd of fabric to make an outfit. Jake uses 5/4 yd to make the same outfit. How many yards of fabric does Jake use in excess of that used by Lydia?
15. Rose bought 11/12 of a yard of a fabric. She used only 1/3 of a yard of the fabric. How much fabric is left?
16. Find the difference between the unlike terms 11/4 and 5/2.
17. Simplify the expression: 3/5 - 4/25.
18. Subtract 5/16 from 15/32.
19. Ethan bought 13/20 lb of yogurt. He ate 3/10 lb of it. How much is left?
20. Every month, Andrew spends $3/4 on food, and Brad, $11/8. What is the difference between their spendings on food?
21. Nancy uses 14/9 yd of fabric to make an outfit. Andrew uses 7/3 yd to make the same outfit. How many yards of fabric does Andrew use in excess of that used by Nancy?
22. 9/4 cups of cake recipe requires 11/5 cups of cocoa and the rest flour. How much flour is required in the recipe?
23. Victor drives his car from Chicago to Detroit and then to Cleveland. He reached Cleveland in 67/16 hours. If he took 13/4 hours to travel from Chicago to Detroit, then how much time did he take to travel from Detroit to Cleveland?
24. Brian walks 2/3 mile and Gary walks 6/7 mile in one hour. Find the distance between Gary and Brian at the end of one hour if both Gary and Brian start walking at the same time.
25. Find the value of v in the equation: 5/2 + v = 13/4.
26. Find: 7/6 - 1/3.
27. Find: 11/6 - 3/2.
28. Find: 3/4 - 1/8.
29. 1/2 - 1/6 = ?
30. 3/5 - 4/10.
31. Lauren bought 11/12 of a yard of a fabric. She used only 1/3 of a yard of the fabric. How much fabric is left?
32. Gary was recording the growth of a plant. Its height after the first week was 4/5 in. and at the end of the second week was 9/10 in. How much did the plant grow by the end of the second week when compared to the first week?
33. Find: 1/2 - 1/10.
34. Simplify the expression: 6/5 - 7/10.
35. Simplify the expression: 5/6 - 1/3.
36. Find: 5/8 - 1/2.
37. Find: 2/5 - 3/10.
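Reading the run-together numerals as fractions (for example "1320 lb" as 13/20 lb), every problem above is a single subtraction over a common denominator, which Python's fractions module reproduces exactly. The yogurt problem and a missing-addend equation, as a sketch:

```python
from fractions import Fraction

# yogurt: 13/20 lb bought, 3/10 lb eaten
left_over = Fraction(13, 20) - Fraction(3, 10)
print(left_over)            # 7/20

# missing addend: 7/2 + v = 15/4
v = Fraction(15, 4) - Fraction(7, 2)
print(v)                    # 1/4
```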
Karen is using an indirect method to prove that segment DE is not parallel to segment BC in the triangle ABC shown. She starts with the assumption that segment DE is parallel to segment BC. Which inequality will she use to contradict the assumption?
- 6:2 ≠ 7:10
- 6:8 ≠ 7:10
- 6:7 ≠ 2:7
- 6:2 ≠ 3:2
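The figure is not reproduced here, so the particular numbers cannot be checked, but the logic behind every answer choice is the triangle proportionality (side-splitter) theorem. Assuming D lies on AB and E on AC:

```latex
\overline{DE}\parallel\overline{BC}
  \iff
\frac{AD}{DB}=\frac{AE}{EC}.
```

Assuming DE is parallel to BC therefore forces an equality of the two side ratios; showing from the labelled lengths that the ratios are in fact unequal is the contradiction Karen needs.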
boundedCellVoronoi calculates a modified Voronoi diagram of a set of points in the x-y plane. A temporary layer of artificial cells is added outside the convex hull of the points, and the Voronoi diagram is computed on the extended set. The cells corresponding to the artificial sites are then removed to generate the bounded cell Voronoi diagram.

boundedCellVoronoi[points, diameter, options]: if diameter is <= 0, then the mean Delaunay triangulation edge length is used for the diameter. boundedCellVoronoi[points, options] is equivalent to boundedCellVoronoi[points, -1, options].

visualization and visualize3D use boundedCellVoronoi. boundedCellVoronoi is also an option for visualization, and can be used to provide a copy of the bounded cell Voronoi diagram as input to visualization to improve processing time.

The following figures illustrate the concepts in two dimensions (click on any figure for a larger view; the example notebook illustrates how to generate them):
- A set of 50 randomly placed points in a circular region.
- Finite-length edges of the Voronoi diagram.
- Finite-length edges, colored regions, and points.
- The bounded cell Voronoi diagram of the same set of points; finite edges are shown in pink to illustrate where they were truncated.
- Colored bounded cell Voronoi diagram.
- A 3-dimensional set of points; the box is shown to give some perspective to the figure.
- The bounded cell Voronoi diagram in 3 dimensions; each cell is shown randomly colored.
- The bounded cell Voronoi diagram showing only the cell edges and centers.
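The padding step is the distinctive part of the algorithm and is easy to sketch outside Mathematica. The ring placement below is my own reconstruction, not the package's code (in particular, the package derives the default diameter from the mean Delaunay edge length, which is only crudely approximated here). Feeding the padded site list to any Voronoi routine, e.g. scipy.spatial.Voronoi, and discarding the regions of the artificial sites yields the bounded cell diagram:

```python
import math

def padded_points(points, diameter=-1.0):
    """Surround 2-D data points with a ring of artificial sites so that every
    real site's Voronoi cell becomes bounded. Returns (all sites, count of
    real sites); cells of sites past that count are to be discarded."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radius = max(math.hypot(p[0] - cx, p[1] - cy) for p in points)
    if diameter <= 0:                                  # crude stand-in scale
        diameter = 2 * radius / max(1, math.isqrt(len(points)))
    ring_r = radius + diameter
    n_ring = max(8, math.ceil(2 * math.pi * ring_r / diameter))
    ring = [(cx + ring_r * math.cos(2 * math.pi * k / n_ring),
             cy + ring_r * math.sin(2 * math.pi * k / n_ring))
            for k in range(n_ring)]
    return list(points) + ring, len(points)

pts = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]
all_pts, n_real = padded_points(pts)
print(len(all_pts) - n_real, "artificial sites added")
```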
easy geometry!

Well, my teacher gave a solution, but as my trig is horrible: we know $\angle PAB, \angle PBA, \angle QBC, \angle QCB, \angle RAC, \angle ACR = 30^\circ$. Now $AP \sin 30^\circ + CQ \sin 30^\circ = \frac{1}{2} AC = AR \sin 30^\circ$, which implies $AP + CQ = AR$, or $AP + CQ - AR = 0$. It may be very trivial, but I don't understand why $AP \sin 30^\circ + CQ \sin 30^\circ = \frac{1}{2} AC = AR \sin 30^\circ$. Please help!

Fine, so we want to use trig. Very well. We know the magnitude of all the angles because we have similar triangles here; they are all isosceles, and hence the other angles are 30 degrees each. Construct two perpendicular lines to AC, one passing through P, the other through Q. That will give you the sine relationships your professor mentioned (remember SOHCAHTOA!). The fact that the sum is $\frac{1}{2}AC$ goes back to what I was saying about similar triangles with the factor of $\frac{1}{2}$. Note that $\triangle APB$ and $\triangle BQC$ are both isosceles triangles with the same side lengths: similar triangles with exactly the same magnitude. B bisects AC (why?), and so the bases of triangles $APB$ and $BQC$ are each half the length of the base of $\triangle ARC$, which is also a similar triangle. Since these are similar, corresponding sides are proportional, and knowing $\triangle ARC$ is double the magnitude of the two other triangles, we know the exact proportion. The upshot of all this is: $AP + CQ - AR = \frac{1}{2}AR + \frac{1}{2}AR - AR = 0$, since AP and CQ are corresponding sides to AR with half its magnitude. Now I spouted a lot of unproven claims here; it's up to you to fill in the blanks.

Hi earthboy, is B supposed to be the midpoint of AC?

The question says B is any point on the line AC. But then "B bisects AC (why?)"... omg, it's becoming more confusing for me. Sorry if I am being silly :-o

Hi earthboy, with B as any point on a line segment AC, the statement AP + CQ - AR = 0 can be proved by easy geometry (no trig required). Other info as you described.

The problem says B is a point on AC, not "any" point. It ends up being the midpoint with the way the rest of the problem is set up. The secret that tells us this lies in the angles APB and ...

Hello jhevon, a point on a line can be placed anywhere if there is no spec given. To draw a diagram for a proof, let's make AB = 4 cm, BC = 6 cm, AC = 10 cm. Points P, Q, R lie on the perpendicular bisectors of AB, BC, AC respectively. Three 30-30-120 isosceles triangles are erected as described. Extend AP and QC to meet the perpendicular bisector of AC at S above AC. ARCS and PBQS are parallelograms, and AP + QC = AR. The whole is the sum of the parts.

Yes... sorry if I am being a moron. Suppose AP and QC, when extended, meet at S. Can we be sure that the perpendicular bisector of AC always passes through the point S? Another doubt: is the original trig solution OK?

Hi earthboy, did you draw the diagram? Triangle ACS is an isosceles triangle congruent to ARC. AP and CQ extended meet at S on the perpendicular bisector. I cannot believe you received the trig relations from a math teacher.
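The thread's midpoint debate can be checked numerically. With 30-30-120 isosceles triangles erected on AB, BC and AC (so each leg equals its base divided by 2 cos 30 degrees, i.e. base over sqrt(3)), the identity AP + CQ - AR = 0 holds for any position of B, midpoint or not, since (AB + BC)/sqrt(3) = AC/sqrt(3). A small sketch, assuming that construction:

```python
import math

def leg(base):
    """Leg of a 30-30-120 isosceles triangle erected on the given base."""
    return base / (2 * math.cos(math.radians(30)))   # = base / sqrt(3)

AC = 10.0
for AB in (4.0, 2.5, 7.9):               # B anywhere on AC, as in the forum example
    BC = AC - AB
    AP, CQ, AR = leg(AB), leg(BC), leg(AC)
    assert math.isclose(AP + CQ - AR, 0.0, abs_tol=1e-12)
print("AP + CQ - AR = 0 for every tested position of B")
```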
{"url":"http://mathhelpforum.com/geometry/163788-easy-geometry.html","timestamp":"2014-04-17T16:45:30Z","content_type":null,"content_length":"72438","record_id":"<urn:uuid:2e6c4839-2ab8-4ac4-97a1-9ff1994e8d99>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
Exponent of metacyclic groups up vote 1 down vote favorite I am interested in the following related questions in metacyclic groups of the form $\mathbb{Z}_n \ltimes_r \mathbb{Z}_m$, where $r^n \equiv 1 \pmod{m}$: 1. The order of an arbitrary element $g = (\alpha, 0)*(0, \beta)$ - or some upper bound on the order - where * is the group operation. 2. The exponent of the group I know that the first question reduces to finding the smallest integer $k$ such that: $k \alpha \equiv 0\pmod{n}$, and $\beta \frac{r^{k \alpha} - 1}{r^\alpha - 1} \equiv 0 \pmod{m}$, but that's about it. Thank you very much in advance. gr.group-theory nt.number-theory It seems like a homework. Voted to close. – Mark Sapir Mar 1 '12 at 15:41 It is not homework. If it looks so easy to you, could you please give me some hint? – Hebert Mar 1 '12 at 16:30 This question was posted two days ago at MathStackExchange, and it hasn't got any answer so far. That's why I have posted it here. It is a legitimate research question. – Hebert Mar 1 '12 at 16:57 2 See the paper C. E. Hempel, Metacyclic groups, Comm. Algebra 28 (2000), no. 8, 3865--3897. In particular, Lemma 2.1 gives the answer to your question. – Primoz Mar 1 '12 at 19:34 I still haven't got access to the paper, but thank you very much anyway. I'm looking forward to see it. – Hebert Mar 1 '12 at 21:23 show 2 more comments 1 Answer active oldest votes Here is an answer which is probably far from optimal (I am no expert). Let $$ t:=\mathrm{ord}_m r, \qquad k:=\mathrm{lcm}\left(\frac{n}{\gcd(n,\alpha)},\frac{mt}{\gcd(t,\alpha)}\right), $$ then clearly $k\alpha\equiv 0\pmod{n}$, and I claim that $\frac{r^{k\alpha}-1}{r^\alpha-1}\equiv 0\pmod{m}$. For the latter observe that $$ \alpha\ \Big|\frac{t\alpha}{\gcd(t,\ alpha)}\quad\text{and}\quad \frac{mt\alpha}{\gcd(t,\alpha)}\ \Big|\ k\alpha, $$ so that $$ \frac{r^\frac{mt\alpha}{\gcd(t,\alpha)}-1}{r^\frac{t\alpha}{\gcd(t,\alpha)}-1}\ \Big|\ \frac{r ^{k\alpha}-1}{r^\alpha-1}. 
$$ up vote 1 down vote So it suffices to show that the left hand side is divisible by $m$. The fraction equals $$ \sum_{j=0}^{m-1} r^\frac{jt\alpha}{\gcd(t,\alpha)}. $$ Here each exponent is divisible by $t$, accepted hence each term in the sum is $\equiv 1\pmod{m}$. There are $m$ terms, hence the sum is divisible by $m$ as claimed. It also follows that the exponent of the group divides $\mathrm{lcm}(n,mt)$. Note that the last quantity is in between $\mathrm{lcm}(n,m)$ and $nm$. It is indeed helpful; thank you very much. – Hebert Mar 1 '12 at 20:53 Actually in the definition of $k$ one can lower $m$ to $m/\gcd(m,\beta)$. – GH from MO Mar 1 '12 at 21:05 1 Well, it turns out that the exponent of the group is exactly $\mbox{lcm}(n,m)$. – Hebert Mar 2 '12 at 21:56 @Hebert: Thank you. Can you provide an explanation, e.g. as a response to your own question? From Hempel's Lemma 2.1 the answer is not obvious to me, e.g. I don't know the values $k,l,m,n$ in your situation. An elementary number theoretic argument would be even better. – GH from MO Mar 3 '12 at 10:08 @GH: In our case, $k=n$, $m=m$, $l=m$, and $n=r$ (the left-hand sides are Hempel's variables, and the right-hand sides are mine). So, the value of $\lcm(n,m)$ follows from Lemma 2.1 of Hempel's paper. I would still like to obtain that same value by lowering $k$ in your argument. – Hebert Mar 4 '12 at 14:52 show 1 more comment Not the answer you're looking for? Browse other questions tagged gr.group-theory nt.number-theory or ask your own question.
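The claim in the comments, that the exponent is exactly lcm(n, m), is easy to spot-check by brute force. One standard presentation of $\mathbb{Z}_n \ltimes_r \mathbb{Z}_m$ multiplies pairs by $(a_1,b_1)(a_2,b_2) = (a_1+a_2,\ b_1 + r^{a_1} b_2)$, the composition rule for the affine maps $t \mapsto r^a t + b$ on $\mathbb{Z}_m$; a different semidirect-product convention would only relabel elements, not change orders:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def exponent(n, m, r):
    """Exponent of Z_n x|_r Z_m, computed by brute force over all n*m elements."""
    assert pow(r, n, m) == 1
    def mul(g, h):
        (a1, b1), (a2, b2) = g, h
        return ((a1 + a2) % n, (b1 + pow(r, a1, m) * b2) % m)
    def order(g):
        k, x = 1, g
        while x != (0, 0):
            x = mul(x, g)
            k += 1
        return k
    e = 1
    for a in range(n):
        for b in range(m):
            e = lcm(e, order((a, b)))
    return e

# spot-check Hebert's claim on a few small split metacyclic groups
for n, m, r in [(2, 3, 2), (4, 5, 2), (2, 8, 3), (6, 7, 3)]:
    assert exponent(n, m, r) == lcm(n, m)
print("exponent equals lcm(n, m) on all test cases")
```

The case (n, m, r) = (2, 3, 2) is the symmetric group S_3, whose exponent is indeed lcm(2, 3) = 6.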
[Numpy-discussion] Trouble installing Numpy on AIX 5.2. mfmorss at aep.com mfmorss at aep.com Wed Feb 22 08:34:03 CST 2006 Thanks for this observation. I will modify ufuncobject.h as you suggested, instead. The other problem still results in a complaint, but not an error; it does not prevent compilation. I have another little problem but I expect to be able to solve it. I will report when and if I have Numpy installed. Mark F. Morss Principal Analyst, Market Risk American Electric Power Travis Oliphant ieee.org> To mfmorss at aep.com 02/22/2006 11:29 cc AM numpy-discussion <numpy-discussion at lists.sourceforge Re: [Numpy-discussion] Trouble installing Numpy on AIX 5.2. mfmorss at aep.com wrote: >This problem was solved by adding "#include <fenv.h>" to ...numpy-0.9.5 I suspect this allowed compilation, but I'm not sure if it "solved the problem." It depends on whether or not the FE_OVERFLOW defined in fenv.h is the same as FP_OVERFLOW on the _AIX (it might be...). The better solution is to change the constant to what it should be... Did the long double *, double * problem also resolve itself? This seems to an error with the modfl function you are picking up since the AIX docs say that modfl should take and receive long double arguments. More information about the Numpy-discussion mailing list
Find a Fleetwood, NY Geometry Tutor

...I understand that every student is different and that strategies must be adjusted accordingly. I engage my students in an active learning style which favors asking them questions so that they piece together their understanding of new material from prior knowledge. I do not simply explain material while my students nod their heads.
24 Subjects: including geometry, chemistry, ASVAB, physics

...Prior to becoming a teacher, I was a Wall Street Vice President for some major financial services organizations. I am willing to travel to your home or to a library for tutoring. I am also comfortable doing online tutoring and could combine that with in-person tutoring. I have been teaching High School Algebra for 10 years.
20 Subjects: including geometry, algebra 1, GRE, finance

...Even more exciting is watching that light bulb go on when I am tutoring. I have experience tutoring students of all ages (elementary through graduate school) in many subject areas, although my real passion is math. I have a BA in statistics from Harvard and will be starting nursing school shortly.
18 Subjects: including geometry, chemistry, statistics, biology

...I also throw *fun* stuff into the mix, from prior years' math competitions, such as Math Kangaroo (all grades) and the AMC (middle and high school). In that program, students can be reasonably expected to ace the math section of the SAT by the end of 7th grade. (My 12-year-old son did just that,...
27 Subjects: including geometry, calculus, statistics, physics

...I am a devoted and experienced tutor for high school and college-level Physics and Math; I also prepare students for taking college-entrance tests. Working with each student, I use an individual approach based on the student's personality and background. The progress is guaranteed! I am a native Russian speaker.
24 Subjects: including geometry, physics, GRE, Russian
[R] Coefficients of Logistic Regression from bootstrap - how to get them? [R] Coefficients of Logistic Regression from bootstrap - how to get them? Frank E Harrell Jr f.harrell at vanderbilt.edu Thu Jul 24 20:35:09 CEST 2008 Michal Figurski wrote: > Thank you all for your words of wisdom. > I start getting into what you mean by bootstrap. Not surprisingly, it > seems to be something else than I do. The bootstrap is a tool, and I > would rather compare it to a hammer than to a gun. People say that > hammer is for driving nails. This situation is as if I planned to use it > to break rocks. > The key point is that I don't really care about the bias or variance of > the mean in the model. These things are useful for statisticians; > regular people (like me, also a chemist) do not understand them and have > no use for them (well, now I somewhat understand). My goal is very > practical: I need an equation that can predict patient's outcome, based > on some data, with maximum reliability and accuracy. > I have found from the mentioned paper (and from my own experience) that > re-sampling and running the regression on re-sampled dataset multiple > times does improve predictions. You have a proof of that in that paper, > page 1502, and to me it is rather a stunning proof: compare 56% to 82% > of correctly predicted values (correct means within 15% of original value). Michal I think you are misinterpreting that paragraph, although it's hard to know exactly what they mean there. I think it is a comparison of a naive stepwise regression approach to a more pre-specified approach that selects the model using a more unbiased criterion than that used by stepwise selection. Resampling can be useful for selecting the form of the model in some cases. But your original post dealt with resampling from a single pre-specified model which is not what the authors used. (remainder omitted) More information about the R-help mailing list
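As a side note for readers, the resampling idea being debated here is easy to demonstrate in miniature. The sketch below is my own illustration on made-up numbers, not the procedure from the paper under discussion, and an ordinary least-squares slope stands in for a full logistic regression: resample the data pairs with replacement, refit on each resample, and look at the spread of the refitted coefficient.

```python
import random
import statistics

random.seed(0)

# Toy paired data (hypothetical, for illustration only).
x = [0.5, 1.2, 1.9, 2.4, 3.1, 3.8, 4.4, 5.0, 5.7, 6.3]
y = [1.1, 2.0, 2.7, 3.5, 3.9, 5.1, 5.4, 6.2, 6.6, 7.9]

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

# Nonparametric bootstrap: resample (x, y) pairs with replacement,
# refit on each resample, and summarize the spread of the estimates.
B = 1000
boot = []
for _ in range(B):
    idx = [random.randrange(len(x)) for _ in range(len(x))]
    boot.append(slope([x[i] for i in idx], [y[i] for i in idx]))

print(round(slope(x, y), 3))             # coefficient fit on the original data
print(round(statistics.stdev(boot), 3))  # bootstrap standard error
```

The point of contention in the thread, whether one should report an average of the resampled coefficients as the model, is a separate question; the sketch only shows the mechanics of resample-and-refit.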
Complex impedance calculation

Newbie level 4, Join Date Jul 2006

I found out how I can calculate the complex impedance of an unknown load by replacing it with two other loads and doing some maths, but what I'm really looking for is to calculate the complex impedance of a load by measuring forward and reflected power right after the power amplifier. I remember these formulas were taught at university, how to calculate the complex impedance from 3 different measurements of forward and reflected power, but I can't find the maths in my books, nor on the internet. Or is there an IC available that does all the complex impedance calculation by itself? Can anyone help me out on this one? Thanks a lot in advance.

Full Member level 4, Join Date Jul 2002

Hmmm... you want a network analyzer (> $20000) in one chip? OK, I'll try. The most important formula is:

Γ = (Z - Z0) / (Z + Z0)

where Z is the (unknown) complex impedance, Z0 is the reference impedance (in most cases a real value), and Γ is the reflection coefficient 'gamma'. HP calls this an s-parameter (here S11); Γ is also written 's' in old HP literature. After the normalisation z = Z/Z0 this becomes:

Γ = (z - 1) / (z + 1)

If you, via a measure bridge (Wheatstone bridge, directional coupler, etc.),
measure the reflected amplitude and phase angle relative to the sent wave, and know the reference impedance (50 Ohm in most cases), you have determined Γ: its magnitude is the ratio of reflected to sent voltage, and its angle is the phase relative to the sent wave.

For example, a measured reflected amplitude (voltage) of 0.4472, lagging the sent wave by 63.43 degrees, gives Γ = 0.4472 |_ 63.43 degrees in polar notation, or 0.2+i0.4 in rectangular notation. Using

z = (1 + Γ) / (1 - Γ)

and putting in the value (rectangular notation):

z = (1 + 0.2+i0.4) / (1 - (0.2+i0.4)) = 1+i1

After denormalisation, Z = z*Z0:

(1+i1) * 50 = 50+i50 Ohm, or 70.71 |_ 45 degrees Ohm

i.e. 50 Ohm in series with 50 Ohm of inductive reactance (you must know the measurement frequency to convert this into an inductance in Henry).

The calculation above works only if the complex unknown load is coupled directly to the measure bridge, without any transmission line or other source of error. If you have a transmission line (same impedance as the reference impedance) with, say, 10 degrees of lag for the travel time from source (bridge) to load and back again, the Γ = 0.4472 |_ 63.43 value above now turns into Γ = 0.4472 |_ 73.43 (10 degrees extra lag), or 0.1275+i0.4287 in rectangular notation. If you do not know about this extra delay and calculate directly with the formula above, you get

z = (1 + 0.1275+i0.4287) / (1 - (0.1275+i0.4287)) = 0.8466+i0.9072

and after denormalisation Z = z*Z0:

(0.8466+i0.9072) * 50 = 42.33+i45.36 Ohm

i.e. a wrong measured value that depends on the unknown length of the transmission line.

The other issue is loss in the transmission cable. If you use a transmission line that is an exact multiple of half a wavelength (so no extra measured cable-dependent delay) and you lose 1 dB in amplitude between source and load (0.5 dB each direction), that 1 dB loss corresponds to a factor 0.8913 of the sent voltage, i.e. the reflected value has only 0.8913 of the theoretical value wanted from the load:

Γ = 0.8913 |_ 0 degrees * 0.4472 |_ 63.43 = 0.3986 |_ 63.43, or 0.1783+i0.3565 in rectangular notation.

Putting this into the formula:

z = (1 + 0.1783+i0.3565) / (1 - (0.1783+i0.3565)) = 1.048+i0.8886

and after denormalisation Z = z*Z0:

(1.048+i0.8886) * 50 = 52.42+i44.43 Ohm, or 68.72 |_ 40.28 degrees Ohm

Impedances move closer to 50 Ohm in the same way when a PI-attenuator is used to improve the return loss of a load with poor return loss: the load's real value becomes more or less hidden. If you have both the 10 degrees of extra lag and the 1 dB of attenuation in the measurement cable, the short version of the result is:

measured: Γ = 0.3986 |_ 73.43 degrees => 45.14+i41.01 Ohm (60.99 |_ 42.26) from the formula above, compared to the right value of 50+i50 Ohm (70.71 |_ 45).

If you know this transmission-line influence (here 0.8913 |_ 10.00 degrees), you can use this 'constant' to take away the influence of the cable, i.e.

0.3986 |_ 73.43 / 0.8913 |_ 10.00 = 0.4472 |_ 63.43

before using the formula above, which gives the correct 50+i50 Ohm result.

Conclusion: you must know every part of the measure bridge and the cable's characteristics and length to the load if you want to measure a complex load with reasonable accuracy. That is, you must run a calibration process (easiest via open, short and load measurements) to establish a reference plane and error-correction constants that hide most of the errors in the bridge and the cable (length). Most RF books cover the mathematics behind complex loads, the Smith chart, etc.
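As an aside, the arithmetic above is easy to mechanize with complex numbers. Here is a small Python sketch (my own illustration; the function names are invented, not from the post) that maps a reflection coefficient to an impedance via z = (1 + Γ)/(1 - Γ), optionally dividing out a known round-trip cable constant first, and it reproduces the 50+i50 Ohm example:

```python
import cmath
import math

Z0 = 50.0  # reference impedance, Ohm

def pol(mag, deg):
    """Polar helper: magnitude and angle in degrees -> complex number."""
    return cmath.rect(mag, math.radians(deg))

def gamma_to_z(gamma, cable=1.0 + 0j):
    """Measured reflection coefficient -> complex load impedance.

    `cable` is the round-trip cable factor (loss times delay); dividing
    it out de-embeds the cable before applying z = (1 + G) / (1 - G).
    """
    g = gamma / cable
    return Z0 * (1 + g) / (1 - g)

# Direct case: Gamma = 0.4472 |_ 63.43 deg  ->  about 50+i50 Ohm
z_ideal = gamma_to_z(pol(0.4472, 63.43))

# With 1 dB round-trip loss and 10 deg extra lag, then de-embedded
# by the cable constant 0.8913 |_ 10 deg  ->  about 50+i50 Ohm again
loss = 10 ** (-1 / 20)  # 1 dB as an amplitude ratio, about 0.8913
z_corrected = gamma_to_z(pol(0.4472 * loss, 73.43), cable=pol(loss, 10.0))

print(z_ideal, z_corrected)
```

Dividing the measured Γ by the cable constant is exactly the 0.3986 |_ 73.43 / 0.8913 |_ 10.00 correction worked in the post.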
One of many books that describes this: Microwave Transistor Amplifiers: Analysis and Design, Guillermo Gonzalez (and possibly findable here on the board). If I remember right, HP/Agilent's more or less famous app note 95 describes how S-parameters, network analyzers and the math behind all this work.

You can use a chip with an EXOR input to measure the angle between the reference and measured signals, and RSSI circuits to measure the amplitude. If I remember right, Analog Devices has a type of chip that includes both an RSSI measurement (about 60 dB dynamic range) and a phase comparator up to 500 MHz (or 1 GHz?). EXOR-comparator phase-angle measurement can only cover roughly the +90 to -90 degree range, and you need the reference or measured wave delayed by 0 and 90 degrees, switching between them during the measurement process (i.e. an I and Q measurement), to decide whether the measured phase angle comes from a capacitive or inductive reactance. You need MCU/computer power to correct the measurements with the error-correction constants and calculate values with the formula above. I don't think a one-chip solution is possible to find, yet...

For easy calculation with the complex numbers in the formula above I use 'free42' from http://home.planet.nl/~demun000/thomas_projects/free42/. This calculator is perfect for working with complex impedances etc. It is an RPN calculator, i.e. use '1' <enter> '2' '+' for the answer '3'. To make or split up a complex number: '1' <enter> '2' <orange button> 'complex' gives the answer '1 i2', and after this you can use '+-/*' and all other math functions the same as with normal numbers, with no extra steps or special mode for operating on complex numbers. E.g. '1 i2' <enter> '3' '*' gives the answer '3.0 i6.0' without fuss; you do _not_ need to make '3' into '3 i0' before a complex operation, as is needed on the HP-32SII or HP-33S... The input and display mode, between rectangular and polar notation for complex numbers, is adjusted via <orange button> 'mode' and selecting 'rect' or 'polar' for the desired notation.

Yes, you can find functions that convert complex numbers to different notations, but this way is by far the fastest. To make the storage registers accept complex numbers, you must do this once: 'RCL' <select REGS> <enter> <orange button> 'complex' 'STO' <select REGS>. After this you can use STO 00-25 to store real and complex numbers. (I hope I am remembering this right...)

Newbie level 4, Join Date Jul 2006

You're right, the good old HP AN95; I've got to dig that one up... I'll break my brains on this math again and then program an MCU to do the thinking for me :D Thanks, but still... all additional comments are welcome!

Full Member level 4, Join Date Jul 2002

What range of frequencies are you thinking of measuring? How do you plan to build the measure bridge? Power range? For low power I think a Wheatstone bridge is easiest to build, but it needs a balun or a balanced measurement input, plus a (very) low source impedance, or the source impedance made part of the measure bridge.

Newbie level 4, Join Date Jul 2006

What range of frequencies are you thinking of measuring? » 3-50 MHz
How do you plan to build the measure bridge? » directional coupler + log detector
Power range? » depends on the coupler... can go from 1 mW to 1.5 kW
Possibilities are infinite...
The work of Thurston

I seem to remember it being written or said somewhere that at some point Thurston decided to stop writing down his theorems in order not to repel mathematicians from his field (maybe this is not correct?). I am really curious whether now, 25-30 years later, there is some nice source, book, or notes where it is possible to learn some basic ideas about the proof of the fact that Haken manifolds admit a hyperbolic structure. Maybe some of his ideas got a more accessible explanation? Of course his beautiful notes http://www.msri.org/publications/books/gt3m/ exist, but they don't go so far.

soft-question hyperbolic-geometry gt.geometric-topology

From reading the article linked in the answer of HW, one sees that the statement of what the OP remembers having read or heard, that "Thurston decided to stop writing down his theorems in order not to repel mathematicians from his field", is indeed not correct. – Lee Mosher Mar 16 '13 at 15:29

2 Answers
I don't think that's a widely held belief, but I wasn't alive then so I'm just going on 2nd hand comments. 4 There's also MR1677888 (2000b:57025) Otal, Jean-Pierre(F-ENSLY-PM) Thurston's hyperbolization of Haken manifolds. Surveys in differential geometry, Vol. III (Cambridge, MA, 1996), 77--194, Int. Press, Boston, MA, 1998. This takes the approach of McMullen (one could also cite McMullen's papers, but Otal's article puts more of the proof together). – Ian Agol Dec 4 '09 at 1:38 6 Really? Thurston himself says something about repelling people from the field here: ams.org/bull/1994-30-02/S0273-0979-1994-00502-6/… – Qiaochu Yuan Dec 4 '09 at 1:42 Hmm, that Otal book seems to be out of print. :( – Ryan Budney Dec 4 '09 at 2:20 3 And an anonymous mathematician reminds me to mention that there is of course all of the relevant papers of Thurston's on the arXiv now. Links: front.math.ucdavis.edu/9801.5019, front.math.ucdavis.edu/9801.5045, front.math.ucdavis.edu/9801.5058 – Ryan Budney Dec 4 '09 at 5:36 add comment This isn't a direct answer to the actual question, but in your first sentence I think you're alluding to Thurston's article On proof and progress in mathematics. In section 6, entitled up vote 11 "Some personal experiences", he describes how his experience working on foliations influenced the way he presented his later work on geometrization. down vote add comment Not the answer you're looking for? Browse other questions tagged soft-question hyperbolic-geometry gt.geometric-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/7733/the-work-of-thurston","timestamp":"2014-04-19T02:35:40Z","content_type":null,"content_length":"62119","record_id":"<urn:uuid:b557b62a-1a42-48a4-a9a1-8d9dfd9a3be4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
sine is the orthogonal projection of the unit circle?

April 27th 2009, 01:15 PM #1 Junior Member Mar 2009 Berkeley, California

I read that "sine is the orthogonal projection of the unit circle". I know orthogonal is another way of saying perpendicular, but I still can't quite grasp what is going on here. Can anyone help?

April 27th 2009, 01:25 PM #2 Junior Member May 2008

I guess that's a fancy way of saying that sine goes on the y-axis and cos on the x-axis. Sin 90 = 1, etc.
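A quick numeric way to see the reply's point (my own illustration, not from the thread): the point on the unit circle at angle t is (cos t, sin t), and its orthogonal (perpendicular) projection onto the y-axis has height exactly sin t.

```python
import math

for t in (0.0, math.pi / 6, math.pi / 4, math.pi / 3, math.pi / 2):
    point = (math.cos(t), math.sin(t))        # point on the unit circle
    height = point[0] * 0.0 + point[1] * 1.0  # dot product with the unit y-vector
    assert math.isclose(height, math.sin(t))
print("the projected height onto the y-axis is sin(t)")
```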
Determining sample size needed to test hypothesis

Thank you :-) Okay, if you were to write out a 95% confidence interval for this, how would you write it? 6.2 +- ?? Think about it. If you reject H0 at 6.0, what value do you think goes in where the question marks are?
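Since the thread title asks about choosing a sample size, the usual normal-approximation formula may be worth recording here: n = ((z_alpha/2 + z_beta) * sigma / delta)^2. The numbers below are purely hypothetical, since the thread does not give the actual standard deviation, effect size, or power.

```python
import math

# Hypothetical inputs, not from the thread above.
sigma = 1.0      # assumed population standard deviation
delta = 0.2      # smallest mean shift worth detecting
z_alpha = 1.96   # two-sided 5% significance level
z_beta = 0.8416  # 80% power

n = math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)
print(n)  # required sample size under the normal approximation: 197
```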
Air Density Calculator Excel Spreadsheet with Ideal Gas Law Air Density Calculator Excel Spreadsheet Where to Find an Air Density Calculator Excel Spreadsheet To find an air density calculator Excel spreadsheet to use as an air density calculator, click here to visit our spreadsheet store. Why use an online calculator or look in tables, when you can get an air density calculator excel spreadsheet to use as an air density calculator here? Read on for information about Excel spreadsheets that can be used to calculate the density of air (and other gases) at different pressures and temperatures with the ideal gas law. Gas density background for an Air Density Calculator Excel Spreadsheet Pressure and temperature have significant effects on the density of gases, so some means of determining the density of air and other gases at specified temperatures and pressures is needed for a variety of fluid mechanics applications. Fortunately, the ideal gas law provides a means of doing this for many gases over ranges of temperature and pressure that are of interest. The Ideal Gas Law for use in an Air Density Calculator Excel Spreadsheet A common form for the ideal gas law equation is PV = nRT, giving the relationship among T, the absolute temperature of the gas; P, its absolute pressure; V, the volume occupied by n moles of the gas; and R, the ideal gas law constant. The density of the gas can be introduced into this equation, through the fact that molecular weight (MW) has units of mass/mole, so that n = m/MW. This leads to the ideal gas law written as: PV = (m/MW)RT. Solving this equation for m/V (which is equal to the gas density, ρ) gives the following equation for gas density as a function of its MW, pressure and temperature: ρ = (MW)P/RT. A commonly used set of U.S. units for this equation is as follows: ρ = density of the gas in slugs/ft^3, MW = molecular weight of the gas in slugs/slugmole (or kg/kgmole, etc.) 
(NOTE: MW of air = 29), P = absolute gas pressure in psia (NOTE: Absolute pressure equals pressure measured by a guage plus atmospheric pressure.), T = absolute temperature of the gas in ^oR (NOTE: ^oR = ^oF + 459.67) R = ideal gas constant in psia-ft^3/slugmole-^oR. For conditions under which air can be treated as an ideal gas (see the next section), the ideal gas law in this form can be used to calculate the density of air at different pressures and The air density calculator excel spreadsheet template shown in the screenshot below will calculate the density of a gas for specified molecular weight, pressure and temperature. This Excel spreadsheet is available at a very reasonable price in our spreadsheet store and can be used with either U.S. or S.I. units. These spreadsheets also contain tables of critical temperature and critical pressure for several common gases. But When Can I Use the Ideal Gas Law to Calculate the Density of Air? A good question indeed, because air and other gases for which you may need a density value are real gases, not ideal gases. It is fortunate, however, that many real gases behave almost exactly like an ideal gas over a wide range of temperatures and pressures. The ideal gas law works best for high temperatures (relative to the critical temperature of the gas) and low pressures (relative to the critical pressure of the gas). See table at the left for values of critical temperature and critical pressure for several common gases. For many practical, real situations, the ideal gas law gives quite accurate values for the density of air (and many other gases) at different pressures and temperatures. S.I. Units for the Ideal Gas Law The ideal gas law is a dimensionally consistent equation, so it can be used with any consistent set of units. 
For SI units the ideal gas law parameters are as follows:

ρ = density of the gas in kg/m^3,
P = absolute gas pressure in pascals (N/m^2),
T = absolute temperature of the gas in K (NOTE: K = ^oC + 273.15),
R = ideal gas constant in Joules/kgmole-K.
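To make the SI-unit version concrete, here is a short Python sketch of ρ = (MW)P/RT (the function and variable names are mine, not from the spreadsheet):

```python
R_SI = 8314.0  # ideal gas constant, Joules/kgmole-K

def gas_density(mw, p_abs, t_abs):
    """Ideal-gas density in kg/m^3.

    mw    - molecular weight in kg/kgmole (29 for air, per the article)
    p_abs - absolute pressure in Pa
    t_abs - absolute temperature in K
    """
    return mw * p_abs / (R_SI * t_abs)

# Air at roughly standard sea-level conditions: about 1.23 kg/m^3
print(round(gas_density(29.0, 101325.0, 288.15), 3))
```

With R = 8314 J/(kgmole-K) and MW in kg/kgmole the units cancel to kg/m^3; the U.S.-unit version works the same way with the corresponding value of R.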
Time-Distance Issue

Hello, I am having trouble with the following question: Tom and Bill agreed to race across a 50-foot pool and back again. They started together, but Tom finished 10 feet ahead of Bill. If their rates were constant and Tom finished the race in 27 seconds, how long did it take Bill to finish?

The answer is supposed to be 30, but I am getting 33.75. Here is how I am doing it:

Tom's velocity is: 50/27 ft/sec
Bill's velocity will be: 40/27 (because Tom finished 10 ft ahead of Bill)
So the time taken for Bill to finish the race will be: S/V = 50 / (40/27) = 33.75

Could anyone tell me what I am missing or doing wrong? Thanks
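A quick numeric check of the intended answer (my own; variable names are made up): the race is 100 ft in total, across and back, so when Tom finishes the full 100 ft in 27 s, Bill has covered 90 ft. The 10-foot gap is out of 100 ft, not 50.

```python
race = 2 * 50             # across the 50-ft pool and back: 100 ft total
tom_time = 27.0           # seconds for Tom to cover the full 100 ft
bill_covered = race - 10  # feet Bill has covered when Tom finishes

bill_rate = bill_covered / tom_time         # Bill's constant rate, ft/s
bill_time = race * tom_time / bill_covered  # time for Bill's full 100 ft
print(bill_time)  # 30.0, the book answer
```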
Show that the derivative exists but is not continuous

August 11th 2010, 02:42 AM #1 Jul 2010

Hi dear friends, I have a question. It's supposed to be quite simple, but it is a little bit tricky; can anyone please help me? Thanks a lot. Here it is:

f(x,y) = x^3/(x^6+y^2), for (x,y) not equal to (0,0)
f(x,y) = 0, for (x,y) = (0,0)

Show that df/dx exists at (0,0) but is not continuous there.

I used first principles to prove that the limit of $df/dx$ is 0 at (x,y)=(0,0), which means it does exist, am I right? What is the exact definition of saying something exists? Then, I can imagine it is discontinuous, because at (0,0) the graph of $df/dx$ should be broken, with an open circle there, but I just forgot how to show it. I thought the definition of "continuity" is that the value of the limit equals the value of the function. So how do I write this up? Please help me, thanks a lot.

August 11th 2010, 02:56 AM #2 MHF Contributor Apr 2005

You mean $\frac{\partial f}{\partial y}$ here, don't you? That's very different from saying "the derivative exists at (0, 0)" or "the function is differentiable at (0, 0)". Yes, this partial derivative exists at $x = 0$ if and only if $\lim_{h\to 0}\frac{f(0+h, 0) - f(0, 0)}{h}$ exists. In this case, that limit is

$\lim_{h\to 0}\frac{\frac{(0+h)^3}{(0+h)^6 + 0^2} - 0}{h} = \lim_{h\to 0}\frac{h^3}{h^7} = \lim_{h\to 0}\frac{1}{h^4}.$

It does NOT look to me like that limit exists. Are you sure you have copied the problem correctly?
August 11th 2010, 03:06 AM #3 Jul 2010

Thank you. I'm sure I copied down the right question, and I do mean $\frac{\partial f}{\partial y}$; sorry that I'm not very good with the code. I think the question is trying to say that $\frac{\partial f}{\partial y}$ exists but is discontinuous, not the original function. So what I did was differentiate the original function first, and I end up with $\frac{\partial f}{\partial y} = (3x^2y^3 - 3x^8y)/((x^6+y^2)^2)$, and then use first principles as you did, so I can find that the limit equals 0. But I'm not sure how to show it is discontinuous.
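Since the discontinuity step is left open above, here is one standard way to finish it (my own write-up, reading the exercise as being about $\partial f/\partial y$, as the replies suggest):

```latex
% At the origin the partial derivative in y exists and equals 0:
\frac{\partial f}{\partial y}(0,0)
  = \lim_{k\to 0}\frac{f(0,k)-f(0,0)}{k}
  = \lim_{k\to 0}\frac{\dfrac{0}{0+k^2}-0}{k} = 0.

% Away from the origin, differentiating x^3 (x^6+y^2)^{-1} in y gives
\frac{\partial f}{\partial y}(x,y) = \frac{-2x^3 y}{(x^6+y^2)^2},

% and along the path y = x^3 (x \neq 0) this equals
\frac{-2x^6}{(2x^6)^2} = -\frac{1}{2x^6} \longrightarrow -\infty
\quad\text{as } x\to 0.

% So \partial f/\partial y takes arbitrarily large values near (0,0)
% while its value at the origin is 0: it is not continuous there.
```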
{"url":"http://mathhelpforum.com/calculus/153346-show-derivative-exists-but-not-continuous.html","timestamp":"2014-04-17T10:12:07Z","content_type":null,"content_length":"46618","record_id":"<urn:uuid:6570232c-6886-475c-b163-4ed7d7aa847a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Request for reference: Banach-type spaces as algebraic theories.

Sparked by Yemon Choi's answer to Is the category of Banach spaces with contractions an algebraic theory? I've just spent a merry time reading and doing a bit of reference chasing. Imagine my delight at finding that one of my old favourites (functional analysis) and one of my new fads (category theory, and in particular algebraic theories) are actually very closely connected!

I was going to ask about the state of play of these things as it's a little unclear exactly what stage has been achieved. Reading the paper On the equational theory of $C^\ast$-algebras and its review on MathSciNet then it appears that although it's known that $C^\ast$-algebras do form an algebraic theory, an exact presentation in terms of operations and identities is still missing (at least at the time of that paper being written), though I may be misreading things there. It's possible to do a little reference chasing through the MathSciNet database, but the trail does seem to go a little cold and it's very hard to search for "$C^\ast$ algebra"!

But now I've decided that I don't want to just know about the current state of play, I'd like to learn what's going on here in a lot more detail since, as I said, it brings together two seemingly disparate areas of mathematics both of which I quite like. So my real question is

• Where should I start reading?

Obviously, the paper Yemon pointed me to is one place to start but there may be a good summary out there that I wouldn't reach (in finite) time by a reference chase starting with that paper. So, any other suggestions? I'm reasonably well acquainted with algebraic theories in general so I'm looking for specifics to this particular instance. Also, I'll write up my findings as I find them on the n-lab so anyone who wants to join me is welcome to follow along there. I probably won't actually start until the new year though.

fa.functional-analysis ct.category-theory algebraic-theory reference-request

1 Answer
fa.functional-analysis ct.category-theory algebraic-theory reference-request add comment 1 Answer active oldest votes Do an emath search for Waelbroeck, L*; note especially his paper "The Taylor spectrum and quotient Banach spaces". For more recent things, search for Castillo, J*. Also, Mariusz up vote 2 Wodzicki at Berkeley has unpublished notes that contain many things. I don't know if they are in a form for distribution. down vote add comment Not the answer you're looking for? Browse other questions tagged fa.functional-analysis ct.category-theory algebraic-theory reference-request or ask your own question.
Bethesda, MD Find a Bethesda, MD Math Tutor ...I have attended many math trainings and conferences which have provided me with a wealth of teaching strategies and techniques to be able to reach all types of learners. As a teacher, I use many hands-on manipulatives to help build a conceptual understanding of the concepts my students are learning. I also enjoy incorporating technology and math games into my lessons. 4 Subjects: including algebra 1, prealgebra, SAT math, elementary math ...I make my tutoring session interactive, doing exercises with students to practice using strategies and then troubleshoot problem areas in the moment. As someone who took the MCAT myself and has graduated medical school, I also can serve as a mentor in helping students get through what is often c... 2 Subjects: including algebra 1, MCAT ...Teachers diagnose learning difficulties on a daily basis. Invest in a quality tutor to get quality results. As a certified PA Math (7-12) teacher, you can trust that I have the knowledge, skills and experience to address your student's and your needs. 19 Subjects: including precalculus, algebra 1, algebra 2, SAT math ...You can expect to improve your command of the subject right away.American history is uniquely important to Americans, because it influences our lives and stirs our emotions more than any other, just as Japanese history is more important to the Japanese, or Micronesian history to Micronesians. Bu... 15 Subjects: including ACT Math, Spanish, English, reading ...I do my best to keep the learning as practical and hands-on as possible, using word problems, pictures, and topics of interest to the student. I am a certified teacher, specializing in math and English, as well as a home-schooling mom with years of experience in teaching reading. For the beginning reader, I combine repetitious practice with fun activities like the memory game. 
17 Subjects: including algebra 1, SAT math, trigonometry, geometry