J-Test Davidson-MacKinnon Hi all, I would need your help with the following. I have two models 1) Rf = a + b1*P 2) Rf = a + b1*P + b2*R + b3*I + b4*S + b5*V + b6*C + b7*A + b8*E + b9*G I have run all the diagnostics for autocorrelation, heteroscedasticity, etc., and both models seem fine. The problem is that variable P appears in both equations and may be endogenous with the dependent variable Rf. Is the J-test (Davidson-MacKinnon) the right way to check this, and if yes, how do I do it? Do I run the first regression, get the fitted values, and then do the same for regression 2? And then do I plug the fitted values into the regression after removing the variable P from the second? Another question I have is the following: when running 2SLS I leave variable P in the regressor list with the rest of the variables of eq. 2, while in the instrument list I insert the remaining variables (R, I, S, C, A, E, G). The model will not run. Why is that? Furthermore, do I insert Rf in the instrument list as well? Finally, how do I check whether the instruments are valid, or whether I do not need 2SLS at all and OLS works fine? Many thanks Re: J-Test Davidson-MacKinnon By the way, since the two models are nested, do I need the Davidson-MacKinnon test or something else?
{"url":"http://forums.eviews.com/viewtopic.php?f=4&t=3501","timestamp":"2014-04-19T17:33:11Z","content_type":null,"content_length":"17238","record_id":"<urn:uuid:a1e26b03-fa27-49c2-9e25-c8e624fce12a>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
[Maxima] Re: sum evals first argument inconsistently
Richard Fateman fateman@cs.berkeley.edu
Tue, 20 May 2003 08:37:45 -0700

The method of evaluation for the arguments to SUM has been hashed over repeatedly, both in Macsyma (circa 1969?) and its competitors. It does not help to have the "right" answer if the users expect you to have a different answer. I haven't read / re-read the arguments recently, and I can't follow the link from Stavros' mail (it says ERROR - No group_id was chosen.) But an argument can be made to require that the first argument must be a function of one argument, lambda([index], expression).

For what it is worth, I have been experimenting with Matlab (with symbolic toolbox), and it is really weird, with TWO value spaces and ONE namespace:

    command                 result
    pi                      3.14159..
    x=3                     sets x to 3
    simplify(pi+sym(0))     pi
    simplify(x+sym(0))      3
    simplify(x)             error

Martin RUBEY wrote:
> Summary: I like the evaluation rules of integrate better than those of
> sum, and I certainly want to have them consistent. I believe that
> integrate simply evaluates all its arguments before proceeding.
> Dear Stavros and all the others interested in evaluation rules:
> This reminds me a lot of our old discussion on evaluation rules...
> http://www.ma.utexas.edu/pipermail/maxima/2002/003101.html
> http://www.ma.utexas.edu/pipermail/maxima/2002/003107.html
> http://www.ma.utexas.edu/pipermail/maxima/2002/003108.html
> and some other things in the same thread
> Consider the following:
> (C3) [sum(2^x,x,0,3),sum(2^x,x,0,inf)],x:3;
> (D3) [15,'SUM(2^x,x,0,INF)]
> seems consistent to me.
> (C4) [sum(z,z,0,3),sum(z,z,0,inf)],z:y^k;
> (D4) [6,'SUM(z,z,0,INF)]
> OK.
> (C5) [sum(z,k,0,3),sum(z,k,0,inf)],z:y^k;
> (D5) [4*y^k,'SUM(y^k,k,0,INF)]
> With your proposal, this would become
> (D5') [1+y+y^2+y^3,...]
> But is D4 and D5' consistent? To me, D3 and D4 suggest that the first arg
> is *not* evaluated, however, in this case I'd rather expect sum(z,k,0,inf)
> to give INF.
> After some more testing, it seems that - at the moment - the first
> argument is evaluated only *after* simplification of the sum took place!
> Probably we want to change this.
> So what I'm saying is that not sum(z,k,0,3),z:y^k is inconsistent with the
> rest of sum but rather sum(z,k,0,inf),z:y^k.
> I find it interesting that integrate behaves differently with respect to
> the second arg also - maybe this behaviour is not so bad, because it makes
> it clearer what's happening:
> (C6) integrate(x,x,0,5),x:1;
> Variable of integration not a variable: 1
> (C7) integrate(z,z,0,3),z:y^k;
> Improper variable of integration: y^k
> (C8) (x:2,integrate('x,'x,0,2));
> (D8) 2
> (C9) (x:2,integrate(x,'x,0,2));
> (D9) 4
> What I didn't expect, though:
> (C10) integrate(x,'x,0,2),x:2;
> Variable of integration not a variable: 2
> Apart from the error resulting from (C10), this makes situations such as (C3) and
> (C4) impossible. Am I right that integrate simply evaluates all arguments
> before doing anything else? (tracing seems to confirm this)
> I cannot see a reason for evaluating the first arg >> with the summation
> variable bound to itself <<, it seems to me that this makes the evaluation
> rules unnecessarily complicated!
> Martin
>> foo: i^2;
>> sum('foo,i,0,2) => 3*foo (correct)
>> sum(foo,i,0,2) => 3*i^2 WRONG
>> sum(foo,i,0,n) => 'sum(i^2,i,0,n) (correct)
>> In the case where upperlimit-lowerlimit is a known
>> integer, simpsum is checking whether foo is free of i
>> *before* evaluating foo.
>> I believe the correct way to handle this case is as
>> follows: First evaluate foo with i bound to itself, and
>> check if that result is free of i. If so, return the product.
>> If not, *substitute* (don't evaluate) i=lowerlimit,
>> i=lowerlimit+1, etc.
>> This means that sum(print(i),i,1,2) would print "i", and
>> not "1 2".
>> That makes it consistent with Integrate:
>> integrate('foo,i,0,2) => 2*foo (correct)
>> integrate(foo,i,0,2) => 8/3 (correct)
>> integrate(foo,i,0,n) => n^3/3 (correct)
>> It also makes it consistent with Integrate in the
>> presence of side-effects:
>> integrate(print(i),i,0,1) prints i
>> sum(print(i),i,0,1)
>> currently prints 0 1
>> but in this proposal would print i
>> It is true that it would also create some funny situations.
>> sum(integrate(x^i,x),i,0,2)
>> evaluates correctly to x+x^2/2+x^3/3. Under this
>> proposal, it would also evaluate *correctly*, but would
>> first ask whether i+1 is zero or nonzero. Unless, that is,
>> simpsum binds i to be an integer and to have a value
>> >=lowerlimit and <=upperlimit (which is a sensible thing
>> to do anyway).
>> Now consider
>> sum(integrate(1/(x^i+1),x),i,0,1)
>> This currently correctly evaluates to x/2+log(x+1).
>> Under the proposal, however, it would evaluate to a sum
>> of noun forms, since the integral does not exist in
>> closed form. I think we can live with that; an ev
>> (...,integrate) takes care of it.
>> But I still like the proposal. After all, if you set the result
>> of the integration expression above to a temporary
>> variable (which seems like a sensible thing to do), you
>> will run into the original bad behavior.
> _______________________________________________
> Maxima mailing list
> Maxima@www.math.utexas.edu
> http://www.math.utexas.edu/mailman/listinfo/maxima
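For comparison, a sketch of how SymPy (a Python CAS, offered here only as an outside analogue, not as a proposal for Maxima) handles the same issue: the summation index is a bound dummy, and the summand is substituted term by term only when .doit() is called:

```python
from sympy import Sum, symbols

i, n = symbols("i n")

# Finite sum: the index i is bound, substitution happens at .doit() time
s_finite = Sum(i**2, (i, 0, 2)).doit()   # 0 + 1 + 4
print(s_finite)                          # 5

# Symbolic upper limit: stays a noun form until explicitly evaluated
s_symbolic = Sum(i**2, (i, 0, n))
print(s_symbolic.doit())                 # closed form in n
```

The noun-form behaviour for symbolic limits roughly parallels the 'SUM(...) results in (D3)-(D5) above.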
{"url":"https://www.ma.utexas.edu/pipermail/maxima/2003/004870.html","timestamp":"2014-04-21T02:14:22Z","content_type":null,"content_length":"9684","record_id":"<urn:uuid:6d98c76d-3720-4155-a1ba-a39e359843f9>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
Irvington, NJ SAT Tutor

Find an Irvington, NJ SAT Tutor

...I help students by going over techniques, maybe introducing new ways to do certain things. For the science classes, I find flash cards and games designed to drill information to be helpful. For math subjects, working through problems together is the best thing.
8 Subjects: including SAT math, biology, algebra 1, elementary math

...Sorry! I got 5s in the following AP tests: Physics B, Physics C Mechanics, Physics C E&M. I have been designing websites in HTML and CSS for several years. (I have implemented a bit of PHP as well.) I served as Webmaster of multiple student organizations in college, such as Princeton's chapter of ASME and Princeton Pro-Life. I have a BSE in Mechanical Engineering from Princeton.
26 Subjects: including SAT math, SAT writing, physics, English

...I help my students get in the habit of making reminders for themselves of what they need to do, using such strategies as assignment sheets, daily schedules, and "to do" lists. I teach older students how to take notes from both oral presentations and textbooks. I help my students form lists of the main ideas or concepts.
39 Subjects: including SAT reading, SAT writing, geometry, SAT math

...I am a high school senior who loves math, science, and economics, and I am ready to share that passion for learning with others. I am confident that with hard work and a little bit of guidance, any student can excel academically. I have taken AP-level Biology, Chemistry, Physics, Calculus and M...
19 Subjects: including SAT writing, SAT math, SAT reading, chemistry

...I am excellent at writing papers and using the correct format and grammar and can help any student prepare for any test. Spelling is something that has come naturally to me my entire life. However, I do understand the ins and outs of spelling and rules because I teach phonics to English language learners and can help any student improve their spelling skills.
18 Subjects: including SAT reading, SAT writing, reading, English
{"url":"http://www.purplemath.com/irvington_nj_sat_tutors.php","timestamp":"2014-04-18T13:38:08Z","content_type":null,"content_length":"24042","record_id":"<urn:uuid:0cf04982-0a96-44ce-bfd0-e82043f85b08>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Generators for congruence group $\Gamma(2)$

Is the congruence group $\Gamma(2)$ generated by the upper triangular matrix $(1, 2; 0, 1)$ and the lower triangular matrix $(1, 0; 2, 1)$, or does one need to also throw in the negation of the identity? To be specific, how do I check that the negation of the identity is not a word in the above matrices?

3 Answers

Yes, you need to throw in $-I$. Check that the set of all matrices of the form $$\left(\begin{matrix} a&b\\ c&d \end{matrix}\right)$$ with $b$ and $c$ even and $a \equiv d \equiv 1 \pmod{4}$ is a subgroup of the modular group.

It's Exercise 6 on p. 34 in M. Yoshida's "Hypergeometric functions---my love": the group $\Gamma(2)$ is generated by the $[1,2;0,1]$, $[1,0;2,1]$ and... $[-1,0;0,-1]$. – Wadim Zudilin Jun 27 '10 at 12:12
And Robin explains why the minus identity is not in the group generated by two others. – Wadim Zudilin Jun 27 '10 at 12:54
Wadim, exercise 6 does not answer the question. – Chris Judge Jun 27 '10 at 17:01

There is already an answer posted, but I can't resist making two remarks. The first gives an alternate proof that also works for $\Gamma_n(p)$ for all $n$ and $p$ (and also gives a minimal generating set for these groups, at least when $n \geq 3$). The second says a little more about $\Gamma_2(2)$. By the way, $p$ doesn't have to be prime.

1) Let us define a surjective homomorphism $f : \Gamma_n(p) \rightarrow \mathfrak{sl}_n(\mathbb{Z}/p\mathbb{Z})$. An element $M \in \Gamma_n(p)$ is of the form $M = \mathbb{I}_n + p A$ for some matrix $A$. Define $f(M) = A$ mod $p$. Amazingly enough, this is a homomorphism! Indeed, if $N = \mathbb{I}_n + p B$, then $$f(MN) = f((\mathbb{I}_n + p A)(\mathbb{I}_n + p B)) = f(\mathbb{I}_n + p(A+B) + p^2 AB) = A+B$$ modulo $p$. This is sort of like a derivative!
It is an easy exercise to check that the image of $f$ lies in $\mathfrak{sl}_n(\mathbb{Z}/p\mathbb{Z})$. To check that $f$ is surjective, let $e_{ij}$ for $i \neq j$ be the identity matrix with a $1$ inserted into the $(i,j)$ position. Then $f(e_{ij}^p)$ is the matrix with a $1$ in the $(i,j)$ position and zeros elsewhere. To get the diagonal matrices, define $f_i$ for $1 \leq i < n$ to be the result of inserting the 2x2 matrix $(1+p,p;-p,1-p)$ into the identity matrix with its upper left entry at position $(i,i)$. Then $f(f_i)$ is the matrix with a $1$ at positions $(i,i)$ and $(i,i+1)$, a $-1$ at positions $(i+1,i)$ and $(i+1,i+1)$, and zeros elsewhere. The existence of $f$ implies immediately that $\Gamma_n(p)$ is not generated by the elementary matrices $e_{ij}^p$. A theorem of Lee and Szczarba says that in fact $f$ gives the abelianization of $\Gamma_n(p)$ for $n \geq 3$. Thus for $n \geq 3$ we have $[\Gamma_n(p),\Gamma_n(p)] = \ker f = \Gamma_n(p^2)$. One can check (I've never seen this in print) that $\Gamma_n(p)$ is generated by the $e_{ij}^p$ and the $f_i$ when $n \geq 3$. For the case $n=2$, see the answers to my question here.

2) In fact, we have $\Gamma_2(2) \cong F_2 \times (\mathbb{Z}/2\mathbb{Z})$. Here $F_2$ is a rank $2$ free group generated by $e_{12}^2$ and $e_{21}^2$ and $\mathbb{Z}/2\mathbb{Z}$ is generated by the central element $(-1,0;0,-1)$. This can be proved in many ways: I leave it as a fun exercise!

Quite beautiful! Thank you for adding this. – Chris Judge Jun 27 '10 at 17:18

This follows from the fact that the image of $\Gamma(2)$ in $\text{PSL}_2(\mathbb{Z})$ is freely generated by the two matrices you describe. There is a geometric proof of this fact based on the fact that $\Gamma(2)$ acts properly discontinuously on the upper half plane $\mathbb{H}$, which I sketch here.
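Robin's congruence criterion can be sanity-checked numerically: every word in the two generators (and their inverses) satisfies "b, c even and a, d congruent to 1 mod 4", while -I does not. A small brute-force sketch (checking words only up to length 6; the subgroup argument in the answer is what makes the conclusion rigorous):

```python
import itertools
import numpy as np

A  = np.array([[1, 2], [0, 1]])
B  = np.array([[1, 0], [2, 1]])
Ai = np.array([[1, -2], [0, 1]])   # A^{-1}
Bi = np.array([[1, 0], [-2, 1]])   # B^{-1}

def invariant(M):
    """b, c even and a = d = 1 (mod 4) -- Robin's subgroup condition."""
    a, b, c, d = M.ravel()
    return b % 2 == 0 and c % 2 == 0 and a % 4 == 1 and d % 4 == 1

# Every word of length <= 6 in the generators satisfies the invariant
for word in itertools.chain.from_iterable(
        itertools.product([A, B, Ai, Bi], repeat=k) for k in range(1, 7)):
    M = np.eye(2, dtype=int)
    for g in word:
        M = M @ g
    assert invariant(M)

# -I violates it (since -1 = 3 mod 4), so -I cannot be a word in A and B
assert not invariant(-np.eye(2, dtype=int))
print("invariant holds for all short words; -I fails it")
```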
{"url":"http://mathoverflow.net/questions/29700/generators-for-congruence-group-gamma2","timestamp":"2014-04-16T22:51:28Z","content_type":null,"content_length":"61848","record_id":"<urn:uuid:bb2b0fec-4c98-4d7e-806f-f8a0fbf1987a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 11 - 20 of 37 "... A relational database may not satisfy certain integrity constraints (ICs) for several reasons. However most likely most of the information in it is still consistent with the ICs. The answers to queries that are consistent with the ICs can be considered semantically correct answers, and are characteri ..." Cited by 32 (21 self) Add to MetaCart A relational database may not satisfy certain integrity constraints (ICs) for several reasons. However most likely most of the information in it is still consistent with the ICs. The answers to queries that are consistent with the ICs can be considered semantically correct answers, and are characterized [2] as ordinary answers that can be obtained from every minimally repaired version of the database. In this paper we address the problem of specifying those repaired versions as the minimal models of a theory written in Annotated Predicate Logic [27]. It is also shown how to specify database repairs using a disjunctive logic program with annotation arguments and a classical stable model semantics. - In Proceedings of KR’02 , 2002 "... An intelligent agent may receive information about its environment from several different sources. How should the agent merge these items of information into a single, consistent piece? Taking our lead from the contraction + expansion approach to belief revision, we envisage a two-stage approach to ..." Cited by 28 (1 self) Add to MetaCart An intelligent agent may receive information about its environment from several different sources. How should the agent merge these items of information into a single, consistent piece? Taking our lead from the contraction + expansion approach to belief revision, we envisage a two-stage approach to this problem. The first stage consists of weakening the individual pieces of information into a form in which they can be consistently added together.
The second, trivial, stage then consists of simply adding together the information thus obtained. This paper is devoted mainly to the first stage of this process, which we call social contraction. We consider both a postulational and a procedural approach to social contraction. The latter builds on the author's framework of belief negotiation models. With the help of Spohn-type rankings we provide two possible instantiations of this extended framework. This leads to two interesting concrete families of social contraction functions. © 2005 Elsevier B.V. All rights reserved. , 2003 "... We consider here scalar aggregation queries in databases that may violate a given set of functional dependencies. We define consistent answers to such queries to be greatest-lower and least-upper bounds on the value of the scalar function across all (minimal) repairs of the database. We show how to compute such answe ..." Cited by 28 (5 self) Add to MetaCart We consider here scalar aggregation queries in databases that may violate a given set of functional dependencies. We define consistent answers to such queries to be greatest-lower and least-upper bounds on the value of the scalar function across all (minimal) repairs of the database. We show how to compute such answers. We provide a complete characterization of the computational complexity of this problem. We also show how tractability can be improved in several special cases (one involving a novel application of Boyce-Codd Normal Form) and present a practical hybrid query evaluation method. - In Proc. International Conference on Database Theory (ICDT 05), Springer LNCS 3363, 2005 , 2005 "... Abstract. We propose a generalization of the well-known Magic Sets technique to Datalog ¬ programs with (possibly unstratified) negation under stable model semantics. Our technique produces a new program whose evaluation is generally more efficient (due to a smaller instantiation), while preserving ..." Cited by 25 (4 self) Add to MetaCart Abstract.
We propose a generalization of the well-known Magic Sets technique to Datalog ¬ programs with (possibly unstratified) negation under stable model semantics. Our technique produces a new program whose evaluation is generally more efficient (due to a smaller instantiation), while preserving soundness under cautious reasoning. Importantly, if the original program is consistent, then full query-equivalence is guaranteed for both brave and cautious reasoning, which turn out to be sound and complete. In order to formally prove the correctness of our Magic Sets transformation, we introduce a novel notion of modularity for Datalog ¬ under the stable model semantics, which is relevant per se. We prove that a module can be evaluated independently from the rest of the program, while preserving soundness under cautious reasoning. For consistent programs, both soundness and completeness are guaranteed for brave reasoning and cautious reasoning as well. Our Magic Sets optimization constitutes an effective method for enhancing the performance of data-integration systems in which query-answering is carried out by means of cautious reasoning over Datalog ¬ programs. In fact, preliminary results of experiments in the EU project INFOMIX, show that Magic Sets are fundamental for the scalability of the system. 1 - IN BERTOSSI ET AL , 2004 "... We address the problem of minimal-change integrity maintenance in the context of integrity constraints in relational databases. Using the framework proposed by Arenas, Bertossi, and Chomicki [4], we focus on two basic computational issues: repair checking (is a database instance a repair of a given ..." Cited by 19 (3 self) Add to MetaCart We address the problem of minimal-change integrity maintenance in the context of integrity constraints in relational databases. Using the framework proposed by Arenas, Bertossi, and Chomicki [4], we focus on two basic computational issues: repair checking (is a database instance a repair of a given database?) 
and consistent query answers (is a tuple an answer to a given query in every repair of a given database?). We study the computational complexity of both problems, delineating the boundary between the tractable and the intractable. We review relevant semantical issues and survey different computational mechanisms proposed in this context. Our analysis sheds light on the computational feasibility of minimal-change integrity maintenance. The tractable cases should lead to practical implementations. The intractability results highlight the inherent limitations of any integrity enforcement mechanism, e.g., triggers or referential constraint actions, as a way of performing minimal-change integrity maintenance. - Journal of Applied Logic "... We address the problem of retrieving certain and consistent answers to queries posed to a mediated data integration system with open sources under the local-as-view paradigm using conjunctive and disjunctive view definitions. For obtaining certain answers a query program is run on top of a norma ..." Cited by 15 (3 self) Add to MetaCart We address the problem of retrieving certain and consistent answers to queries posed to a mediated data integration system with open sources under the local-as-view paradigm using conjunctive and disjunctive view definitions. For obtaining certain answers a query program is run on top of a normal deductive database with choice that defines the class of minimal legal instances of the integration system under the cautious stable model semantics. This methodology works for all monotone Datalog queries. To compute answers to queries that are consistent wrt given global integrity constraints, the specification of minimal legal instances is combined with another disjunctive deductive database that specifies the repairs of those legal instances. This allows to retrieve the consistent answers to any Datalog query, for any set of universal and acyclic referential integrity constraints. "... Abstract. 
Answer set programming (ASP) with disjunction offers a powerful tool for declaratively representing and solving hard problems. Many NP-complete problems can be encoded in the answer set semantics of logic programs in a very concise and intuitive way, where the encoding reflects the typical ..." Cited by 14 (3 self) Add to MetaCart Abstract. Answer set programming (ASP) with disjunction offers a powerful tool for declaratively representing and solving hard problems. Many NP-complete problems can be encoded in the answer set semantics of logic programs in a very concise and intuitive way, where the encoding reflects the typical “guess and check ” nature of NP problems: The property is encoded in a way such that polynomial size certificates for it correspond to stable models of a program. However, the problem-solving capacity of full disjunctive logic programs (DLPs) is beyond NP, and captures a class of problems at the second level of the polynomial hierarchy. While these problems also have a clear “guess and check ” structure, finding an encoding in a DLP reflecting this structure may sometimes be a non-obvious task, in particular if the “check ” itself is a co-NP-complete problem; usually, such problems are solved by interleaving separate guess and check programs, where the check is expressed by inconsistency of the check program. In this paper, we present general transformations of head-cycle free (extended) disjunctive logic programs into stratified and positive (extended) disjunctive logic programs based on meta-interpretation techniques. The answer sets of the original and the transformed program are in simple correspondence, and, moreover, inconsistency of the original program is indicated by a designated answer set of the transformed program. 
Our transformations facilitate the integration of separate “guess ” and “check” - Ninth International Workshop on Non-Monotonic Reasoning (NMR02), Special Session: Changing and Integrating Information: From Theory to Practice , 2002 "... Consistent answers from a relational database that violates a given set of integrity constraints are characterized [Arenas et al. 1999] as ordinary answers that can be obtained from every repaired version of the database. In this paper we address the problem of specifying the repairs of a database a ..." Cited by 13 (7 self) Add to MetaCart Consistent answers from a relational database that violates a given set of integrity constraints are characterized [Arenas et al. 1999] as ordinary answers that can be obtained from every repaired version of the database. In this paper we address the problem of specifying the repairs of a database as the minimal models of a theory written in Annotated Predicate Logic [Kifer et al. 1992a]. The specification is then transformed into a disjunctive logic program with annotation arguments and a stable model semantics. From the program, consistent answers to first order queries are obtained. - In Seipel, D., & Turell-Torres, J. (Eds.), Proc. 3rd Int. Symp. on Foundations of Information and Knowledge Systems (FoIKS’04), No. 2942 in LNCS , 2004 "... Abstract. We introduce a simple and practically efficient method for repairing inconsistent databases. The idea is to properly represent the underlying problem, and then use off-the-shelf applications for efficiently computing the corresponding solutions. Given a possibly inconsistent database, we r ..." Cited by 12 (2 self) Add to MetaCart Abstract. We introduce a simple and practically efficient method for repairing inconsistent databases. The idea is to properly represent the underlying problem, and then use off-the-shelf applications for efficiently computing the corresponding solutions. 
Given a possibly inconsistent database, we represent the possible ways to restore its consistency in terms of signed formulae. Then we show how the ‘signed theory ’ that is obtained can be used by a variety of computational models for processing quantified Boolean formulae, or by constraint logic program solvers, in order to rapidly and efficiently compute desired solutions, i.e., consistent repairs of the database. 1
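The repair/consistent-answer semantics that these papers build on can be illustrated with a toy example. The relation, the functional dependency, and the tuple-deletion repair semantics below are a minimal sketch for illustration, not taken from any one of the cited papers:

```python
from itertools import combinations

# Toy instance of Emp(name, salary) violating the FD name -> salary
db = [("alice", 50), ("alice", 60), ("bob", 40)]

def satisfies_fd(rel):
    """True if no name is paired with two different salaries."""
    seen = {}
    return all(seen.setdefault(n, s) == s for n, s in rel)

# Repairs = maximal consistent subsets (tuple-deletion semantics)
consistent = [set(c) for r in range(len(db), 0, -1)
              for c in combinations(db, r) if satisfies_fd(c)]
repairs = [s for s in consistent if not any(s < t for t in consistent)]

# A consistent answer is one that holds in *every* repair: the query
# "which names occur?" has the same answer in both repairs here,
# while "what is alice's salary?" does not.
consistent_names = set.intersection(*({n for n, _ in r} for r in repairs))
print(len(repairs))        # 2: one repair per choice of alice's salary
print(consistent_names)    # both names survive in every repair
```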
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=183101&sort=cite&start=10","timestamp":"2014-04-20T14:15:51Z","content_type":null,"content_length":"39905","record_id":"<urn:uuid:f4de0d54-e8e6-44e2-aefe-c3f1413beb46>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
Sawtooth VCO

The figure above shows my current implementation of the popular integrator-with-reset sawtooth VCO. Very briefly, the circuit operates as follows. Capacitor C2 and op amp OA3 form an active integrator which is driven by current from an exponential current source built around op amp OA2 and the differential transistor pair Q1-Q2. When the integrator output reaches the 4-V threshold set by OA4, the comparator U1 puts out a short voltage pulse which turns on the FET switch Q3, thereby discharging C2. The charging cycle then begins again. The input control signals are summed and scaled by OA1, with temperature compensation provided by R11. Finally, OA5 scales the output to a 10-V peak value.

The design is a modification of the original circuit of Terry Michaels (Electronotes, v. 62). The main goals of the present design were to reduce the temperature drift and to improve speed and tuning accuracy by employing modern op amps.

For improved stability and temperature drift, on-board +/- 6.9 V regulated supplies were added (circuitry around the LM329 chips in the upper right corner of the figure). These are used to supply the critical voltages in the circuitry for: 1) the coarse frequency control (R3), 2) the reference current in the exponential converter (OA2) and 3) the ramp reset point (OA4).

For improved op-amp performance in critical parts of the circuit -- OA1, OA2 and OA3 -- I selected the Burr Brown OPA132. This chip was chosen for its combination of low input current (5 pA), fast slew rate (20 V/usec), high stability and moderate price (under $4). For the comparator U1, I used the LM319, which is somewhat faster than the original LM311. The capacitor discharge switch chosen is the 2N4391 JFET (Q3). It works almost identically to the original KE4859, which is still available but a bit harder to obtain. The exponential converter uses the LM394CH matched transistor pair (Q1-Q2).
Determining temperature drift can be tricky, as gradients and drafts can complicate the measurement. The prototype circuit was built on a plug-in board, which in turn was mounted on a wooden board. This assembly was then placed on a folded-over heating pad and covered with a plastic lid. This seemed to give fairly reproducible results. Temperature was monitored by an LM335 sensor heat sinked to the expo converter. First I looked at the oscillator section alone, without the expo converter, and was surprised to find quite a bit of temperature drift. Probing around with heat from a soldering iron tip proved that the discharge FET Q3 was the sensitive component. The problem turned out to be that the discharge time was too short, causing a variation in the discharge level due to the temperature dependence of the FET's channel ("on") resistance. Increasing the discharge-timing cap C4 eliminated this source of drift. The residual drift was quite small, perhaps about 200 ppm/K, but it was hard to get a reliable measurement at this level. Next I hooked up the expo converter and looked at the overall drift. Converter drift actually involves a temperature-dependent scale factor. In other words, when the differential base-emitter voltage (dV[be]) of the Q1-Q2 pair is zero, there is no temperature dependence of the output current, and voltages above and below zero give positive and negative dependences, respectively. Furthermore, these changes increase with increasing magnitude of dV[be]. So when someone quotes a value for VCO temperature drift you have to look closely at how this relates to the operating point of the converter. Conversely, converters should be designed so that dV[be]=0 corresponds to the center of the critical audio range, around 1-2 kHz. It turned out that the temperature drift of the full VCO was in the opposite direction from that of the uncompensated converter. This is sensible, as the Q81 resistor R11 has a tempco of 3500 ppm/K vs. 
the 3300 ppm/K (or so) needed for compensation. This discrepancy was easily fixed by adding a small metal film resistor R12 in series with the Q81, producing a composite with a smaller tempco. For this circuit a value of 174 ohms worked well. The final result is an essentially negligible temperature dependence of the VCO. At frequencies of 1 kHz and 3 kHz I observed zero frequency change (i.e., under 1 Hz) over a 10 K temperature range. At 200 Hz the drift was 0.5 Hz over a 15 K range. So the drift is less than 0.1 Hz/K or alternately less than 200 ppm/K (or 0.35 cents/K) over the range of 200 Hz - 3 kHz. The circuit's tracking accuracy is perfect, as close as I could measure it. This was done by setting the frequency at the octave values between 110 Hz and 28160 Hz (by beating against a 440 Hz crystal-controlled oscillator) and measuring the corresponding input voltages. Within the 1 mV accuracy of the DVM used, these were in exact 1.000 V steps, indicating better than 0.1% accuracy. The switching reset time is under 0.5 usec, consistent with the OPA132 slew rate and settling time.
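The series-resistor compensation trick can be checked with quick arithmetic: the composite tempco of two resistors in series is the resistance-weighted average of their individual tempcos. The Q81 resistance used below (about 2.87 k) is an assumption back-solved from the 3500/3300 ppm/K figures and the 174-ohm trim; the text itself does not quote the Q81 value:

```python
def composite_tempco(r1, tc1, r2, tc2=0.0):
    """Resistance-weighted average tempco (ppm/K) of two series resistors."""
    return (r1 * tc1 + r2 * tc2) / (r1 + r2)

# Q81 (3500 ppm/K) in series with a 174-ohm metal-film resistor (~0 ppm/K).
# The 2870-ohm Q81 value is assumed, not stated in the text.
print(round(composite_tempco(2870, 3500, 174)))  # ~3300 ppm/K, on target
```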
Advice for prospective graduate students

Every noble work is at first impossible. --Thomas Carlyle

What students am I looking for?

• Independent thinkers. How can you prove that you are an independent thinker? Example: You have published a paper that builds on your work. However, not all papers count. If you did some experiments in a paper and are merely presenting the results, without trying to teach a lesson that generalizes beyond the scope of the experiments, this does not prove that you are an independent thinker. The lesson should be novel. We all know that genetic algorithms, given enough time and data, can find good solutions. Demonstrating this in yet another experiment in yet another domain represents marginal value.

• Skills in math and computer science. What subjects? Linear algebra, calculus, probability and statistics, algorithms & data structures, and a bit of algorithmic complexity. You should be good at solving problems and possibly at programming (if you are very good at math, I don't care if you can program or not; you can pick that up later). To check your problem solving abilities, in an interview I will likely ask you to solve simple math problems. Ones that I would expect a really good high school student to solve. If you can e.g. solve the logic puzzle, probability and elementary calculus and number theory problems that go under the name 'puzzle sheets' on this page, then you are in good shape!

• Knowledge of ML. This is required for PhD students and is a plus for Masters students. You want to prove that you have a good grasp of some of what is happening in machine learning these days. How do you do this? Pick a paper from a recent machine learning conference, like ICML or NIPS, or you can also pick a journal paper from MLJ or JMLR, read it, understand and interpret it and then explain to me what is in the paper. What are the strengths and the weaknesses of the paper? What are the remaining outstanding issues?
Or why did you like, or did not like, the paper? Presumably the paper would be in your area of interest, matching some of my interests. ICML and NIPS papers are online, just like JMLR papers. E.g., you can find ICML'2007 papers here, NIPS'2007 papers here, and JMLR papers here. Luckily, these days anyone has a good chance to follow the literature because the vast majority of publications is freely accessible on-line.

• Familiarity with some of my previous work. It is a plus if you are familiar with my recent work. Look at my slides, my papers. How am I going to check this? You pick a topic and you will explain the contents to me. Again, I am interested in your ideas related to my work. So merely repeating what my work was about (or a few keywords) will not help you. My publications can be accessed here.

• Good grades and letters, TOEFL/GRE scores. I do care about grades and letters, at least to some extent. Same for TOEFL/GRE scores. Admission to our program is competitive, hence having good records on paper is insufficient. If you have bad grades in math or weak letters, you will not have much of a chance. However, I typically give a chance to people who have some bad grades if they are likely to meet my other criteria (independent thinkers, skills in math, knowledge of ML).

What happens in a (phone/skype) interview? I call you, we chat and I will ask questions related to the above items. You will also have a chance to ask questions of me. The purpose of the phone interview is to give me a chance to assess your knowledge, the way you think and your English skills. It is a good idea to be prepared to answer my questions related to the above. In the phone interview I will ask specifics. No need to talk about how much you love AI/RL/ML, ..., I know you do, or else you would not have applied.

Should you send me an e-mail? If you think you meet the above criteria, I would love to hear from you, explaining why you think you meet these criteria.
Please discuss the criteria one by one in your e-mail. If we met before, you should mention that in your e-mail. In any case, if you want to apply here to become a student of mine, you have to submit your application through the system. We do award fellowships to well qualified students for both our MSc and PhD programs. You can read about AICML, the machine learning center associated with the department. Please note that it is not me, but the graduate committee of our department, who is making decisions about admittance of students.

What to call me in your e-mail? I know this causes a lot of headache. My preference is "Csaba".

More information: Michael Steele's "Advice for Graduate Students" is a piece about what research is like in grad school. The take home message is that if people in the grad school are doing their job well, you will feel stupid. However, everyone in research feels stupid (not knowing what lies ahead), so grad school is for you only if you don't mind this feeling:)
3rd grade word problem lesson

LESSON PLAN—Grade 3 Math: Word Problems (55 minutes)
By Rosy Audette, November 20, 2010

Table 1: Defining Success

OBJECTIVE. What will your students know, understand or be able to do by the end of class?
SWBAT identify important parts of a word problem by using manipulatives to visually represent a given problem in order to solve singular multiplication problems.

ASSESSMENT. How will you know concretely that all of your students have mastered the objective?
Students will underline necessary information for a given word problem. Students will solve simple word problems with objects.

KEY POINTS. What three to five main ideas or steps will you emphasize in your lesson?
• Only certain information is important in word problems.
• Sometimes you need to see something in order to understand it.

Table 2: Lesson Cycle

OPENING. How will you focus, prepare and engage students for the lesson's objective?
Do Now: a sentence "Ms. Audette is my teacher today" with many unrelated words in between. The students will try to figure out what words are important and which ones are not. The teacher will explain that this is exactly what we will be doing today. "Today we will be figuring out what information is important within word problems." (3)

MATERIALS.
• Poster of written-out word problems related to the objects
• Objects: toys, books, markers, etc.
• Group job poster: one underliner, one reader, two object handlers

INTRODUCTION OF NEW MATERIAL. How will you convey the knowledge and/or skills of the lesson? What will your students be doing to process this?
Teacher will explain that a word problem is just another way of asking a math question (i.e., 5+7). Within each problem there are numbers that represent the question. The teacher will have manipulatives for examples of real life problems. She will explain that using objects is another way of figuring out problems. The teacher will ask herself a question (identical to the word problem written on a poster) and model how she would find the relevant information. (10)

GUIDED PRACTICE. In what ways will your learners attempt to explain or do what you have outlined? How will you monitor and coach their performance?
The class will go through two more examples with the teacher by having the class decide what information they need to know. (10)

INDEPENDENT PRACTICE. In what ways will your different learners attempt the objective on their own? How will you gauge mastery?
Each table will be a station. The students will each bring their math journals with them. For each problem, each child must write the numerical equivalent and answer in their journals. The student groups will have five minutes at each station. The stations will have a word problem to be solved as a group and a set of manipulatives that match the word problem. The group will underline the important information in the problem. From this information, the students will use the objects to create a visual representation of the word problem. Then they will solve the problem. The teacher will be monitoring behavior and progress. She will also use the answered word problems to assess mastery.

CLOSING. How will you have students summarize what they've learned? How will you reinforce the objective's importance and its link to past and future learning?
To check for understanding and to summarize, the teacher will have a problem on the board that involves the students in the room. They have to decide what information is important. The teacher will reiterate the need to distinguish important information from a word problem in order to figure it out, and that using manipulatives is a way of organizing thoughts. (5)

Table 3

DIFFERENTIATION: How will you differentiate your instruction to reach the diversity of learners in your classroom?
I will group the students by separating out the math groups from math/literacy in order to make the groups heterogeneous. I will also reinforce the need for each student to help others when needed.
What is this shape called? [Archive] - MacRumors Forums
Nov 4, 2010, 11:28 AM
I need to look into the usage of a particular shape/design, and I really need to know what it's called - if it has a name - before I can even search for it. Here's a quick sketch:
So, it's formed from a circle, and overlaid with a square with the length of the sides equal to the radius of the circle, so that the corner of the square is at the centre of the circle. However, it's not in any of the lists of geometric shapes that I've seen, maybe because it's not a geometric shape with an equation to produce it. Does anyone know if there's a name for this shape? Many thanks.
Most general form for calculating derivatives using limits.
November 5th 2009, 03:25 PM #1
Junior Member
Dec 2008

Right now I am working on finding the most general form for calculating a derivative using limits. Right now I have:

$\lim_{h\rightarrow0} \frac{f(x+nh)-f(x-zh)}{h(n+z)} = f'(x)$

I need to incorporate this finite differencing expression into the above formula. The finite differencing expression is also equal to the derivative. Any help would be greatly appreciated. Thank you. I know it has something to do with weighted averages, but I can't wrap my head around it.

I figured it out, if any of you are interested. If you set $\lim_{h\rightarrow0} \frac{f(x+nh)-f(x-zh)}{h(n+z)} = P$, then P is the derivative. Using a weighted average and the finite differencing formula, the most general form for calculating a first derivative is:

$\frac{\sum_{i = 1}^{n}a_{i}P_{i}}{\sum_{i = 1}^{n}a_{i}}$
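A quick check (not part of the original thread) of why the generalized difference quotient above really does converge to $f'(x)$: expand both terms in Taylor series about $x$,

$f(x+nh) = f(x) + nh\,f'(x) + \tfrac{(nh)^2}{2}f''(x) + O(h^3)$

$f(x-zh) = f(x) - zh\,f'(x) + \tfrac{(zh)^2}{2}f''(x) + O(h^3)$

Subtracting and dividing by $h(n+z)$ gives

$\frac{f(x+nh)-f(x-zh)}{h(n+z)} = f'(x) + \tfrac{h}{2}(n-z)\,f''(x) + O(h^2)$

so the error vanishes as $h\rightarrow0$, and choosing $n=z$ (a central difference) even cancels the first-order error term.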
Linear Matrix Inequalities in System and Control Theory
Stephen Boyd, Laurent El Ghaoui, Eric Feron, and Venkataramanan Balakrishnan
Studies in Applied Mathematics 15

In this book the authors reduce a wide variety of problems arising in system and control theory to a handful of convex and quasiconvex optimization problems that involve linear matrix inequalities. These optimization problems can be solved using recently developed numerical algorithms that not only are polynomial-time but also work very well in practice; the reduction therefore can be considered a solution to the original problems. This book opens up an important new research area in which convex optimization is combined with system and control theory, resulting in the solution of a large number of previously unsolved problems.

This book is primarily intended for researchers in system and control theory; both the beginner and the advanced researcher will find the book useful. Researchers in convex optimization will find this book a source of optimization problems for which algorithms need to be devised. A background in linear algebra, elementary analysis, and exposure to differential equations and system and control theory is recommended.

Preface; Chapter 1: Introduction. Overview; A Brief History of LMIs in Control Theory; Notes on the Style of the Book; Origin of the Book; Chapter 2: Some Standard Problems Involving LMIs. Linear Matrix Inequalities; Some Standard Problems; Ellipsoid Algorithm; Interior-Point Methods; Strict and Nonstrict LMIs; Miscellaneous Results on Matrix Inequalities; Some LMI Problems with Analytic Solutions; Chapter 3: Some Matrix Problems. Minimizing Condition Number by Scaling; Minimizing Condition Number of a Positive-Definite Matrix; Minimizing Norm by Scaling; Rescaling a Matrix Positive-Definite; Matrix Completion Problems; Quadratic Approximation of a Polytopic Norm; Ellipsoidal Approximation; Chapter 4: Linear Differential Inclusions.
Differential Inclusions; Some Specific LDIs; Nonlinear System Analysis via LDIs; Chapter 5: Analysis of LDIs: State Properties. Quadratic Stability; Invariant Ellipsoids; Chapter 6: Analysis of LDIs: Input/Output Properties. Input-to-State Properties; State-to-Output Properties; Input-to-Output Properties; Chapter 7: State-Feedback Synthesis for LDIs. Static State-Feedback Controllers; State Properties; Input-to-State Properties; State-to-Output Properties; Input-to-Output Properties; Observer-Based Controllers for Nonlinear Systems; Chapter 8: Luré and Multiplier Methods. Analysis of Luré Systems; Integral Quadratic Constraints; Multipliers for Systems with Unknown Parameters; Chapter 9: Systems with Multiplicative Noise. Analysis of Systems with Multiplicative Noise; State-Feedback Synthesis; Chapter 10: Miscellaneous Problems. Optimization over an Affine Family of Linear Systems; Analysis of Systems with LTI Perturbations; Positive Orthant Stabilizability; Linear Systems with Delays; Interpolation Problems; The Inverse Problem of Optimal Control; System Realization Problems; Multi-Criterion LQG; Nonconvex Multi-Criterion Quadratic Problems; Notation; List of Acronyms; Bibliography; Index. Royalties from the sale of this book are contributed to the SIAM Student Travel Fund. 1994 / ix + 193 pages / Softcover / ISBN-13: 978-0-898714-85-2 / ISBN-10: 0-89871-485-0 / List Price $71.50 / SIAM Member Price $50.05 / Order Code AM15
2 masses connected by a string wrapped around a massless pulley, F applied to pulley.
hi tonicandgin! (just got up …)
yes, there are two ways of doing it
you can either start with F = ma for the centre of mass, as your professor suggests, or you can just do F = ma for each of the three bodies separately …
i still feel like there are too many unknowns and not enough equations.
whichever method you choose, write out all the equations you have, and we'll see
Book Details

Heritage of European Mathematics
Peter M. Neumann (University of Oxford, UK)
The mathematical writings of Évariste Galois
Corrected 2nd printing, September 2013
ISBN 978-3-03719-104-0
DOI 10.4171/104
October 2011, 421 pages, hardcover, 17 x 24 cm.
78.00 Euro

Although Évariste Galois was only 20 years old when he died, shot in a mysterious early-morning duel in 1832, his ideas, when they were published 14 years later, changed the course of algebra. He invented what is now called Galois Theory, the modern form of what was classically the Theory of Equations. For that purpose, and in particular to formulate a precise condition for solubility of equations by radicals, he also invented groups and began investigating their theory. His main writings were published in French in 1846 and there have been a number of French editions culminating in the great work published by Bourgne & Azra in 1962 containing transcriptions of every page and fragment of the manuscripts that survive. Very few items have been available in English up to now. The present work contains English translations of almost all the Galois material. They are presented alongside a new transcription of the original French, and are enhanced by three levels of commentary. An introduction explains the context of Galois' work, the various publications in which it appears, and the vagaries of his manuscripts. Then there is a chapter in which the five mathematical articles published in his lifetime are reprinted. After that come the Testamentary Letter and the First Memoir (in which Galois expounded the ideas now called Galois Theory), which are the most famous of the manuscripts. There follow the less well known manuscripts, namely the Second Memoir and the many fragments. A short epilogue devoted to myths and mysteries concludes the text.
The book is written as a contribution to the history of mathematics but with mathematicians as well as historians in mind. It makes available to a wide mathematical and historical readership some of the most exciting mathematics of the first half of the 19th century, presented in its original form. The primary aim is to establish a text of what Galois wrote. Exegesis would fill another book or books, and little of that is to be found here. This work will be a resource for research in the history of mathematics, especially algebra, as well as a sourcebook for those many mathematicians who enliven their student lectures with reliable historical background.

Further Information
Review in Zentralblatt MATH 1237.01011
Review in Bulletin of the London Mathematical Society
Review in MR 2882171 (2012j:01032)
Review in LMS Newsletter, No. 419 November 2012
Notices of the AMS, December 2012, pp. 1565–1568
GSoC Update 4: Solving Sudoku's

In my previous post, I discussed my work on generating Sudoku puzzles. I had planned to try and improve this, and did make some small improvements, for instance adding a check at the end using the existing solver code to check for a unique solution. However, I am not sure if this is a very good method. While looking for inspiration in the python code, I noticed that its generator used the functionality of the solver to help it generate puzzles with a unique solution. So I decided to port the python Sudoku solver and rater directly.

At first this was very hard; the data structures in the python code are quite complicated and it uses them in some quite complicated ways. This is exacerbated by python's use of duck typing. However, once I had worked out how to run the python code from the interpreter, either by pasting in the bit I was interested in, or by just importing the file and executing whole functions, the process went much quicker. I found that looking at a data structure, then working backwards to determine its function, then working out how to replicate that, was far quicker and easier than trying to reproduce the semantics of the code exactly.

I now have written out most of the solver, and understand some of it. The code currently on will work for simple puzzles, but I am having problems determining if the guessing and backtracking part of the solver is correct. This is because when I run the python code, I believe the slight difference in implementation or the hashing function used in the sets in this part causes their contents to be written out in a different order when converted to an array. This, in turn, causes the two solvers to pick different paths, meaning that I can't directly compare the output.
My current plan for solving it, devised while writing this blog post, is to add a function to both solvers that sorts the arrays that currently differ so that they no longer do, then compare the resulting output of the two solvers. Once I have done that, I will go through the code, adding plenty of comments to explain the methodology of the solver (this is once I have worked it out for myself!).
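The sorting idea might look something like this (a sketch with hypothetical names, not the actual solver's API): when a solver has to guess, sorting the candidate set fixes the branch order, so two implementations that agree on the logic will also agree on the path.

```python
# Set iteration order is implementation-defined (it depends on hashing),
# which is why two otherwise-identical solvers can diverge when guessing.
def branch_order(candidates):
    """Return a deterministic guessing order for a candidate set."""
    return sorted(candidates)

print(branch_order({5, 1, 9, 3}))  # [1, 3, 5, 9]
```

With both solvers guessing in this canonical order, their traces become directly comparable.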
Upper Marlboro Trigonometry Tutor Find an Upper Marlboro Trigonometry Tutor ...I came to u.s in 1980 with an in-service scholarship award to study communication engineering at the university of Illinois Chicago campus where i graduated in march 1984. After my graduation,i went back to my country Nigeria to work and retired there having worked in various telecommunication... 7 Subjects: including trigonometry, geometry, algebra 1, algebra 2 ...When I was deployed while in the military, I would even help out students by providing them with 1-to-1 tutoring sessions, as that seems to be the issue with online courses. I love math. I love 17 Subjects: including trigonometry, calculus, algebra 1, algebra 2 ...They need to be challenged with rich problems, while being provided with the tools to tackle those problems with creativity and confidence. With several years of experience teaching math and tutoring, I know how to help students build both their conceptual understanding of mathematics and their ... 16 Subjects: including trigonometry, English, writing, calculus ...However, I'm also extremely competent with vocabulary and grammar. While in High School, I studied Latin up to college level, which has allowed me a much greater understanding of English grammar and the roots of our words. I do not have any professional tutoring experience, but I have had good experiences tutoring my friends and family. 32 Subjects: including trigonometry, reading, algebra 2, calculus ...I spent my last year of undergraduate as an ESL (English as a Second Language) tutor for a pregnancy center that targeted Spanish-speaking woman. There we tailored a curriculum to teach these women conversational English and the essentials that they would need for work, doctor visits, and everyd... 17 Subjects: including trigonometry, Spanish, writing, physics
The Sum Of The First Billion Primes

This problem from Programming Praxis came about in the comments to my last post and intrigued me. So today, we are trying to sum the first one billion primes. Summing the first hundred, thousand, even million primes isn't actually that bad. But it takes a bit more effort when you scale it up to a billion. And why's that?

Before I get started, if you'd like to download today's source code and follow along, you can do so here: billion primes source

Now that that's out of the way, the first problem is time. A naive approach would be to go through all of the numbers from 2 to a billion and test if each is prime. To do that, check divisibility by 2 and then by each odd number up to the square root of the candidate. Simple enough:

; test if n divides m
(define (divides? m n)
  (= 0 (remainder m n)))

; test if n is prime by trial division
(define (prime? n)
  (and (not (divides? n 2))
       (for/and ([i (in-range 3 (+ 1 (ceiling (sqrt n))) 2)])
         (not (divides? n i)))))

; sum the first n primes directly
(define (sum-primes-direct n)
  (let loop ([i 3] [count 1] [sum 2])
    (cond
      [(= count n) sum]
      [(prime? i) (loop (+ i 2) (+ count 1) (+ sum i))]
      [else (loop (+ i 2) count sum)])))

Not too bad, we can sum the first hundred thousand primes pretty easily:

> (time (sum-primes-direct 100000))
cpu time: 3068 real time: 3063 gc time: 79

If we waited a bit longer, we could even get the first billion that way. Still, that's 3 seconds for just 1/10,000th of the problem. I think that we can do better.

What's the next idea? Perhaps, rather than dividing by all of the numbers from 2 up to the number we're dealing with, why don't we just divide by the previous primes:

; sum the first n primes by keeping a list of primes to divide by
(define (sum-primes-list n)
  (let loop ([i 3] [count 1] [sum 2] [primes '()])
    (cond
      [(= count n) sum]
      [(andmap (lambda (prime) (not (divides? i prime))) primes)
       (loop (+ i 2) (+ count 1) (+ sum i) (cons i primes))]
      [else (loop (+ i 2) count sum primes)])))

Simple enough.
And theoretically it should be faster, yes? After all, we're only dividing by primes now. But no. It turns out that it's not actually faster at all. If you cut it down to the first 10,000 primes, the direct solution only takes 91 ms but this solution takes a whopping 9 seconds. That's two whole orders of magnitude. Ouch!

> (time (sum-primes-direct 10000))
cpu time: 91 real time: 90 gc time: 0
> (time (sum-primes-list 10000))
cpu time: 8995 real time: 8987 gc time: 0

At first, you might think that that doesn't make the least bit of sense. After all, we're doing essentially the same thing with fewer divisors. So why isn't it faster? Two reasons. First, the direct version stops dividing at the square root of each candidate, while the list version tests against every prime found so far, no matter how large. Worse, each new prime is consed onto the front of the list, so the primes sit in descending order, and the small primes that would let andmap short-circuit almost immediately are tested last. Second, walking an ever-growing linked list means chasing pointers all over the heap, which is far less cache-friendly than a tight counting loop, and memory access is orders of magnitude slower than any single arithmetic instruction on the CPU. Really, this is a perfect example of the cost of premature optimization: just because something looks faster by a naive operation count, that's not the entire story.

Still, we're not quite done. I know we can do better than the direct method. So this time, let's use a more intricate method, specifically the Sieve of Eratosthenes. The basic idea is to start with a list of all of the numbers you are interested in. Then repeatedly take the first number as prime and cross out all of its multiples. There's a pretty nice graphic on the aforelinked Wikipedia page.
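Before moving on: to put a number on why the prime-list version lost, here's a quick instrumented comparison (a sketch in Python for brevity, not the benchmark code itself) counting the divisions each strategy performs while classifying the odd numbers below a limit:

```python
import math

def divisions_direct(limit):
    # trial division by odd d up to sqrt(n), stopping at the first factor
    total = 0
    for n in range(3, limit, 2):
        for d in range(3, math.isqrt(n) + 1, 2):
            total += 1
            if n % d == 0:
                break
    return total

def divisions_list(limit):
    # divide by every prime found so far, newest (largest) first,
    # mirroring (cons i primes); stop at the first factor, like andmap
    primes, total = [], 0
    for n in range(3, limit, 2):
        is_prime = True
        for p in reversed(primes):
            total += 1
            if n % p == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(n)
    return total

print(divisions_direct(10000), divisions_list(10000))
```

The list strategy does vastly more work: the small primes that would reject most composites immediately sit at the far end of the list.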
And if we just go with a loop, the code is rather straightforward:

; sum the first n primes using the Sieve of Eratosthenes
; algorithm source: http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
(define (sum-primes-sieve-eratosthenes-list n)
  (define-values (lo hi) (guess-nth-prime n))
  (let loop ([ls (range 2 hi)] [count 0] [sum 0])
    (cond
      [(= count n) sum]
      [else
       (loop (filter (lambda (i) (not (divides? i (car ls)))) (cdr ls))
             (+ count 1)
             (+ (car ls) sum))])))

There's one interesting bit: the guess-nth-prime function:

; estimate the nth prime, return lower and upper bounds
; source: http://en.wikipedia.org/wiki/Prime_number_theorem
(define (guess-nth-prime n)
  (values
   (inexact->exact (floor (* n (log n))))
   (inexact->exact
    (ceiling
     (if (<= n 1)
         3 ; guard for tiny n, where (log (log n)) blows up
         (+ (* n (log n)) (* n (log (log n)))))))))

By default, the Sieve of Eratosthenes generates all of the primes from 1 to some number n. But that's not what we want. Instead, we want the first n primes. After a bit of searching though, I found the Wikipedia page on the Prime number theorem. That defines the function pi(n) which approximates the number of primes less than or equal to n. Invert that function and you find that the value of the nth prime p[n] falls in the range:

$n \ln n < p_n < n (\ln n + \ln \ln n)$

(the upper bound holding for n >= 6). That upper bound is the one that lets us generate enough primes with the Sieve of Eratosthenes so that we can sum the first n. The best part is that it turns out that it's at least faster than the list based method:

> (time (sum-primes-sieve-eratosthenes-list 10000))
cpu time: 4347 real time: 4344 gc time: 776

Still. That's not good enough. The problem here is much the same as the list based method: we're constantly rebuilding a list that would eventually have a billion elements in it. Not something that's particularly easy to deal with. So instead of a list, why don't we use a vector of #t/#f?
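As an aside, the nth-prime bound behind guess-nth-prime is easy to spot-check against known values of p[n] (a quick sketch, not part of the original post):

```python
import math

# known nth primes for n = 10, 100, 1000, 10000
known = {10: 29, 100: 541, 1000: 7919, 10000: 104729}

for n, p in known.items():
    lo = n * math.log(n)
    hi = n * (math.log(n) + math.log(math.log(n)))
    print(n, lo < p < hi)  # True for each n here
```

Each known prime lands strictly between the two estimates, so sieving up to the upper bound is guaranteed to produce at least n primes.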
; sum the first n primes using the Sieve of Eratosthenes with a vector
; algorithm source: http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
(define (sum-primes-sieve-eratosthenes-vector n)
  (define-values (lo hi) (guess-nth-prime n))
  (define v (make-vector hi #t))
  (vector-set! v 0 #f)
  (vector-set! v 1 #f)
  (for* ([i (in-range 2 hi)]
         #:when (vector-ref v i)
         [j (in-range (* i i) hi i)])
    (vector-set! v j #f))
  (let loop ([i 3] [count 1] [sum 2])
    (cond
      [(= count n) sum]
      [(vector-ref v i) (loop (+ i 2) (+ count 1) (+ sum i))]
      [else (loop (+ i 2) count sum)])))

So how does it perform?

> (time (sum-primes-sieve-eratosthenes-vector 10000))
cpu time: 6 real time: 6 gc time: 0

Dang that's nice.

> (time (sum-primes-sieve-eratosthenes-vector 1000000))
cpu time: 892 real time: 889 gc time: 2

Less than a second isn't too shabby. It's slower than I'd like, but I could wait a thousand seconds (a bit over 16 minutes) if I had to.

> (time (sum-primes-sieve-eratosthenes-vector 1000000000))
out of memory

Oops. It turns out that calling make-vector to make a billion element vector doesn't actually work so well on my machine... We're going to have to get a little sneakier. Perhaps if we used the bitvectors from Monday's post? (And now you know why I made that library :)). All we have to do is swap out each instance of make-vector, vector-ref, or vector-set! for make-bitvector, bitvector-ref, or bitvector-set!.

> (time (sum-primes-sieve-eratosthenes-bitvector 1000000))
cpu time: 5174 real time: 5170 gc time: 0

So it runs about five times slower than the simple vector based method (which makes sense if you think about it; twiddling bits doesn't come for free). Still, we're using a fair bit less memory. Let's see if it can handle a billion:

> (time (sum-primes-sieve-eratosthenes-bitvector 1000000000))
cpu time: 9724165 real time: 9713671 gc time: 5119

Dang. Nice.
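As an aside, the bitvector library itself isn't shown here (it's from the earlier post), but the underlying idea, one bit per flag instead of a full word, is simple enough to sketch (a hypothetical Python stand-in, not the actual Racket library):

```python
class BitVector:
    """One bit per entry, packed into a bytearray."""
    def __init__(self, size, default=False):
        nbytes = (size + 7) // 8
        self.data = bytearray([0xFF if default else 0x00] * nbytes)

    def __getitem__(self, i):
        return bool(self.data[i >> 3] & (1 << (i & 7)))

    def __setitem__(self, i, value):
        if value:
            self.data[i >> 3] |= 1 << (i & 7)
        else:
            self.data[i >> 3] &= 0xFF ^ (1 << (i & 7))

bv = BitVector(100, default=True)
bv[42] = False
print(bv[42], bv[43])  # False True
```

A billion entries then costs roughly 120 MB instead of a word per flag, which is exactly the trade the timings show: slower per access, far smaller overall.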
2 hrs 41 minutes may be more than twice as long as I was expecting based on the 16 minute estimate from the one million run of the vector version and the 5x slowdown between the vector and bitvector versions. Still, it worked. And that's a pretty good base all by itself. Still, I think we can do better. Upon some searching, it turns out that you can actually store one billion entries in vectors; you just can't put them all in the same vector. So instead, I created another datatype: the multivector. Essentially, the idea is to create several smaller vectors and abstract the ref and set! methods to minimize the changes to the Sieve of Eratosthenes code.

(define-struct multivector (size chunks default data)
  #:constructor-name make-multivector-struct)

(define (make-multivector size [chunks 1] [default #f])
  (define per-chunk (inexact->exact (ceiling (/ size chunks))))
  (make-multivector-struct
   size
   chunks
   default
   (for/vector ([i (in-range chunks)])
     (if (= i (- chunks 1))
         (make-vector (- size (* (- chunks 1) per-chunk)) default)
         (make-vector per-chunk default)))))

; indices are translated by chunk size, not chunk count
(define (multivector-per-chunk mv)
  (inexact->exact (ceiling (/ (multivector-size mv) (multivector-chunks mv)))))

(define (multivector-ref mv i)
  (define per-chunk (multivector-per-chunk mv))
  (vector-ref (vector-ref (multivector-data mv) (quotient i per-chunk))
              (remainder i per-chunk)))

(define (multivector-set! mv i v)
  (define per-chunk (multivector-per-chunk mv))
  (vector-set! (vector-ref (multivector-data mv) (quotient i per-chunk))
               (remainder i per-chunk)
               v))

You can test it if you'd like, but it does work. So let's try it in the Sieve of Eratosthenes. The same as before, just swap out make-vector, vector-ref, or vector-set! for make-multivector, multivector-ref, or multivector-set!. So how does the performance compare?

> (time (sum-primes-sieve-eratosthenes-multivector 1000000))
cpu time: 6635 real time: 6625 gc time: 435

Hmm. Well, it doesn't actually run any faster than the bitvector, but it also doesn't run out of memory.
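The chunking trick itself is language-agnostic. As a rough sketch of the same idea in Python (mine, not the post's code), the whole datatype boils down to ceiling division plus divmod index translation:

```python
class MultiVector:
    """A flat array split across several fixed-size chunks."""

    def __init__(self, size, chunks=1, default=False):
        self.size = size
        self.per_chunk = -(-size // chunks)  # ceiling division
        self.data = []
        remaining = size
        for _ in range(chunks):
            n = min(self.per_chunk, remaining)
            self.data.append([default] * n)  # the last chunk may be shorter
            remaining -= n

    def __getitem__(self, i):
        chunk, offset = divmod(i, self.per_chunk)
        return self.data[chunk][offset]

    def __setitem__(self, i, value):
        chunk, offset = divmod(i, self.per_chunk)
        self.data[chunk][offset] = value
```

With size 10 split across 3 chunks you get chunk lengths 4, 4, and 2, and index 7 lands at chunk 1, offset 3.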
I think we may have a winner, but before we wind down, there are two other sieves linked from the Sieve of Eratosthenes page: the Sieve of Atkin and the Sieve of Sundaram. The algorithms are a bit more complicated than the Sieve of Eratosthenes, but still entirely doable. It is interesting just how they work though. The Sieve of Eratosthenes is intuitive. These two? A bit less so. First, we have the Sieve of Atkin:

; sum the first n primes using the Sieve of Atkin
; algorithm source: http://en.wikipedia.org/wiki/Sieve_of_Atkin
(define (sum-primes-sieve-atkin n)
  (define-values (lo hi) (guess-nth-prime n))
  (define v (make-vector hi #f))
  ; add candidate primes
  (for* ([x (in-range 1 (+ 1 (sqrt hi)))]
         [y (in-range 1 (+ 1 (sqrt hi)))])
    (define x2 (* x x))
    (define y2 (* y y))
    (let ([i (+ (* 4 x2) y2)])
      (when (and (< i hi) (or (= 1 (remainder i 12)) (= 5 (remainder i 12))))
        (vector-set! v i (not (vector-ref v i)))))
    (let ([i (+ (* 3 x2) y2)])
      (when (and (< i hi) (= 7 (remainder i 12)))
        (vector-set! v i (not (vector-ref v i)))))
    (let ([i (- (* 3 x2) y2)])
      (when (and (> x y) (< i hi) (= 11 (remainder i 12)))
        (vector-set! v i (not (vector-ref v i))))))
  ; remove composites
  (for ([i (in-range 5 (+ 1 (sqrt hi)))])
    (when (vector-ref v i)
      (for ([k (in-range (* i i) hi (* i i))])
        (vector-set! v k #f))))
  ; report
  (let loop ([i 5] [count 2] [sum 5])
    (cond
      [(= count n) sum]
      [(vector-ref v i) (loop (+ i 2) (+ count 1) (+ sum i))]
      [else (loop (+ i 2) count sum)])))

It's pretty much a direct translation of the code on the Wikipedia page. Since it uses a plain vector, it won't be able to calculate the sum of the first billion, but you could pretty easily swap it out for a bitvector or multivector. Still, I'm mostly interested in the implementation and performance to start with. Speaking of which:

> (time (sum-primes-sieve-atkin 1000000))
cpu time: 2421 real time: 2421 gc time: 415

So this particular version is about three times slower than the vector version of the Sieve of Eratosthenes.
The Wikipedia page mentions that there are a number of optimizations you could do to speed this up, which I may try some day, but not today. What's interesting though is that if you do swap in a bitvector for the vector, it's actually faster than the Eratosthenes bitvector version:

> (time (sum-primes-sieve-atkin-bitvector 1000000))
cpu time: 3059 real time: 3058 gc time: 0

If that proportion follows through to the billion element run, we should be able to finish in just an hour and a half. Let's try it out.

> (time (sum-primes-sieve-atkin-bitvector 1000000000))
cpu time: 5304855 real time: 5300800 gc time: 1237

An hour and a half, spot on. None too shabby if I do say so myself. (Although I bet we could get even faster. I'll leave that as an exercise for another day though.) Finally, the Sieve of Sundaram. This one is even more different from the previous ones: rather than removing multiples of primes, it removes all of the composites below n by noting that an odd number 2k + 1 is composite exactly when k can be written as i + j + 2ij:

; sum the first n primes using the Sieve of Sundaram
; algorithm source: http://en.wikipedia.org/wiki/Sieve_of_Sundaram
(define (sum-primes-sieve-sundaram n)
  (define-values (lo hi) (guess-nth-prime n))
  (define dn (quotient hi 2))
  (define v (make-vector dn #t))
  (for* ([j (in-range 1 dn)]
         [i (in-range 1 (+ j 1))]
         #:when (< (+ i j (* 2 i j)) dn))
    (vector-set! v (+ i j (* 2 i j)) #f))
  (let loop ([i 1] [count 1] [sum 2])
    (cond
      [(= count n) sum]
      [(vector-ref v i) (loop (+ i 1) (+ count 1) (+ sum (+ 1 (* 2 i))))]
      [else (loop (+ i 1) count sum)])))

Very straightforward code, but how does it perform?

> (time (sum-primes-sieve-sundaram 10000))
cpu time: 32066 real time: 32055 gc time: 0

Eesh. Note that that's only on 10,000 and it still took 30 seconds. I think I'll skip running this one even out to a million. Well, that's enough for today I think.
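To make the i + j + 2ij trick concrete, here is the same Sieve of Sundaram as a Python sketch (my translation, not the post's code). Striking out every k expressible as i + j + 2ij leaves exactly the k for which 2k + 1 is an odd prime:

```python
def sundaram_primes(limit):
    """All primes strictly below `limit` via the Sieve of Sundaram."""
    if limit <= 2:
        return []
    half = (limit - 2) // 2          # k in 1..half encodes the odd number 2k+1
    marked = [True] * (half + 1)
    for j in range(1, half + 1):
        if j + j + 2 * j * j > half:  # smallest i+j+2ij for this j is at i=j
            break
        i = j
        while i + j + 2 * i * j <= half:
            marked[i + j + 2 * i * j] = False
            i += 1
    return [2] + [2 * k + 1 for k in range(1, half + 1) if marked[k]]
```

For instance, k = 4 is struck out (i = j = 1 gives 1 + 1 + 2 = 4), which removes 2·4 + 1 = 9 from the candidates.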
Here's a nice timing summary for the methods:

Algorithm                     Ten thousand   One million   One billion
Direct                        91 ms          86.0 s        —
Previous primes               9.0 s          —             —
Eratosthenes (list)           4.3 s          —             —
Eratosthenes (vector)         6 ms           0.9 s         —
Eratosthenes (bitvector)      31 ms          5.2 s         2 hr 42 min
Eratosthenes (multivector)    34 ms          6.6 s         —
Atkin (vector)                12 ms          2.4 s         —
Atkin (bitvector)             20 ms          3.1 s         1 hr 28 min
Atkin (multivector)           23 ms          4.4 s         —
Sundaram                      32.1 s         —             —
Segmented Sieve               7 ms           0.9 s         25 min 12 sec

And the actual values:

Ten thousand: 496165411
One million: 7472966967499
One billion: 11138479445180240497

If you'd like to download the source code for today's post, you can do so here:
- billion primes source
- bitvector source
- multivector source

Edit: After Will's comments, I actually got around to writing a segmented version. It's pretty amazing the difference it made too. It runs about 3x faster than even the Sieve of Atkin. Sometimes optimization is awesome. You can find that post here and the source code here.

6 comments on "The Sum Of The First Billion Primes"
1. Hi, you left out the _segmented_ sieve of Eratosthenes. Try it, it should be faster – you work on one short vector and sum up the primes, then go on to the next chunk, using the same vector. This is supposed to improve cache behaviour. You can have a separate sieve for your supply of primes up to 10^4.5 = 31623. That's about 3400 odd primes total to sieve your segments by. Why odd primes only? Because evens are all non-prime, above 2. That means you don't need to represent evens in your vector at all. Have consecutive entries at 1, 2, 3 stand for the odds x, x+2, x+4, … . This will cut memory needs in half and roughly double the speed.
2. BTW the talk page for the Sieve of Atkin WP page says that the code in the article itself is no good.
3. Regarding the segmented sieve, I've been meaning to try a version of that. It would be interesting to see how fast that could actually get.
I haven't really looked into it, but how do they get around applying the older primes to the newer blocks? I can see it if they keep them all in memory but only do the removal once per block, improving cache efficiency that way, but I'm not as sure about having only a single vector. So far as the Sieve of Atkin, how do you mean? There's a blurb on the Wikipedia page that states the pseudocode (which mine is based on) is written for clarity rather than speed, but it does work well enough. Another todo on my list would be to get the much faster example by the author of the algorithm, but I'm not sure how well it would translate to Racket. We'll see.
4. Why get around? You do apply older – "sieving" – primes to each new "block". These sieving primes can be pre-calculated by an ordinary sieve – there are relatively few of them – and stored in a separate array. You have one vector; it represents consecutive ranges; you flip the bits and count primes; then reuse the vector for the next range. For lower ranges you need fewer sieving primes. As for Atkin, that's what's written on the talk page. If the initial run is for x up to sqrt n, y up to sqrt n, we already have an O(n) algo – but it's supposed to be sublinear. I'm no expert – that's what it says there on the talk page, http://en.wikipedia.org/wiki/Talk:Sieve_of_Atkin#How_is_this_faster_than_the_Sieve_of_Eratosthenes.3F .
5. the specific comment is "The pseudocode is not O(N/log log N); the sweep has to be done more carefully for that to happen." from 03:22, 19 December 2011 (UTC).
6. Got it (the segmented sieve). For some reason that idea just wasn't clicking in my head until just now. I wrote a new version using that and it's kind of amazing how much more quickly it runs. Now it's down to only 25 minutes, so three times faster than my previous best.
I've edited the post above with the time and a link to the new post, or you can find it here: One Billion Primes – Segmented Sieve. So far as the Sieve of Atkin, the comment on the talk page is related to the comment on the main page about how the pseudo-code isn't the most efficient. I have looked at the more efficient version, but it makes the code even more complicated, so I think I'm going to pass for the moment on implementing it. Perhaps someday though. Thanks for the comments, it was just what I needed to actually write up the segmented sieve.
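The segmented sieve described in this thread can be sketched in Python as well (my sketch; the follow-up post's actual code is in Racket). The two ingredients are exactly what the comments say: a small one-time sieve for the sieving primes up to √n, and one window buffer per segment:

```python
from math import isqrt

def sum_primes_below(n, segment_size=32768):
    """Sum all primes below n with a segmented Sieve of Eratosthenes."""
    if n <= 2:
        return 0
    # Sieving primes up to sqrt(n), via a small ordinary sieve.
    root = isqrt(n - 1)
    small = [True] * (root + 1)
    small[0:2] = [False, False]
    for i in range(2, isqrt(root) + 1):
        if small[i]:
            small[i * i :: i] = [False] * len(small[i * i :: i])
    sieving = [i for i in range(2, root + 1) if small[i]]

    total = 0
    for lo in range(2, n, segment_size):
        hi = min(lo + segment_size, n)
        # Fresh buffer per window (a real implementation would reuse one).
        window = [True] * (hi - lo)
        for p in sieving:
            # First multiple of p in [lo, hi) that is >= p*p.
            start = max(p * p, (lo + p - 1) // p * p)
            for multiple in range(start, hi, p):
                window[multiple - lo] = False
        total += sum(i + lo for i, flag in enumerate(window) if flag)
    return total
```

Each window only touches multiples of primes up to √n that fall inside [lo, hi), so memory stays proportional to the segment size rather than to n.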
Measuring Compactness What do you mean by district compactness? How is compactness measured? A circle is compact. A square is compact. A polygon with donut holes and tentacles extending in many directions is not. Most compactness measures attempt to quantify the geometric shape of a district relative to a perfectly compact shape, often a circle.* The compactness measures used in Azavea’s redistricting study can be divided into two categories: those that measure the extent to which the shape of a district is spread out from its center (Reock and Convex Hull) and those that measure how smooth or contorted the boundaries of a district are (Polsby-Popper and Schwartzberg). What are some other compactness measures? The academic literature describes more than thirty different ways to measure compactness. For our study, we chose four of the most commonly used compactness measures: Reock, Convex Hull, Polsby-Popper and Schwartzberg. Other measures use simple length and width ratios, or sum the perimeters of all the districts included in a plan. More complicated shape-based compactness measures calculate the moment of inertia for a district shape (the variance of distances from all points in the districts to the district’s areal center) or evaluate the number of interior angles in a district shape. Population measures--somewhat more difficult to calculate for the entire country--are based on the distribution of the population within a district. Why is compactness important? Does it really matter? While Polsby and Popper have lent their names to a particular compactness measure, they argue that the establishment of any compactness standard is preferable to none. Others have questioned the utility of such thresholds, and research indicates that the extent to which various compactness measures agree with one another is highly inconsistent. 
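To make a couple of these measures concrete, here is a small Python sketch (mine, not Azavea's code) using the textbook definitions: Polsby-Popper is 4πA/P², and Schwartzberg is taken here in its normalized form, the circumference of an equal-area circle divided by the shape's perimeter (some authors use the reciprocal). Both equal 1 for a circle and shrink as a shape's boundary gets more contorted:

```python
import math

def polygon_area_perimeter(points):
    """Shoelace area and perimeter of a simple polygon given as (x, y) vertices."""
    area = 0.0
    perimeter = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
        perimeter += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perimeter

def polsby_popper(points):
    """4*pi*A / P**2: 1 for a circle, smaller for less compact shapes."""
    area, perimeter = polygon_area_perimeter(points)
    return 4 * math.pi * area / perimeter ** 2

def schwartzberg(points):
    """Perimeter of an equal-area circle divided by the shape's perimeter."""
    area, perimeter = polygon_area_perimeter(points)
    return 2 * math.pi * math.sqrt(area / math.pi) / perimeter
```

For a unit square, Polsby-Popper comes out to π/4 ≈ 0.785, and the Schwartzberg score is its square root, which also makes it clear why the two boundary-based measures tend to agree more with each other than with the dispersion-based ones.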
Because each measure of compactness captures a slightly different geometric or geographical phenomenon, it is a somewhat arbitrary choice to select a particular compactness metric as the means of accepting or rejecting a single district boundary. Probably a better question is, "Should compactness be a requirement in the redistricting process?" While geometric compactness measures may appear to be neutral, combined with geography and real-life patterns of population distribution they may produce reliable political outcomes. One study* concluded that a compactness requirement reduces the representation of racial minorities. Other scholarly work** identifies a variety of biases inherent in automated redistricting and compactness standards, including favoring the majority political party. Clearly, other important components of the redistricting process, such as aggregation of "communities of interest" are not necessarily well served by examining only compactness. A number of scholars have suggested that compactness measures are best used not as absolute standards against which a single district’s shape is judged, but rather as a way to assess the relative merits of various proposed plans. Above all, compactness is most meaningful within the framework of an institutional redistricting process. * Barabas & Jerit, "Redistricting Principles and Racial Representation," State and Politics Quarterly¸4 (4), 2004, pp. 415-435. ** Altman, "Is Automation the Answer? - The Computational Complexity of Automated Redistricting," Rutgers Computer and Technology Law Journal 23 (1), pp. 81-142, 1997.
How I Made Wine Glasses from Sunflowers July 28, 2011 — Christopher Carlson, Senior User Interface Developer, User Interfaces Eons ago, plants worked out the secret of arranging equal-size seeds in an ever-expanding pattern around a central point so that regardless of the size of the arrangement, the seeds pack evenly. The sunflower is a well-known example of such a “spiral phyllotaxis” pattern: It’s really magical that this works at all, since the spatial relationship of each seed to its neighbors is unique, changing constantly as the pattern expands outwardly—unlike, say, the cells in a honeycomb, which are all equivalent. I wondered if the same magic could be applied to surfaces that are not flat, like spheres, toruses, or wine glasses. It’s an interesting question from an aesthetic point of view, but also a practical one: the answer has applications in space exploration and modern architecture. To reproduce the flat sunflower pattern mathematically, you need to know three secrets of the arrangement: 1. Seeds spiral outward from the center, each positioned at a fixed angle relative to its predecessor. 2. The fixed angle is the golden angle, γ = 2π(1 – 1/Φ), where Φ is the golden ratio. 3. The ith seed in the pattern is placed at a distance from the center proportional to the square root of i. You can make a picture of that arrangement easily if you think in polar coordinates, with the ith seed placed at coordinate {r, θ} = {√i, i γ}: The reason that the radial distance of the ith seed is proportional to √i is not difficult to understand. Suppose the ith seed is at distance d from the center. Then the disk of radius d contains i seeds. In order to achieve an even density of seeds, the number of seeds i in a disk must stand in constant proportion to its area πd^2, or: i ∝ d^2 Inverting that relationship gives: d ∝ √i You can apply the same reasoning to the problem of distributing seeds evenly on a hemisphere. 
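Before moving to curved surfaces: in code, the three secrets above amount to just a few lines. A Python sketch (the original post works in Mathematica):

```python
import math

# The golden angle: 2*pi*(1 - 1/phi), about 137.5 degrees.
GOLDEN_ANGLE = 2 * math.pi * (1 - 2 / (1 + math.sqrt(5)))

def sunflower_points(n):
    """Positions of n seeds: the ith seed at polar coordinates (sqrt(i), i*gamma)."""
    points = []
    for i in range(1, n + 1):
        r = math.sqrt(i)
        theta = i * GOLDEN_ANGLE
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

Scatter-plot the output and the familiar interlocking spirals appear; the even packing follows directly from the square-root radius rule.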
To deal with hemispherical surfaces, it helps to think of the positions of the seeds in fixed-radius spherical coordinates {φ, θ}, where θ plays the same angular role as in polar coordinates and φ—the angular distance from the north pole along the hemisphere to a point—plays the role of r in polar coordinates. This figure shows the correspondence between polar coordinates and fixed-radius spherical coordinates. The relationship between the area of a spherical cap and its angular radius is not the square root, as in the plane, but something else. To determine what it is, I started with the expression given in MathWorld for the area of a surface of revolution generated by rotating the parametric curve {x(t), z(t)} about the z-axis:

$S = 2 \pi \int_{t_0}^{t_1} x(t) \sqrt{x'(t)^2 + z'(t)^2} \, dt$

For a spherical cap with angular radius φ, the generating curve is the circular arc {x(t), z(t)} = {Sin[t], Cos[t]} as t goes from 0 to φ. To find the formula for the cap's area, you don't need to know anything about integration. You just plug the values of x, z, t[0] = 0, and t[1] = φ into the area formula, and out pops the cap's area:

$S = 2 \pi (1 - \cos\varphi)$

As in the planar case, I wanted i, the number of seeds, to stand in constant proportion to the area of the hemispherical cap, or: i ∝ (1-Cos[φ]). Introducing c as the constant of proportionality, so that i = c (1 - Cos[φ]), I found the relationship I sought using Solve:

$\varphi = \pm \arccos(1 - i/c)$

The two solutions differ in the direction that the seeds spiral around the center. The constant of proportionality c governs the density of the seeds. I solved for its value as a function of the total number of seeds n and the maximum angular radius φ[max]:

$c = \frac{n}{1 - \cos\varphi_{max}}$

Thus if I want 1,000 seeds in the entire hemisphere, which corresponds to an angular radius φ of 90°, I set c to 1000. Putting all these results together yields the seed-covered hemisphere.
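The construction appeared in the original post as a Mathematica expression; a rough Python equivalent of the same hemisphere recipe, using only the formulas derived above (φ = arccos(1 − i/c) with c = n/(1 − cos φmax)), might look like this:

```python
import math

def hemisphere_points(n, phi_max=math.pi / 2):
    """n evenly packed points on a unit spherical cap of angular radius phi_max."""
    golden_angle = 2 * math.pi * (1 - 2 / (1 + math.sqrt(5)))
    c = n / (1 - math.cos(phi_max))       # density constant from the cap area
    points = []
    for i in range(1, n + 1):
        phi = math.acos(1 - i / c)        # inverse of i = c*(1 - cos(phi))
        theta = i * golden_angle
        points.append((math.sin(phi) * math.cos(theta),
                       math.sin(phi) * math.sin(theta),
                       math.cos(phi)))
    return points
```

The last seed (i = n) lands exactly on the cap's rim, since 1 − n/c = cos φmax by construction.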
The forms of the coordinate and graphics expressions here are analogous to the forms in the planar case above: If you were building a hemispherical house, this would be the basis of a good roofing pattern, since each equal-sized shingle overlaps the joint between the two below it. Or, if you wanted to design approximately equal-area flat panels to assemble into a dome, this distribution would be a starting point. This problem of “tectonics”—how to realize curved surfaces with flat materials—is a hot topic in current architectural research. A similar problem arose in the Starshine 3 student satellite project, which required a spherical satellite to be evenly covered with reflective mirrors. The folks at the U.S. Naval Research Laboratory solved that problem with a phyllotaxic pattern. Computing a custom function to cover each new kind of surface was an interesting mathematical journey, but a lot of work. What I really wanted was a single function that would, once and for all, distribute points evenly on any surface of revolution whose generating curve I could describe parametrically. As with the disk and hemisphere, the secret to achieving an even distribution of points on an arbitrary surface of revolution is to make the number of points in an area proportional to the area. Unfortunately, the integral that describes that relationship often has no closed-form solution, even for simple parametric curves. But fortunately, we don’t need a closed-form solution to obtain a result. Numerical integration does nicely. The numerical analog of the area integral above for generating functions x and z, and parameter interval t[0] to t[1], uses the NIntegrate function in place of Integrate: This function gives area as a function of curve parameter t, but I needed the inverse relationship: given an area, I wanted to know what t is. 
Since I couldn’t invert this function directly, I made a table of {area, t} pairs and interpolated that to give myself a function that, from empirical evidence, approximates the inverse sufficiently well: With those components, I assembled the function for rendering arbitrary phyllotaxic surfaces. Parameters x and z give the parametric description of the generating curve, {t0,t1} specifies the interval of the curve that generates the surface, density specifies the density of points, and radius gives the radius of the spheres that render the points. I tested the function with a covering of a complete sphere: As on a sunflower’s disk, the points on the sphere group into spirals that regroup as you move from the poles toward the equator. To make that structure more apparent, I modified the rendering function so I could specify that every nth point should have a given color, and rendered spheres with n = 2, 3, …, 13. The resulting images revealed a variety of spiral structures lurking within the Combinations of colorings layered on top of one another show the complex interplay of spiral structures: Since Mathematica‘s integration functions are completely general, I could explore generating curves of all sorts, including piecewise and interpolated curves. Here’s a sample of some phyllotaxic surfaces of revolution I encountered (click an image to see an enlargement): Curious what a phyllotaxic wine glass would look like, I grabbed an image of a Peugeot wine glass from the web and used the Get Coordinates function to digitize its outline, which I fed to Interpolation to get the x and z parametric functions of the curve. This is the resulting parametric curve: With those functions, it’s a no-brainer to make a phyllotaxic wine glass. The spiral structures in this form, with its concave and convex surfaces and wide variation of diameters, were especially interesting to explore. Here’s one study that I particularly liked. 
It exploits the fact that when you color every seventh point, you get three distinct spirals at small diameters. I omitted the other six points entirely so that those spirals stood on their own in the stem, and gave each of them a different color. By choosing cyan, yellow, and magenta, the base and bowl of the glass are a neutral gray when viewed from a distance, and yet it shimmers almost iridescently as it moves, due to the moiré patterns induced by the patterns on the opposite sides. The magic of spiral phyllotaxis unites all the parts into one festive whole. Alas, the effect requires spheres too tiny to bond together, so they'd have to be embedded in a clear glass matrix. When the day arrives that clear glass can be 3D-printed, I'll be first in line. Meanwhile, there are plenty of other intriguing phyllotaxic surfaces waiting to be discovered.
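The whole pipeline for arbitrary surfaces of revolution (numeric cumulative area, an interpolated inverse, golden-angle placement) can also be sketched in plain Python. This is my own reconstruction of the approach, not the post's Mathematica code: the trapezoid rule stands in for NIntegrate, linear interpolation for Interpolation, and the generating functions are assumed to be defined slightly beyond [t0, t1] so the finite differences work at the endpoints:

```python
import math

def revolution_points(x, z, t0, t1, n, samples=2000):
    """Distribute n points evenly over the surface of revolution generated by
    rotating the curve (x(t), z(t)), t in [t0, t1], about the z-axis."""
    golden_angle = 2 * math.pi * (1 - 2 / (1 + math.sqrt(5)))

    # Cumulative area A(t): 2*pi * integral of |x(t)| * sqrt(x'^2 + z'^2) dt,
    # computed with the trapezoid rule on a fine grid.
    ts = [t0 + (t1 - t0) * k / samples for k in range(samples + 1)]
    h = (t1 - t0) / samples

    def integrand(t):
        dx = (x(t + 1e-6) - x(t - 1e-6)) / 2e-6   # central differences
        dz = (z(t + 1e-6) - z(t - 1e-6)) / 2e-6
        return 2 * math.pi * abs(x(t)) * math.hypot(dx, dz)

    areas = [0.0]
    for a, b in zip(ts, ts[1:]):
        areas.append(areas[-1] + h * (integrand(a) + integrand(b)) / 2)

    # Invert A(t) by linear interpolation over the (area, t) table.
    def t_of_area(target):
        for (a0, u0), (a1, u1) in zip(zip(areas, ts), zip(areas[1:], ts[1:])):
            if target <= a1:
                frac = 0.0 if a1 == a0 else (target - a0) / (a1 - a0)
                return u0 + frac * (u1 - u0)
        return t1

    total = areas[-1]
    points = []
    for i in range(1, n + 1):
        t = t_of_area(total * i / n)
        theta = i * golden_angle
        points.append((x(t) * math.cos(theta), x(t) * math.sin(theta), z(t)))
    return points
```

Feeding it sin and cos over [0, π] reproduces the evenly covered sphere; any other generating curve, including an interpolated wine-glass outline, drops in the same way.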
Pythagorean Wins [Archive] - White Sox Interactive Forums
05-10-2005, 01:01 PM
It's been asked a few times about what the formula for Pythagorean wins is, and I found this article. In short, the formula for expected win percentage is: the square of runs scored divided by the sum of the square of runs scored and the square of runs allowed. When the Sox offense heats up and a few more blowouts occur, the Pythagorean wins will return to the actual wins. And Pythagoras couldn't even hit a curveball. For propeller heads only, the article also describes an alternative Poisson-based formula. However, after a bit of experimentation, I believe that altering the value of e provides a better basis for manipulating the model. The article does mention that the spread of values is way too low. An issue I had to deal with while working on a doctoral thesis (not mine).
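The formula in that first paragraph is a one-liner in any language. A Python sketch; I expose the exponent as a parameter on the guess that the poster's "value of e" refers to it (sabermetric refinements commonly use around 1.83 instead of 2):

```python
def pythagorean_win_pct(runs_scored, runs_allowed, exponent=2.0):
    """Expected winning percentage: RS^e / (RS^e + RA^e); e = 2 in the classic form."""
    rs = runs_scored ** exponent
    ra = runs_allowed ** exponent
    return rs / (rs + ra)
```

A team scoring 800 runs while allowing 700 projects to 640000/1130000 ≈ .566, and of course equal runs scored and allowed projects to .500.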
Rochester Hills, MI
West Bloomfield, MI 48322
Master Certified Coach for Exam Prep, Mathematics, & Physics
...Students and Parents, It is a pleasure to make your acquaintance, and I am elated that you have found interest in my profile. I am the recipient of a Master of Science: Physics and Master of Arts: Mathematics with a focus in computation and theory. I am...
Offering 10+ subjects including algebra 1, algebra 2 and calculus
Mill Valley Geometry Tutor
Find a Mill Valley Geometry Tutor
...I think this is a wonderful combination: I can relate to students, understand their frustrations and fears, and at the same time I deeply understand math and take great joy in communicating this to reluctant and struggling students, as well as to able students who want to maximize their achieveme...
20 Subjects: including geometry, calculus, statistics, biology
...He enjoys music, hiking and geocaching. Dr. Andrew G. has a Ph.D. from Caltech in environmental engineering science with a minor in numerical methods. In addition he has over 30 years experience as a practicing atmospheric scientist and dispersion modeler.
13 Subjects: including geometry, calculus, physics, algebra 2
...I am an educator who is passionate about equipping today's youth with the tools and resources they need to attain their life goals. I obtained a Master's in Education, as well as my Teaching Credential, from Stanford University (through the Stanford Teacher Education Program). I obtained a Bache...
11 Subjects: including geometry, biology, elementary math, algebra 1
...I first started playing the oboe in 2002, and have continued to play ever since, giving me over ten years of experience. I've played in high school and college bands, in addition to extra curricular groups including pit orchestras for musicals, youth orchestras, and band camps. I've used PC's since Windows 95, all the way up through Windows 8.
4 Subjects: including geometry, chemistry, oboe, Microsoft Windows
...I score in the 99th percentile in the US in English. I have a background in presentational and journalistic writing as well as public speaking and rhetoric. I am skilled in proofreading as well as coaching and have helped many students successfully.
32 Subjects: including geometry, reading, chemistry, English
Posts by sunil
Total # Posts: 21

a stone of mass 50g is being rotated in a circle of radius 50cm with a uniform speed of 2m/s. What is the acceleration of the stone?

a stone of mass 50g is being rotated in a circle of radius 50cm with a uniform speed of 2m/s. What is the acceleration of the stone?

lpu college
An urn contains 7 red and 3 white marbles. Three marbles are drawn from the urn one after another. Find the probability that the first two are red and the third is white if, a) Marbles drawn are replaced after every draw. b) Marbles drawn are not replaced after every draw.

find the number of seats in a theater containing 30 rows if the first row has 14 seats and each row after that has 2 more seats than the previous row

maths --please help me..
ABCD IS A PARALLELOGRAM, G IS THE POINT ON AB SUCH THAT AG = 2GB, E IS A POINT OF CD SUCH THAT CE = 2DE AND F IS A POINT OF BC SUCH THAT BF = 2FC. PROVE THAT 1) ar(EGB) = 1/6 ar(ABCD) 2) ar(EFC) = 1/2 ar(EBF) 3) ar(EBG) = ar(EFC) 4) FIND WHAT PORTION OF THE AREA OF PARALLELO...

maths --please help me..
In a purse there are 20-rupee notes, 10-rupee notes and 5-rupee notes. The number of 5-rupee notes exceeds two times the 10-rupee notes by one. The 20-rupee notes are 5 less than the 10-rupee notes. If the total value of the money in the purse is Rs 185, find the number of each v...

A particle moves along half the circumference of a circle of 1 meter radius. Calculate the work done if the force at any point, inclined at 60° to the tangent at that point, has a magnitude of 5 newtons.

Thank you for your immediate response. Which one is correct? 1) The distance between New Delhi to Mumbai is 600kms. 2) The distance between New Delhi and Mumbai is 600kms.

The length and breadth of a rectangle are in the ratio 8:5. The length is 10.5 centimeters more than the breadth. What are the length and breadth of the rectangle? Kindly give steps also.

The potential difference applied to an X-ray tube is 5 kV and the current through it is 3.2 mA. Then the number of electrons striking the target per second and the speed at which they strike?

give a brief note on the measures of central tendency together with their merits and demerits. which is the best measure of central tendency and why?

explain the purpose of tabular presentation of statistical data

why is it necessary to summarise the data distribution? explain the approaches available to summarise the data distribution.

QUESTION- machines are used to pack sugar into packets supposedly containing 1.20kg each. on testing a large number of packets over a long period of time, it was found that the mean weight of the packets was 1.24kg and the standard deviation was 0.04kg. a particular machine is se...

question- a packaging device is set to fill detergent powder packets with a mean weight of 5 kg. the standard deviation is known to be 0.01kg. these are known to drift upward over a period of time due to machine fault, which is not tolerable. a random sample of 100 packets is ta...

college physics

elements of Design
What is meant by visual language

social studies
describe the impact of colonization through policies of the colonizers - steps taken for the development of these communities and impact of colonization on their development in the 18th and 19th centuries

Who is the first woman minister in Rajasthan?

Fill in the blanks: 80, 70, 61, 53, 46, 40, ....
Need help with price calculation
April 9th 2013, 10:04 PM

Need help with price calculation
I have a Retail Price of $165 and give a discount of 66.67% to get a Wholesale Price of $55. The question is... How do I calculate the Retail Price if I am just given the Wholesale Price of $55 and know that the Retail Price x with a discount of 66.67% = $55 (rounded up)?

April 10th 2013, 01:19 AM
Re: Need help with price calculation
In words: 'from the retail price subtract two thirds of the retail price to get the wholesale price of $55'. In math symbols: $x - \frac{2x}{3} = 55$. Now solve for x (the retail price).

April 10th 2013, 03:38 AM
Re: Need help with price calculation
Thanks agentmulder, but I need the result to be the retail price = x, not the wholesale price = 55. I need a calculation that I can use in Excel where (example numbers in parentheses):
x is the retail price (165)
y is a discount percentage (66%)
z is the wholesale price (55)

April 10th 2013, 04:19 AM
Re: Need help with price calculation
$x - \frac{xy}{100} = z$
$x(1 - \frac{y}{100}) = z$
$x = \frac{100z}{100 - y}$
Is it the last one you need? We get that by solving for x in the top one.

April 10th 2013, 05:54 AM
Re: Need help with price calculation
The last one is it. Thank you VERY MUCH agentmulder.
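For anyone wanting to drop this straight into a spreadsheet or script: the final formula x = 100z/(100 − y) translates directly. In Excel it would be something like =100*B2/(100-C2) (cell references hypothetical), and as a Python sketch:

```python
def retail_price(wholesale, discount_pct):
    """Invert wholesale = retail * (1 - discount/100) to recover the retail price."""
    return 100 * wholesale / (100 - discount_pct)
```

With the thread's numbers, retail_price(55, 66.67) gives 165.02, matching the original $165 up to the rounding the poster mentioned.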
Speed of Sound

Air is a gas, and a very important property of any gas is the speed of sound through the gas. Why are we interested in the speed of sound? The speed of "sound" is actually the speed of transmission of a small disturbance through a medium. Sound itself is a sensation created in the human brain in response to sensory inputs from the inner ear. (We won't comment on the old "tree falling in a forest" discussion!)

Disturbances are transmitted through a gas as a result of collisions between the randomly moving molecules in the gas. The transmission of a small disturbance through a gas is an isentropic process; the conditions in the gas are the same before and after the disturbance passes through. Because the speed of transmission depends on molecular collisions, the speed of sound depends on the state of the gas. The speed of sound is a constant within a given gas, and the value of the constant depends on the type of gas (air, pure oxygen, carbon dioxide, etc.) and the temperature of the gas. For hypersonic flows, the high temperature of the gas generates real gas effects that can alter the speed of sound.

An analysis based on conservation of mass and momentum shows that the square of the speed of sound a^2 is equal to the gas constant R times the temperature T times the ratio of specific heats gamma:

a^2 = R * T * gamma

Notice that the temperature must be specified on an absolute scale (Kelvin or Rankine). The dependence on the type of gas is included in the gas constant R, which equals the universal gas constant divided by the molecular weight of the gas, and in the ratio of specific heats. If the specific heat capacity of a gas is a constant value, the gas is said to be calorically perfect, and if the specific heat capacity changes, the gas is said to be calorically imperfect. At subsonic and low supersonic Mach numbers, air is calorically perfect. But under low hypersonic conditions, air is calorically imperfect.

Derived flow variables, like the speed of sound and the isentropic flow relations, are slightly different for a calorically imperfect gas than the conditions predicted for a calorically perfect gas, because some of the energy of the flow excites the vibrational modes of the diatomic molecules of nitrogen and oxygen in the air. The equation given above was derived for a calorically perfect gas. If we include the effects of caloric imperfection, some additional terms are added to the equation. For the calorically imperfect case, mathematical models based on a simple harmonic vibrator have been developed. The details of the analysis were given by Eggers in NACA Report 959. A synopsis of the report is included in NACA Report 1135. To a first order approximation, the equation for the speed of sound for a calorically imperfect gas is given by:

a^2 = R * T * {1 + (gamma - 1) / (1 + (gamma - 1) * [(theta/T)^2 * e^(theta/T) / (e^(theta/T) - 1)^2])}

where theta is a thermal constant equal to 3056 degrees Kelvin, gamma is the calorically perfect value of the ratio of specific heats, and T is the static temperature.

The speed of sound in air depends on the type of gas and the temperature of the gas. On Earth, the atmosphere is composed of mostly diatomic nitrogen and oxygen, and the temperature depends on the altitude in a rather complex way. Scientists and engineers have created a mathematical model of the atmosphere to help them account for the changing effects of temperature with altitude. We have created an atmospheric calculator to let you study the variation of sound speed with altitude. Here's another Java program to calculate speed of sound and Mach number for different altitudes and speeds. You can use this calculator to determine the Mach number of an aircraft at a given speed and altitude.
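Both speed-of-sound formulas above are easy to evaluate directly; a minimal Python sketch (the air constants R = 287.05 J/(kg·K) and gamma = 1.4 are standard values I am supplying, not quoted from the page):

```python
import math

GAMMA = 1.4      # ratio of specific heats for air (calorically perfect)
R_AIR = 287.05   # specific gas constant for air, J/(kg*K)
THETA = 3056.0   # thermal (vibrational) constant from the text, Kelvin

def speed_of_sound(T: float) -> float:
    """Calorically perfect gas: a = sqrt(gamma * R * T), with T in Kelvin."""
    return math.sqrt(GAMMA * R_AIR * T)

def speed_of_sound_imperfect(T: float) -> float:
    """First-order calorically imperfect correction (simple harmonic
    vibrator model), following the equation quoted in the text."""
    x = THETA / T
    vib = x * x * math.exp(x) / (math.exp(x) - 1.0) ** 2
    a2 = R_AIR * T * (1.0 + (GAMMA - 1.0) / (1.0 + (GAMMA - 1.0) * vib))
    return math.sqrt(a2)
```

At the sea-level standard temperature of 288.15 K the perfect-gas value is about 340 m/s and the vibrational correction is negligible; the two formulas only diverge noticeably at the high static temperatures of hypersonic flight.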
Appendix A
The Large Star Sample Required for a Photometric Planetary Search

Consider a planet in a circular orbit, of radius a, around a main-sequence star of mass M* and radius R*. An empirical relation between stellar size and mass is given approximately by R* ~ M*^0.78. The amplitude of an occultation is just the ratio of the cross-sectional areas, or

A = 1.3 rho^(-2/3) Mp^(2/3) M*^(-1.56),

where Mp and M* are in solar units and rho is the mean density of the planet, which could be somewhere in the range 0.5 < rho < 8 g cm^-3. A minimal detection of the occultation requires that A >= 3 sigma, where sigma is the photometric fractional uncertainty in the measurements. In the case of the Sun, this detection would require sigma = 3 x 10^-5 for an Earth occultation (rho = 5.5, Mp = 3 x 10^-6) and sigma = 3 x 10^-3 for a Jupiter occultation (rho ~ 1.3, Mp = 10^-3). In the following, rho = 2.1 is used, which yields the geometric mean value for sigma(rho); the above range of values for rho would yield values for sigma within a factor of 2.6. The corresponding required photometric uncertainty is

sigma ~ 0.26 Mp^(2/3) M*^(-1.56).

If Poisson statistics apply, recording the event requires that 1/sigma^2 photoelectrons be counted. Assuming measurements in a 100-angstrom visible passband with a 1-m telescope operating at 5 percent quantum efficiency, the on-target integration time required per star is

t = 3 x 10^-9 sigma^-2 10^(0.4 mv) seconds,

where mv is the apparent visual magnitude of the star. For main-sequence stars, the apparent magnitude can be reckoned from the mass and distance, mv ~ -8.4 log M* + 5 log r + 1, and so

t = 10^-7 r^2 M*^(-0.24) Mp^(-4/3) seconds.

For occultations of a 0.3-solar-mass star by the Earth and Jupiter viewed from 10 parsecs, t = 300 s and 0.13 s, respectively.

The maximum duration of an occultation is 0.54 a^(1/2) M*^(0.28) days, where a is in astronomical units, and it occurs once in an orbital period P = a^(3/2) M*^(-1/2) yr. In order not to miss an event, each program star must be measured about four times during the minimum event duration sought. For estimation purposes, the limiting planet path is chosen to be offset by 0.5 stellar radii from the star's center. Then all the program stars must be measured in the cycle time

T = (1/4) x 0.54 amin^(1/2) M*^(0.28) days ~ 10^4 amin^(1/2) M*^(0.28) seconds,

where amin is the inner boundary of the search. For amin = 1 AU and M* = 0.3, T = 7.1 x 10^3 s, or 2 h.

The occultation visibility zone decreases with increasing a. The probability of the planet passing within 0.5 stellar radius of the star's center, as seen by an observer in a random direction, is R*/2a, or 2.33 x 10^-3 M*^(0.78)/a. To be confident that Ns stars have been sampled for planetary occultations, that number times the inverse of the probability of visibility must be observed in the monitoring program, or No = Ns x 430 amax M*^(-0.78), where amax is the outer boundary of the search. For amax = 1 AU and M* = 0.3, No = 1.1 x 10^3 Ns.

The survey program must therefore measure No stars, each for an integration time t, in a total cycle time T. Assuming an observational efficiency of 0.25, this requires 4 No t < T. For a typical star (M* = 0.3) and a program duration of 10 yr (P <= 10, amax = 3.1) this requires

amin > 7.3 x 10^-14 Ns^2 Mp^(-8/3) r^4.

For the extreme case one takes amin = amax and Mp = Mp(amax) = 2 x 10^-3. For a minimum Ns of 10, this requires r < 13 parsecs and No = 3.4 x 10^3 Ns = 3.4 x 10^4 stars. However, there are only about 1000 stars within 13 parsecs, so this method could not give an accurate estimate of the fraction of even Jupiter-size planets occurring around nearby stars.
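The worked numbers in the appendix can be reproduced directly from the scaling relations; a small Python sketch (the function names are mine, the coefficients are the ones quoted above):

```python
def integration_time(r_pc: float, m_star: float, m_planet: float) -> float:
    """On-target time per star: t = 1e-7 r^2 M*^-0.24 Mp^-4/3 seconds."""
    return 1e-7 * r_pc**2 * m_star**-0.24 * m_planet**(-4.0 / 3.0)

def cycle_time(a_min_au: float, m_star: float) -> float:
    """Cycle time needed to catch an event: T ~ 1e4 a_min^1/2 M*^0.28 s."""
    return 1e4 * a_min_au**0.5 * m_star**0.28

def stars_to_monitor(n_sampled: float, a_max_au: float, m_star: float) -> float:
    """No = Ns * 430 a_max M*^-0.78 (inverse of the visibility probability)."""
    return n_sampled * 430.0 * a_max_au * m_star**-0.78
```

For a 0.3-solar-mass star at 10 parsecs this gives t of about 300 s for an Earth (Mp = 3e-6) and about 0.13 s for a Jupiter (Mp = 1e-3), T of about 7.1e3 s at a_min = 1 AU, and No of about 1.1e3 Ns at a_max = 1 AU, matching the figures quoted in the text.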
Blowing up general k points on the plane

Del Pezzo surfaces are obtained by blowing up $1 \leq k \leq 8$ points in general position in $\mathbb{P}^2$. What happens when the number of points is larger than nine?

Beauville's book on surfaces presents the topic in the context of linear systems of cubics: nine points in the plane determine a cubic curve, and del Pezzo surfaces $S_{9-k}$, with $k\leq 6$, are embedded into $\mathbb{P}^{9-k}$ by the linear system of cubics through the $k$ points. Is there a nice interpretation of the surfaces obtained by linear systems of plane curves of degree $d$? I suppose this is well known but I cannot find a reference. Thanks!!

Comments:
I'm having some trouble understanding the question. Are you asking whether the blowup of P^2 in more than 9 points is embedded in projective space by a linear system of curves of degree higher than 3? (By the way, your assertion that "del Pezzo surfaces S^{9-k} are embedded into P^{9-k} by the linear system of cubics through k≤9 points" is only true for k <= 6; after that the linear system of cubics through the points fails to be very ample.) – Artie Prendergast-Smith Sep 16 '12 at 21:17
Another small correction: the first sentence should say "1<=k<=8". – Artie Prendergast-Smith Sep 16 '12 at 21:21
thanks, I edited the question. – pmath Sep 16 '12 at 22:10

2 Answers

The canonical divisor of the blow up $\pi: X\to \mathbb P^2$ at $k$ ordinary points is
$$ K_X = -3\pi^*L +\sum_{i=1}^k E_i, $$
where $L\subset \mathbb P^2$ is a hyperplane and $E_i$ is an exceptional curve of the first kind. Choosing the representatives right, an easy computation shows that
$$ K_X^2 = 9 - k, $$
so $-K_X$ could possibly be ample only if $0\leq k\leq 8$. As Artie points out, embedding into $\mathbb P^{9-k}$ only works for $0\leq k\leq 6$. It is easy to see that this cannot be true for $k>6$, as then $9-k\leq 2$ and there is no way you can embed a surface different from $\mathbb P^2$ into $\mathbb P^2$, $\mathbb P^1$, or $\mathbb P^0$. So, the interesting question is what you get if $k=7$ or $8$.

It is relatively easy to see that $-K_X$ is not very ample (unlike in the $0\leq k\leq 6$ case): by looking at the short exact sequences
$$ 0\to \pi^*\omega_{\mathbb P^2}^{-1}(-\sum_{i=1}^r E_i) \to \pi^*\omega_{\mathbb P^2}^{-1}(-\sum_{i=1}^{r-1} E_i)\to \mathscr O_{E_r}\to 0 $$
one can see easily that
$$ \dim H^0(X, \omega_X^{-1}) = 10-k. $$
In fact the $\mathbb P^{9-k}$ above is just the projectivization of this linear space.

Remark: Since the OP is talking about embedding a blow-up of $\mathbb P^2$ at $k$ points into $\mathbb P^{9-k}$, I assume they mean the classical definition of Del Pezzos, although the fact that $k$ can't be bigger than $8$ works for any definition, in particular for the now commonly used one asking only that $-K_X$ is ample.

Comments:
Sándor: are you saying that the blowup of P^2 in 7 or 8 points is not del Pezzo? – Artie Prendergast-Smith Sep 16 '12 at 21:45
Artie, it depends on what definition of Del Pezzo you use. The classical one demanded that $-K_X$ is very ample, not just ample, but perhaps I should remove that statement anyway. – Sándor Kovács Sep 16 '12 at 21:47
I must admit I've never seen the definition that demands -K very ample, but I am an ignorant youth. Anyway, as long as it's clear how the terminology is being used, I guess it's just a matter of taste. – Artie Prendergast-Smith Sep 16 '12 at 21:53
Hartshorne, Remark V.4.7.1, page 401. – Sándor Kovács Sep 16 '12 at 21:56
(And sorry if my first comment was annoying. I just wanted to make sure I understood your answer.) – Artie Prendergast-Smith Sep 16 '12 at 22:03

A minor correction: the blowup of $\mathbb{P}^{2}$ at 9 points cannot be del Pezzo, since its anticanonical class has self-intersection equal to 0.

Much of the recent interest in the blowup $X_{k}$ of $\mathbb{P}^{2}$ at $k \geq 9$ points in general position centers around the ample cone of $X_{k}$, rather than a specific embedding of $X_{k}$ in projective space. Let $H$ be the pullback of the hyperplane class via the blowup $\pi: X_{k} \rightarrow \mathbb{P}^{2}$, and let $E=\sum_{i=1}^{k}E_{i}$ be the sum of the $k$ exceptional divisors on $X_{k}$. The anticanonical class $3H-E$ fails to be ample for $k \geq 9$, but we can instead ask the following: for which positive integers $d,r$ is the divisor $dH-rE$ ample? Since $H$ spans a boundary ray of the ample cone of $X_{k}$, we know that $H-tE$ is ample for $0 < t << 1$, e.g. that $dH-rE$ is ample for $d >> r$. So what we are really interested in is
$$t_{k}:=\sup \{ t > 0 : H-tE \hskip5pt {\rm ample} \}$$
An upper bound for $t_{k}$ may be obtained from the positive value of $t$ for which $(H-tE)^{2}=0$, i.e. $1/\sqrt{k}$.

$\textbf{Nagata's conjecture:} \hskip10pt t_{k}=1/\sqrt{k}.$

This statement holds when $k=m^{2}$ is a perfect square which is at least 9; this can be seen by looking at the ample cone of the blowup of $\mathbb{P}^{2}$ at a general complete intersection of two degree-$m$ plane curves and noting that the ample cone of a surface can only shrink upon specialization. There is a large body of work on Nagata's conjecture and its generalizations. A nice overview can be found in "Remarks on the Nagata Conjecture" by B. Strycharz-Szemberg and T. Szemberg, available at

Comments:
Nice answer! Let me mention that for k=9 a bit more than Nagata's conjecture is known: actually the structure of the whole ample cone is known, by a result of Borcea (a paper in Crelle, around 1991, whose title I forget). – Artie Prendergast-Smith Sep 16 '12 at 21:57
In recent years, de Fernex has given a partial description of the K-positive part of the ample cone for k=10,11 (I think), but a description of the whole cone is not known. – Artie Prendergast-Smith Sep 16 '12 at 21:57
Thanks! I wasn't aware of either of these--I'll have to take a look. – Yusuf Mustopa Sep 16 '12 at 22:30
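The self-intersection computation in the first answer amounts to expanding $(-3L + \sum E_i)^2$ with the intersection rules $L^2 = 1$, $E_i^2 = -1$, and all cross terms zero; a quick numeric sanity check (my own illustration, not part of the thread):

```python
def k_x_squared(k: int) -> int:
    """Expand K_X^2 = (-3L + E_1 + ... + E_k)^2 using the intersection
    rules L^2 = 1, E_i^2 = -1, L.E_i = 0, and E_i.E_j = 0 for i != j."""
    return (-3) ** 2 * 1 + k * (-1)

# K_X^2 = 9 - k, so the self-intersection drops to 0 at k = 9 -- consistent
# with -K_X being (possibly) ample only for 0 <= k <= 8.
```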
QPair Class Reference

The QPair class is a template class that stores a pair of items. More...

#include <QPair>

Public Types
  typedef first_type
  typedef second_type

Public Functions
  QPair ()
  QPair ( const T1 & value1, const T2 & value2 )
  QPair<T1, T2> & operator= ( const QPair<T1, T2> & other )

Public Variables
  T1 first
  T2 second

Related Non-Members
  QPair<T1, T2> qMakePair ( const T1 & value1, const T2 & value2 )
  bool operator!= ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
  bool operator< ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
  QDataStream & operator<< ( QDataStream & out, const QPair<T1, T2> & pair )
  bool operator<= ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
  bool operator== ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
  bool operator> ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
  bool operator>= ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
  QDataStream & operator>> ( QDataStream & in, QPair<T1, T2> & pair )

Detailed Description

The QPair class is a template class that stores a pair of items.

QPair<T1, T2> can be used in your application if the STL pair type is not available. It stores one value of type T1 and one value of type T2. It can be used as a return value for a function that needs to return two values, or as the value type of a generic container.

Here's an example of a QPair that stores one QString and one double value:

  QPair<QString, double> pair;

The components are accessible as public data members called first and second. For example:

  pair.first = "pi";
  pair.second = 3.14159265358979323846;

QPair's template data types (T1 and T2) must be assignable data types. You cannot, for example, store a QWidget as a value; instead, store a QWidget *. A few functions have additional requirements; these requirements are documented on a per-function basis.

See also Generic Containers.

Member Type Documentation

typedef QPair::first_type
  The type of the first element in the pair (T1). See also first.

typedef QPair::second_type
  The type of the second element in the pair (T2). See also second.

Member Function Documentation

QPair::QPair ()
  Constructs an empty pair. The first and second elements are initialized with default-constructed values.

QPair::QPair ( const T1 & value1, const T2 & value2 )
  Constructs a pair and initializes the first element with value1 and the second element with value2. See also qMakePair().

QPair<T1, T2> & QPair::operator= ( const QPair<T1, T2> & other )
  Assigns other to this pair.

Member Variable Documentation

T1 QPair::first
  The first element in the pair.

T2 QPair::second
  The second element in the pair.

Related Non-Members

QPair<T1, T2> qMakePair ( const T1 & value1, const T2 & value2 )
  Returns a QPair<T1, T2> that contains value1 and value2. Example:

    QList<QPair<int, double> > list;
    list.append(qMakePair(66, 3.14159));

  This is equivalent to QPair<T1, T2>(value1, value2), but usually requires less typing.

bool operator!= ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
  Returns true if p1 is not equal to p2; otherwise returns false. Two pairs compare as not equal if their first data members are not equal or if their second data members are not equal. This function requires the T1 and T2 types to have an implementation of operator==().

bool operator< ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
  Returns true if p1 is less than p2; otherwise returns false. The comparison is done on the first members of p1 and p2; if they compare equal, the second members are compared to break the tie. This function requires the T1 and T2 types to have an implementation of operator<().

QDataStream & operator<< ( QDataStream & out, const QPair<T1, T2> & pair )
  Writes the pair pair to stream out. This function requires the T1 and T2 types to implement operator<<(). See also Serializing Qt Data Types.

bool operator<= ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
  Returns true if p1 is less than or equal to p2; otherwise returns false. The comparison is done on the first members of p1 and p2; if they compare equal, the second members are compared to break the tie. This function requires the T1 and T2 types to have an implementation of operator<().

bool operator== ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
  Returns true if p1 is equal to p2; otherwise returns false. Two pairs compare equal if their first data members compare equal and if their second data members compare equal. This function requires the T1 and T2 types to have an implementation of operator==().

bool operator> ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
  Returns true if p1 is greater than p2; otherwise returns false. The comparison is done on the first members of p1 and p2; if they compare equal, the second members are compared to break the tie. This function requires the T1 and T2 types to have an implementation of operator<().

bool operator>= ( const QPair<T1, T2> & p1, const QPair<T1, T2> & p2 )
  Returns true if p1 is greater than or equal to p2; otherwise returns false. The comparison is done on the first members of p1 and p2; if they compare equal, the second members are compared to break the tie. This function requires the T1 and T2 types to have an implementation of operator<().

QDataStream & operator>> ( QDataStream & in, QPair<T1, T2> & pair )
  Reads a pair from stream in into pair. This function requires the T1 and T2 types to implement operator>>(). See also Serializing Qt Data Types.
The Prime Glossary: Euclid

Euclid (pronounced YOO klihd) was a Greek mathematician who lived approximately 330-270 B.C. Euclid compiled and systematically arranged the geometry and number theory of his day into the famous text "Elements." This text, used in schools for about 2000 years, earned him the name "the father of geometry." Even today, the geometries which do not satisfy the fifth of Euclid's "common notions" (now called axioms or postulates) are called non-Euclidean geometries. When the Egyptian ruler Ptolemy (reports the Greek philosopher Proclus) asked if there was a shorter way to the study of geometry than the Elements, Euclid told the Pharaoh that "there is no royal road to geometry."

Little is known of Euclid's life. Proclus wrote (c. 450 AD) that Euclid lived during the reign of Ptolemy and founded the first school of mathematics in Alexandria--the site of the most impressive library of ancient times (with perhaps as many as 700,000 volumes). He wrote books on other subjects such as optics and conic sections, but most of them are now lost.

See Also: EuclideanAlgorithm, TheElements
One divided by Infinity

What is 1 / Infinity? Is it 0?
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman

Re: One divided by Infinity
1/infinity? Infinity contains all known numbers, including negative numbers, in existence... Thus, you can't really divide it, can you?
Boy let me tell you what: I bet you didn't know it, but I'm a fiddle player too. And if you'd care to take a dare, I'll make a bet with you.

Re: One divided by Infinity
Infinity. A number SO BIG that you can't ever reach it. Some people say it is the number that is equal to the largest number you can think of plus one. So, we know what 1/10 is, we know what 1/100 is, we know what 1/1,000,000,000 is (one billionth), but what is 1/Infinity?

Re: One divided by Infinity
Oh! Oh! 1/2?

Re: One divided by Infinity
Only if Infinity = 2 ... it don't

Re: One divided by Infinity
... That was the joke. O.o

Re: One divided by Infinity
Alright. I've been thinking (Ragnarok!) and I've come up with this: 1/infinity = 0? X. Infinity is -not- a number, thus a mathematical symbol cannot do anything to it. Neither is it a variable; it's a concept (for example, you're not going to have 1/law, are you?). Thusly, infinity is not a number, but it is a concept, and thus cannot be used in such a way. But if you're using infinity as a short-hand version of a long number, a really really long number, then express it like so:
x = the overall number. Infinity = (x+1), continuously.
That's not too clear, but I'll try and clean it up. x is a number, say 5. You have 5 people + 1 person, you have 6. x stands for the number of people you have, thus you go back to the start and you x+1 again to get 7 people, and you loop it. To show off my QBasic cleverness, here it is in a machine language:
-
In all technicalities that'd work. So, taking that you mean Infinity in that way, 1/infinity would be infinitesimally small, not 0.

Re: One divided by Infinity
MathsIsFun wrote: So, we know what 1/10 is, we know what 1/100 is, we know what 1/1,000,000,000 is (one billionth), but what is 1/Infinity?
One infinitieth!
School is practice for the future. Practice makes perfect. But - nobody's perfect, so why practice?

Re: One divided by Infinity
And what you wrote in code, Zach, is called an "infinite loop" (though no such loop has ever gone on for infinity, at least not as far as I know!)

Re: One divided by Infinity
Yes. There's actually no point to the END, as I've added no loop exit factor.

Super Member
Re: One divided by Infinity
that's great i hope u had fun
I come back stronger than a powered-up Pac-Man
I bought a large popcorn @ the cinema the other day, it was pretty big...some might even say it was "large

Re: One divided by Infinity
Well, I did.

Power Member
Re: One divided by Infinity
haw haw ho

Re: One divided by Infinity
I think it should be 1 infinitieth or just plain 0.

Re: One divided by Infinity
1 infinitieth gets my vote! Can't be zero. Because that would mean that an infinite number of zeros would make one, and I have tried multiplying zero by some pretty big numbers and it is always zero.

Re: One divided by Infinity
In mathematics, 1 divided by zero is construed to be infinity..... There's a reason for that: when 1 is divided by an extremely small number, the resultant is a very big number. On the other hand, when 1 is divided by an extremely large number, say a centillion, the resultant is so small that it is almost zero, repeat ALMOST zero, not zero.... But what happens when we divide 1 by a number as big as infinity? The result is zero. When 1 is divided by an infinitely large number, the result is an infinitely small number, and when this infinitely small number is added to itself an infinite number of times, the result would be exactly 1, not any more. This is difficult to prove, but the logic behind it is perfectly okay. If you are interested in knowing more about infinity, use any search engine and get information about Hilbert's Hotel.
Character is who you are when no one is looking.

Re: One divided by Infinity
How can infinity exist???

Re: One divided by Infinity
Wow, this kind of conversation is usually reserved for Buddhist Monks sitting on the side of mountains in the Himalayas. The Existence of Infinity. I guess if you can imagine it, then it exists, at least in your mind.

Re: One divided by Infinity
LOL !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! I suppose there's no other way of it existing, because every number is impossible to see, and is all in the mind.

Re: One divided by Infinity
Infinity is not one of the "Real" numbers (i.e. it is not a 3 or a 7.01, or even a -1/7th), but nonetheless we can still use it. For example: y = 5 - 1/x. What happens as x -> infinity? Answer: y -> 5.

Re: One divided by Infinity
Yes, you are right. Infinity is difficult to comprehend for others, not for mathematicians.

Power Member
Re: One divided by Infinity
Zach wrote: To show off my QBasic cleverness, here it is in a machine language; REM START
that would probably result in "overflow error" =P

Re: One divided by Infinity
Let's consider the problem to find out the value of 1/infinity. Here the numerator does not suggest any specific law to find out the value of the fraction, and a fraction having numerator 1 can take any possible value, whether finite or infinite. Now we consider the denominator, i.e., infinity. There are some specific rules to find out the value of an expression having infinity as a term:
a) If infinity appears in the numerator, the fraction takes the value infinity.
b) If infinity appears in the denominator, the fraction takes the value zero.
Here rule (b) about infinity suggests that the value of this fraction 1/infinity should be zero, and the numerator 1 expresses the possibility of this value, with no contradiction between the results obtained. Hence the value of 1/infinity must be zero.

Re: One divided by Infinity
abhishek_rttc wrote:
a) If infinity appears in the numerator the fraction takes the value infinity
b) If infinity appears in the denominator the fraction takes the value zero.
Ahhh ... but who came up with the rules? And what is infinity/infinity? Does rule a) or b) apply?
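The y = 5 - 1/x example from the thread is easy to watch numerically; a tiny Python sketch (my own illustration):

```python
def y(x: float) -> float:
    """The example from the thread: y = 5 - 1/x."""
    return 5.0 - 1.0 / x

# As x grows without bound, 1/x shrinks toward 0, so y creeps up toward 5
# but never reaches it for any finite x.
for x in (10, 1_000, 1_000_000):
    print(x, y(x))
```

This is exactly the limit idea behind saying 1/infinity "is" 0: for every finite x the quotient 1/x is still positive, but it can be made smaller than any chosen tolerance.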
Paper No 7th International Conference on Multiphase Flow ICMF 2010, Tampa, FL USA, May 30-June 4, 2010 A Diffusion-Inertia Model for Predicting Aerosol Dispersion and Deposition in Turbulent Flows R. V. Mukin, N. I. Drobyshevsky, A. S. Filippov, V. F. Strizhov and L. I. Zaichik Nuclear Safety Institute of the Russian Academy of Sciences, Moscow, 115191, Russia Keywords: Aerosol particles, turbulent flows, deposition, duct, bend The objective of the paper is twofold: (i) to present a model (the so-called diffusion-inertia model) for predicting dispersion and deposition of aerosol particles in two-phase turbulent flows and (ii) to examine the performance of this model as applied to the flows in straight ducts and circular bends. The model predictions compare reasonable well with both experimental data and Lagrangian tracking simulations coupled with fluid DNS or LES. The existing strategies of modeling turbulent two-phase flows can be subdivided into two groups depending on the Lagrangian tracking and Eulerian continuum approaches for handling the particulate phase. In the framework of the Lagrangian method, the particles are assumed to encounter randomly a series of turbulent eddies, and the macroscopic particle properties are determined solving stochastic equations along separate trajectories. As a consequence, such a method requires tracking a very large number of particle trajectories to achieve statistically invariant solution. As the size of particles decreases, the representative number of realizations should increase because of the increasing contribution of particle interactions with turbulent eddies of smaller and smaller scale. Thus, this technique, especially when coupling with DNS or LES for the computation of fluid turbulence, provides a very useful research tool of investigating particle-laden flows, but it can be too expensive for engineering calculations. 
The Eulerian method deals with the particulate phase in much the same manner as with the carrier fluid phase. Therefore, the two-fluid modeling technique is computationally very efficient, as it allows us to use governing equations of the same type for both phases. In addition, the description of fine particles does not cause great difficulties, because the problem of the transport of particles with vanishing response times reduces to the turbulent diffusion of a passive impurity. Overall, the Lagrangian tracking and Eulerian continuum modeling methods complement each other. Each method has its advantages and, consequently, its own field of application. The Lagrangian method is more applicable for non-equilibrium flows (e.g., high-inertia particles, dilute dispersed media), while the Eulerian method is preferable for flows which are close to equilibrium (e.g., low-inertia particles, dense dispersed media). Since the particulate phase combines simultaneously the properties of a continuum medium and of discrete particles, the situation with these two approaches resembles the well-known "wave-particle" duality in the micro-world. To simulate the dispersion of low-inertia particles in turbulent flows, the Eulerian models of diffusion type appear to be very efficient. In Zaichik et al. (1997) and Zaichik et al. (2004), a simplified Eulerian model called the diffusion-inertia model (DIM) was developed. This model was based on a kinetic equation for the probability density function (PDF) of the particle velocity distribution (Zaichik (1997), Zaichik et al. (1999), Zaichik et al. (2004)) and was coupled with fluid RANS in the frame of one-way coupling. The DIM was applied to simulate various turbulent flows laden with low-inertia particles, and it was incorporated in the CFD code SATURNE for modelling aerosol transport in ventilated rooms (Nerisson et al. (2007)).
Eulerian models of diffusion type were also proposed in coupling with DNS and LES approaches for calculating the turbulent carrier fluid (Druzhinin (1995), Ferry & Balachandar (2001), Rani & Balachandar (2003), Shotorban & Balachandar (2006), Zaichik et al. (2009)). The advantage of the Eulerian diffusion-type models is that the particle velocity can be explicitly expressed in terms of the properties of the carrier fluid flow. By this means, one avoids the need to solve the momentum balance equations for the particulate phase, and the problem of modelling the dispersion of the particulate phase amounts to solving a sole equation for the particle concentration. Consequently, computational times are seriously shortened as compared to full two-fluid Eulerian models. The disadvantage is that these models are applicable only to two-phase flows laden with low-inertia particles. For example, the DIM is valid when the particle response time is less than the integral timescale of fluid turbulence. Nevertheless, these models are capable of predicting the main trends of particle distribution, including the effect of preferential accumulation due to turbophoresis, in a fairly wide range of particle inertia. In this paper, we extend the DIM to include the back-effect of particles on the fluid turbulence in the frame of two-way coupling. Moreover, the so-called inertia and crossing-trajectory effects are incorporated into the model, and the boundary condition for the particle concentration equation is refined. This extended model is applied to the three-dimensional simulation of aerosol deposition in straight ducts and circular bends, when the transport of particles is caused by the simultaneous action of diffusion, turbophoresis, gravity, and centrifugal force.
Nomenclature

a0        dimensionless acceleration magnitude
D         duct diameter
D_B       Brownian diffusivity
D_T       turbulent diffusivity of noninertial admixture
D_T,ij    particle turbulent diffusion tensor
De        Dean number
d_p       particle diameter
F_i       acceleration of body forces acting on particles (e.g., gravity)
J         deposition flow rate
j+        deposition coefficient
Kn        Knudsen number
k_B       Boltzmann constant
L         turbulence spatial macroscale
M         mass particle loading of the fluid
R_b       radius of curvature of the bend
Ro        curvature ratio
Sc_B      Schmidt number of Brownian diffusion
Sc_T      turbulent Schmidt number
St        Stokes number
T_E       Eulerian integral timescale
T_L       Lagrangian integral timescale
T_Lp      eddy-particle interaction timescale
U         average fluid velocity
U_m       mean axial fluid velocity
U_s       fluid velocity seen by particles
u_*       wall friction velocity
<u'u'>    fluid kinetic stresses
V         average particle velocity
V_r       relative velocity
v         particle velocity
<v'v'>    particle kinetic stresses
y         wall-normal coordinate
x         spatial coordinate

Greek letters

alpha     turbulence constant
eta       deposition efficiency
nu_f      fluid kinematic viscosity
nu_T      turbulent kinematic viscosity
          penetration efficiency
tau_k     Kolmogorov timescale
tau_p     particle response time
tau_p0    Stokes particle response time
tau_T     Taylor time microscale
tau+      nondimensional particle response time
Phi       particle volume fraction
chi       rebound coefficient
Psi       autocorrelation function

Mathematical formulation

The governing equation for the concentration of low-inertia particles is given by (see Zaichik et al. (2010))

    dPhi/dt + d/dx_i { [ U_i + tau_p ( F_i - DU_i/Dt ) ] Phi }
        = d/dx_i [ ( D_B delta_ij + D_T,ij ) dPhi/dx_j + tau_p d(<u_i'u_j'> Phi)/dx_j ]   (1)

By this means, for low-inertia particles, namely, when the particle response time is shorter than the turbulence time macroscale, the conservation equation set is reduced to a diffusion-type equation for the particle concentration, and hence one does not require solution of conservation equations for the momentum of the particulate phase. This approach is called the diffusion-inertia model (DIM).
In the limit of zero-inertia particles (tau_p -> 0), Eq. (1) becomes the conventional diffusion equation

    dPhi/dt + d(U_i Phi)/dx_i = d/dx_i [ ( D_B delta_ij + D_T,ij ) dPhi/dx_j ]   (2)

with D_T,ij = <u_i'u_j'> T_L being the diffusion tensor of a noninertial admixture. In comparison with (2), Eq. (1) allows us to take into account a number of effects caused by the particle inertia: (i) the impact of gravity and other body forces, (ii) the so-called inertial bias effect, i.e., transport by reason of the deviation of particle trajectories from the fluid streamlines, (iii) the turbulent migration (turbophoresis) due to the gradients of velocity fluctuations, and (iv) the inertia and crossing-trajectory effects on particle turbulent diffusion.

The response time of aerosol particles is given by

    tau_p = tau_p0 [ 1 + Kn ( A1 + A2 exp(-A3/Kn) ) ] / ( 1 + 0.15 Re_p^0.687 ),

where, according to Talbot et al. (1980), A1 = 1.20, A2 = 0.41, and A3 = 0.88. The particle Reynolds number Re_p appearing in this expression is based on the particle diameter, the mean relative velocity between the phases, and a contribution of the fluid turbulence energy; the full expression is given in Zaichik et al. (2010). The Brownian diffusivity is equal to

    D_B = [ k_B T / (3 pi rho_f nu_f d_p) ] [ 1 + Kn ( A1 + A2 exp(-A3/Kn) ) ].

In a quasi-isotropic approximation that corresponds to averaging over different directions, Eq. (1) and the relative velocity V_r reduce to scalar forms, Eqs. (3) and (4), in which the drift velocity is V_r = tau_p ( F - DU/Dt ) and the particle turbulent diffusivity D_T is expressed through the turbulence energy k and the eddy-particle interaction timescale T_Lp. Note that, in (3) as compared to (1), the space dependence of the Brownian diffusivity is ignored. Solution of Eq. (1) or (3) right up to the wall (y = 0) is made difficult by the fact that the concentration of particles can rise steeply due to turbophoresis as y -> 0. Equations (1) and (3), as such, cease to be valid in the viscous sub-layer of the near-wall region of turbulent flow, where the quasi-equilibrium expressions used to determine the particulate kinetic stresses break down.
To solve the particle concentration equation up to the wall, we use the method of wall functions that has been extensively employed, starting from Launder & Spalding (1974), in modeling single-phase turbulent flows. In accordance with the wall-function method, we invoke, as the boundary condition, a relation between the flow rate of depositing particles J_w and the particle concentration Phi in the near-wall region outside the viscous sub-layer,

    J_w = (1 - chi) Phi ( gamma V_DT + V_CF ),   V_DT = V_DF + V_TR,   (5)

where U_n, F_n, and (DU/Dt)_n are the wall-normal components of fluid velocity, body-force acceleration, and fluid acceleration in the near-wall region. Here V_DT designates the 'diffusion-turbulence' component of the particle deposition rate, caused by diffusion (V_DF) and turbophoresis (V_TR), and V_CF designates the 'convection-force' component of the deposition rate induced by the action of convection and body forces in the near-wall region. The rebound coefficient chi measures the probability of the rebound of a particle from the wall and its return into the flow after collision. The surface is perfectly adsorbing if chi = 0, and particle deposition is absent if chi = 1. The parameter b quantifies the ratio of the 'convection-force' and 'diffusion-turbulence' components of the deposition rate; the correction factor gamma, which involves the combination exp(-b^2/2)/[1 + erf(b/sqrt(2))], is derived in Zaichik et al. (2010). Deposition is controlled by the 'convection-force' mechanism when b -> +infinity (gamma -> 0), and the deposition rate tends to zero when b -> -infinity, because the action of the near-wall forces then inhibits the motion of particles toward the wall.

The 'diffusion' component of the deposition rate is found by solving the diffusion equation in the viscous sub-layer for the fourth-degree law of rise of the turbulent diffusivity at high Schmidt numbers (Levich (1962), Kutateladze (1973)),

    V_DF ~ u_* Sc_B^(-3/4),   (6)

where Sc_B = nu_f/D_B is the Schmidt number of Brownian diffusion. The 'turbophoresis' component V_TR of the deposition rate is obtained by approximating a numerical solution in the near-wall region; the resulting algebraic fit in terms of tau+ is given in Zaichik et al. (2010) as Eq. (7).

The boundary condition (5), along with (6) and (7), is valid for particles with tau+ = tau_p u_*^2 / nu_f < 100 when the first grid node is chosen outside the viscous sub-layer (y+ = y u_*/nu_f > 20), where Phi changes weakly with variation of the normal distance from the wall.

In what follows, let us consider the governing equations for the carrier fluid. When the volume fraction of the particulate phase is small (Phi << 1), its effect on the continuity equation of incompressible fluid is negligible, and this is written as

    dU_i/dx_i = 0.   (8)

The fluid momentum balance equation is given by

    DU_i/Dt = -(1/rho_f) dP/dx_i + d/dx_j ( nu_f dU_i/dx_j - <u_i'u_j'> ) + A_i,   (9)

where A_i quantifies the back-effect of particles on the fluid momentum; it is proportional to the mass particle loading of the fluid, M = Phi rho_p/rho_f, and is determined using (4).

Turbulent flow characteristics are simulated on the basis of a two-equation turbulence model incorporating the equations of the kinetic turbulence energy and its dissipation, that is, the k-epsilon turbulence model. In the frame of this model, the fluid kinetic stresses are given by (see Zaichik et al. (2010))

    <u_i'u_j'> = (2/3) k delta_ij - nu_T ( dU_i/dx_j + dU_j/dx_i ),   (10)

where the turbulent viscosity nu_T depends on the mass loading M and on the turbulence production-to-dissipation ratio Pi_k/epsilon (the full expression is given in Zaichik et al. (2010)).

It is clear from (10) that, as distinct from the turbulent viscosity coefficient of the standard k-epsilon model, nu_T0 = C_mu k^2/epsilon, the coefficient nu_T incorporates two additional effects: (i) the presence of particles in the flow and (ii) the non-equilibrium of turbulence, which lies in a possible inequality between the production and the dissipation. If the particles are absent (M = 0) and equilibrium between the processes of production and dissipation takes place (Pi_k = epsilon), nu_T reduces to nu_T0. In the equilibrium approach (Pi_k = epsilon), which is valid, for example, for modelling the turbulent near-wall flow,

    nu_T = C_mu (1 + M f_u) k^2 / epsilon.

The turbulence energy and dissipation balance equations (11) and (12) retain the structure of the standard k-epsilon model: each transport term acquires a particle-modulation factor of the type (1 + M f_u), and additional dissipation and generation terms due to the particles appear on the right-hand sides (see Zaichik et al. (2010) for the full expressions). By this means, the standard k-epsilon model is modified in two aspects. Firstly, the modulation of turbulence due to particles is taken into consideration. Secondly, instead of the standard expression for the eddy viscosity coefficient, nu_T is assumed to be a function of the turbulence production-to-dissipation ratio Pi_k/epsilon. The values of the constants in (10)-(12) are taken as follows: C_mu = 0.09, sigma_k = 1.0, sigma_eps = 1.3, C_eps1 = 1.44, C_eps2 = 1.92. Moreover, C = 1.1, Sc_T = 0.9, and alpha = C_12 = 0.3.

Calculation results and discussion

The DIM, consisting of the particle concentration equation (3) and the boundary condition (5), is coupled with the fluid balance equations (8), (9), (11), and (12). The model advanced is evaluated against experiments and numerical simulations of aerosol deposition in straight ducts and circular bends.
The surface is assumed to be perfectly adsorbing, that is, the rebound coefficient chi is taken as zero in (5).

Aerosol deposition in straight ducts

First we examine the performance of the model for the deposition of particles in a vertical duct flow, when the gravity force does not exert a direct action on the deposition rate. It is a common convention to describe the deposition rate of particles from turbulent flow by the dependence of the deposition coefficient j+ = J_w/(Phi_b u_*), where Phi_b is the bulk volume particle fraction, on the particle inertia parameter tau+. In line with the primary mechanism governing the process of deposition, the entire range of particle inertia may be subdivided into three regimes: the diffusion regime (tau+ < 1), the turbophoresis regime (1 < tau+ < 100), and the inertia regime (tau+ > 100). The deposition process in the diffusion regime is mainly governed by Brownian and turbulent diffusion. In addition, some driving forces that cause transport of submicron particles (e.g., thermophoresis in non-isothermal flow) can play a significant role. In the situation when the diffusion mechanism plays the leading role, j+ declines monotonically with tau+ as a result of the decrease in the Brownian diffusivity as the aerosol size increases. The basic deposition mechanism of the turbophoresis regime is the turbulent migration of particles from the flow core, which is characterized by a high level of velocity fluctuation intensity, to the viscous sub-layer adjacent to the wall. This regime features a strong dependence of j+ on tau+. Kallio & Reeks (1989) and McLaughlin (1989) were the first to establish numerically the tendency of depositing particles to accumulate in the viscous sub-layer under the action of turbophoresis; this effect was reproduced in numerous later works. High-inertia particles (tau+ > 100) are weakly involved in the turbulent flow of the carrier fluid, which causes the deposition coefficient j+ in a vertical duct to decrease with tau+.
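The three regime boundaries above translate directly into a classification rule. The sketch below is illustrative Python, not part of the paper; the thresholds tau+ = 1 and tau+ = 100 are the ones stated in the text.

```python
def deposition_regime(tau_plus):
    """Classify particle inertia by the nondimensional response time
    tau+ = tau_p * u_*^2 / nu_f, using the regime boundaries from the text."""
    if tau_plus < 1:
        return "diffusion"        # Brownian/turbulent diffusion dominates
    elif tau_plus <= 100:
        return "turbophoresis"    # turbulent migration toward the wall dominates
    else:
        return "inertia"          # particles weakly coupled to the fluid turbulence

print(deposition_regime(0.1), deposition_regime(10), deposition_regime(500))
# diffusion turbophoresis inertia
```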
Figure 1: The deposition coefficient in vertical duct flows. (1-3) DIM: (1) Re = 10000, (2) Re = 20000, (3) Re = 50000; (4) experiment by Liu and Agarwal (1974); (5) DNS by McLaughlin (1989); (6) LES by Wang et al. (1997); (7) DNS by Marchioli et al. (2003); (8) DNS by Marchioli et al. (2007).

Fig. 1 presents the predictions of the deposition coefficient for the pipe flow conditions corresponding to the experiments by Liu & Agarwal (1974). To focus attention on the deposition mechanisms caused by the interaction of particles with turbulent eddies, the gravity and lift forces are neglected, and hence F_i = 0. In Fig. 1, the deposition coefficients obtained for duct flows using DNS (McLaughlin (1989), Marchioli et al. (2003), (2007)) and LES (Wang et al. (1997)) are shown as well. Note that, in the diffusion and turbophoresis regimes, the deposition process is mainly governed by the interaction of particles with near-wall turbulent eddies. Therefore, the deposition rates determined in round pipe and flat channel flows are hardly distinguishable. As is clear from Fig. 1, the DIM properly captures the dependence of j+ on tau+ at tau+ < 100. The deposition coefficient predicted for high-inertia particles systematically deviates from the measurements, because the model does not predict a decrease in j+ with tau+. Thus, the DIM can be successfully employed in predicting the deposition rate in the diffusion and turbophoresis regimes.

Aerosol deposition in circular bends

In what follows we focus our attention on the deposition of aerosol particles in bends. The hydrodynamic structure of these flows is complex: it is characterized by the existence of curved streamlines and recirculating regions. The key nondimensional parameters that govern the flow are the Reynolds number, defined as Re = D U_m/nu_f, and the Dean number, defined as De = Re/Ro^(1/2), where Ro = 2R_b/D is the curvature ratio. For high Dean numbers, the flow in the bend is mainly governed by the centrifugal force, which changes the flow pattern cardinally as compared to that in a straight duct. Figs. 2 and 3 show, respectively, the streamlines of the mean flow in the midplane and the streamlines of the secondary flow for the deflection angle of 90 degrees. The main features of the flow in the bend consist in separating the mean flow from the inner side, displacing it to the outer side, and generating a secondary flow in the form of a symmetric pair of counter-rotating helical vortices.

Figure 2: The streamlines of the mean flow in the midplane of the bend at Re = 10000 and De = 4225.

Figure 3: The streamlines of the secondary flow in the bend for the deflection angle of 90 degrees at Re = 10000 and De = 4225.

The total process of aerosol deposition can be measured by the penetration of particles, defined as the ratio of the particle flow rates in the outlet and inlet sections of the bend, G_out/G_in, or by the deposition efficiency eta, equal to one minus the penetration. Fig. 4 presents the deposition efficiency predicted in the 90-degree bend under the conditions corresponding to the experiment by Pui et al. (1987) for Re = 10000, De = 4225, Ro = 5.6, and rho_p/rho_f = 755. The inertia of particles is quantified by the Stokes number, defined as St = 2 tau_p U_m/D. In these circumstances, the deposition of particles is caused by the simultaneous action of diffusion, turbophoresis, gravity, and centrifugal force. However, the dominating mechanism is the centrifugal force, due to the curvature of the main flow and the formation of the secondary flow. As is clear from Fig. 4, the effect of the Stokes number predicted by the DIM is in good agreement with both the experimental data (Pui et al. (1987)) and simulations (Breuer et al. (2006), Berrouk & Laurence (2008)).

Figure 4: The effect of Stokes number on the deposition efficiency in the 90-degree bend. (1) DIM, (2) experiment by Pui et al. (1987), (3) LES by Breuer et al. (2006), (4) LES by Berrouk & Laurence (2008).

Fig. 5 demonstrates the effects of curvature ratio and Stokes number on the penetration of particles in the 90-degree bend at Re = 10000. Predictions are compared with experiments performed by McFarland et al. (1997) in a wide range of curvature ratio. It is obvious that the centrifugal effect increases as the curvature ratio decreases. Therefore, the penetration falls with both increasing St and decreasing Ro. As is clear, the DIM reasonably reproduces these effects. Some distinction between the predictions and the measurements is observed at small Stokes numbers, where the DIM overestimates the deposition rate.

Figure 5: The effects of curvature ratio and Stokes number on the penetration of particles in the 90-degree bend. (1-3) DIM, (4-6) experiment by McFarland et al. (1997): (1, 4) Ro = 4; (2, 5) Ro = 10; (3, 6) Ro = 20.

Fig. 6 compares the deposition efficiency as a function of bend angle with the experimental data by Peters & Leith (2004) at Re = 203000 and Ro = 5. These experiments were carried out in bends of D = 0.152 m at a mean velocity of 20 m/s, and hence they were the first to be directly applicable to industrial bends. As is clear from Fig. 6, the deposition efficiency increases with bend angle for a given particle size. Taking into consideration the great uncertainty of the measurements, Fig. 6 indicates that the DIM can reasonably describe the deposition of aerosols at such high Reynolds numbers, which are typical of industrial ducts.

Figure 6: The effects of particle size and bend angle on the deposition efficiency. (1-3) DIM, (4-6) experiment by Peters & Leith (2004): (1, 4) 45 degrees; (2, 5) 90 degrees; (3, 6) 180 degrees.
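The nondimensional parameters quoted for the bend cases are easy to cross-check. The sketch below is illustrative Python, not from the paper; it applies the definitions in the text, De = Re/Ro^(1/2) with Ro = 2R_b/D and St = 2 tau_p U_m/D, to the Pui et al. case Re = 10000, Ro = 5.6.

```python
import math

def dean_number(Re, Ro):
    # De = Re / sqrt(Ro), with curvature ratio Ro = 2*Rb/D (definitions from the text)
    return Re / math.sqrt(Ro)

def stokes_number(tau_p, U_m, D):
    # St = 2 * tau_p * U_m / D, as defined for the bend-deposition results
    return 2.0 * tau_p * U_m / D

# Pui et al. (1987) case: Re = 10000, Ro = 5.6
De = dean_number(10000, 5.6)
print(round(De))  # ~4226, matching the quoted De = 4225 to rounding
```

For the quoted case the computed Dean number agrees with the value De = 4225 given in the text to within rounding.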
Conclusions

The paper is aimed at the development and application of the DIM for the simulation of dispersion and deposition of aerosol particles in two-phase turbulent flows. The model stems from a kinetic equation for the probability density function of the velocity distribution of particles whose response times do not exceed the integral timescale of fluid turbulence. The salient feature of the DIM consists in expressing the particle velocity as an expansion in terms of the properties of the carrier fluid, with the particle response time as the small parameter. By this means, the problem of modelling the dispersion of the particulate phase reduces to solving a sole equation for the particle concentration. Thus, computational times are seriously shortened as compared to full two-fluid Eulerian models. The model presented is capable of predicting the main trends of particle distribution, including the effect of preferential accumulation due to turbophoresis.

The DIM has been incorporated in a CFD code and coupled with fluid RANS in the frame of two-way coupling. Simulations of aerosol deposition in straight ducts and circular bends have been performed. The results of deposition efficiency obtained using the DIM are found to be in encouraging agreement with both experimental data and Lagrangian tracking simulations coupled with fluid DNS or LES.

Acknowledgements

This work was supported by the Russian Foundation for Basic Research (grant number 09-08-00084).

References

Berrouk A.S., Laurence D., Stochastic modelling of aerosol deposition for LES of 90-degree bend turbulent flow, Int. J. Heat and Fluid Flow, Vol. 29, 1010-1028 (2008)

Breuer M., Baytekin H.T., Matida E.A., Prediction of aerosol deposition in 90-degree bends using LES and an efficient Lagrangian tracking method, J. Aerosol Science, Vol. 37, 1407-1428 (2006)

Druzhinin O.A., On the two-way interaction in two-dimensional particle-laden flows: the accumulation of particles and flow modification, J. Fluid Mech., Vol. 297, 49-76 (1995)

Ferry J., Balachandar S., A fast Eulerian method for disperse two-phase flow, Int. J. Multiphase Flow, Vol. 27, 1199-1226 (2001)

Kallio G.A., Reeks M.W., A numerical simulation of particle deposition in turbulent boundary layer, Int. J. Multiphase Flow, Vol. 15, 433-446 (1989)

Kutateladze S.S., Near-wall Turbulence, Nauka, Novosibirsk (1973)

Launder B.E., Spalding D.B., The numerical computation of turbulent flows, Comput. Meth. Appl. Mech. Eng., Vol. 3, 269-289 (1974)

Levich V.G., Physicochemical Hydrodynamics, Prentice-Hall, NJ (1962)

Liu B.Y.H., Agarwal J.K., Experimental observation of aerosol deposition in turbulent flow, J. Aerosol Sci., Vol. 5, 145-155 (1974)

Marchioli C., Giusti A., Salvetti M.V., Soldati A., Direct numerical simulation of particle wall transfer in upward turbulent pipe flow, Int. J. Multiphase Flow, Vol. 29, 1017-1038 (2003)

Marchioli C., Picciotto M., Soldati A., Influence of gravity and lift on particle velocity statistics and transfer rates in turbulent vertical channel flow, Int. J. Multiphase Flow, Vol. 33, 227-251 (2007)

McFarland A.R., Gong H., Muyshondt A., Wente W.B., Anand N.K., Aerosol deposition in bends with turbulent flow, Environ. Sci. Technol., Vol. 31, 3371-3377 (1997)

McLaughlin J.B., Aerosol particle deposition in numerically simulated channel flow, Phys. Fluids A, Vol. 1, 1211-1224 (1989)

Nerisson P., Ricciardi L., Simonin O., Fazileabasse J., Modelling aerosol transport and deposition in a ventilated room, in: Proceedings of the 6th International Conference on Multiphase Flow, Leipzig, Germany (2007)

Peters T.M., Leith D., Particle deposition in industrial duct bends, Ann. Occup. Hyg., Vol. 48, 483-490 (2004)

Pui D.Y.H., Romay-Novas F., Liu B.Y.H., Experimental study of particle deposition in bends of circular cross-section, Aerosol Sci. and Technol., Vol. 7, 301-315 (1987)

Rani S.L., Balachandar S., Evaluation of the equilibrium Eulerian approach for the evolution of particle concentration in isotropic turbulence, Int. J. Multiphase Flow, Vol. 29, 1793-1816 (2003)

Shotorban B., Balachandar S., Particle concentration in homogeneous shear turbulence via Lagrangian and equilibrium Eulerian approaches, Phys. Fluids, Vol. 18, 065105 (2006)

Talbot L., Cheng R.K., Schefer R.W., Willis D.R., Thermophoresis of particles in a heated boundary layer, J. Fluid Mech., Vol. 101, 737-758 (1980)

Wang Q., Squires K.D., Chen M., McLaughlin J.B., On the role of the lift force in turbulence simulations of particle deposition, Int. J. Multiphase Flow, Vol. 23, 749-763 (1997)

Zaichik L.I., Pershukov V.A., Kozelev M.V., Vinberg A.A., Modeling of dynamics, heat transfer, and combustion in two-phase turbulent flows, Exper. Thermal and Fluid Science, Vol. 15, 291-322 (1997)

Zaichik L.I., Modelling of the motion of particles in non-uniform turbulent flow using the equation for the probability density function, J. Appl. Mathematics and Mechanics, Vol. 61, 127-133 (1997)

Zaichik L.I., A statistical model of particle transport and heat transfer in turbulent shear flow, Physics of Fluids, Vol. 11, 1521-1534 (1999)

Zaichik L.I., Soloviev S.L., Skibin A.P., Alipchenkov V.M., A diffusion-inertia model for predicting dispersion of low-inertia particles in turbulent flows, in: Proceedings of the 5th International Conference on Multiphase Flow, Yokohama, Japan (2004)

Zaichik L.I., Oesterle B., Alipchenkov V.M., On the probability density function model for the transport of particles in anisotropic turbulent flow, Physics of Fluids, Vol. 16, 1956-1964 (2004)

Zaichik L.I., Simonin O., Alipchenkov V.M., An Eulerian approach for large eddy simulation of particle transport in turbulent flows, Journal of Turbulence, Vol. 10, 4 (2009)

Zaichik L.I., Drobyshevsky N.I., Filippov A.S., Mukin R.V., Strizhov V.F., A diffusion-inertia model for predicting dispersion and deposition of low-inertia particles in turbulent flows, Int. J. of Heat and Mass Transfer, Vol. 53, 154-162 (2010)
{"url":"http://ufdc.ufl.edu/UF00102023/00058","timestamp":"2014-04-16T07:46:44Z","content_type":null,"content_length":"48424","record_id":"<urn:uuid:58aa2222-931b-4202-b005-62acd1aaf039>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the limit: see the attachment. Please, don't throw around the word derivative, just yet.
{"url":"http://openstudy.com/updates/51058918e4b0ad57a5636666","timestamp":"2014-04-20T14:07:28Z","content_type":null,"content_length":"50854","record_id":"<urn:uuid:8cf199cd-1982-4757-b0b4-752492a7860f>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
{-# LANGUAGE TypeFamilies, PatternGuards, GeneralizedNewtypeDeriving #-}
-- | @TrieQueue e@ is a priority queue @IQueue@ instance satisfying @QueueKey (TrieQueue e) ~ [e]@,
-- with the property that this queue frequently performs better than any other queue
-- implementation in this package for keys of type @[e]@.
-- This particular implementation is highly experimental and possibly a genuinely new data structure. See the source code for details.
-- However, for many cases this priority queue may be used for a heap sort that runs faster than the "Data.List" implementation,
-- or the vanilla "Data.Queue.PQueue" implementation.
module Data.Queue.TrieQueue (TrieQueue) where

import Control.Arrow((***))
import Control.Monad

import Data.Semigroup
--import Data.Monoid
import Data.Maybe

import Data.Queue.Class
import Data.Queue.QueueHelpers
import Data.Queue.Fuse.PHeap
--import Data.Queue.TrieQueue.Edge
--import Data.Queue.TrieQueue.MonoidQueue
import Data.Queue.TrieQueue.TrieLabel

import GHC.Exts
import Prelude hiding (null)

-- On the back end it uses something called a /monoid queue/,
-- which takes ordered keys associated with monoid values and returns (k, m') pairs where m' is the
-- concatenation of every monoid value associated with k, with no guarantees made upon the order of
-- the concatenation. Essentially, it is a priority queue which internally "merges" values with equal keys.
-- See Data.Queue.Fuse.PHeap for details and a list of alternative implementations.

-- After some experimentation, trie edge labels are currently implemented as vanilla lists; however,
-- the implementation is modularized in Data.Queue.TrieQueue.EdgeLabel.
-- (Other possible implementations include mergeable deques and Data.Sequence finger trees.)

-- A trie, now, consists of an edge label xs, the number of strings ending with that label, and a
-- monoid queue associating characters in the string to tries consisting of strings prefixed by that
-- character.  This is the key variation in this implementation, and it exploits the fact that
-- random-access string lookup is not required in a priority queue: only extract-min and insert,
-- operations perfectly well suited to a monoid queue.
-- Note that the monoid values in the monoid queue are themselves tries, which get recursively merged
-- as necessary.
data Trie e = Trie (Label e) {-# UNPACK #-} !Int (FusePHeap e (Trie e)) deriving (Show)

newtype TrieQueue e = TQ (HeapQ (Trie e)) deriving (Monoid, Show)

-- This monoid instance can now get exploited for great justice by the monoid queue.
instance Ord e => Semigroup (Trie e) where
    sappend = mergeTrie
    sconcat = mergeTries

--{-# INLINE forceOrd #-}
{-forceOrd :: Ord e => Trie e -> x -> x
forceOrd t x = cmp t `seq` x where
    cmp :: Ord e => Trie e -> (e -> e -> Ordering)
    cmp _ = compare-}

catTrie :: Ord e => Label e -> Trie e -> Trie e
xs `catTrie` Trie ys yn yQ = Trie (xs `mappend` ys) yn yQ

consTrie :: Ord e => e -> Trie e -> Trie e
x `consTrie` Trie xs xn xQ = Trie (x `cons` xs) xn xQ

mergeTrie :: Ord e => Endo (Trie e)
xT@(Trie xs0 xn xQ) `mergeTrie` yT@(Trie ys0 yn yQ) =
    merging xs0 ys0 split (tail xT yT) (tail yT xT) xy where
        end (Trie _ xn xQ) x xs = (x, Trie xs xn xQ)
        split pfx x xs y ys = let xEnd = end xT x xs; yEnd = end yT y ys in
            Trie pfx 0 (xEnd `insert` singleton yEnd)
        tail (Trie xs xn xQ) yT y ys = let yEnd = end yT y ys in Trie xs xn (yEnd `insert` xQ)
        xy = Trie xs0 (xn + yn) (xQ `merge` yQ)

--{-# INLINE compactTrie #-}
compactTrie :: Ord e => Trie e -> Maybe (Trie e)
compactTrie (Trie xs 0 xQ)
    | null xQ = Nothing
    | Just (y, t) <- extractSingle xQ = Just (xs `catTrie` (y `consTrie` t))
compactTrie t = Just t

data Acc e f = A {-# UNPACK #-} !Int e f

-- Note that a monoid queue is built up and (sometimes) torn down for each character.
-- If every label on every trie being merged matches on the first character, then the monoid queue
-- simply automatically becomes a singleton, a case handled by compactTrie with a specialized
-- implementation based on extractSingle.  If the labels do not match, or there are tries being
-- merged with empty labels, then the monoid queue is exactly what we needed anyway.
mergeTries :: Ord e => Fusion (Trie e)
mergeTries ts0 = compactTrie (Trie mempty nEmpty (combine es qs)) where
    combine es qs = es `insertAll` mergeAll qs
    A nEmpty qs es = foldl procEmpty (A 0 [] []) ts0
    A nEmpty qs es `procEmpty` Trie xs n q = case uncons xs of
        Nothing      -> A (n + nEmpty) (q:qs) es
        Just (x, xs) -> A nEmpty qs ((x, Trie xs n q):es)
--mergeTries = fusing' mergeTrie

{-# INLINE fin #-}
fin :: Ord e => Trie e -> Maybe (Trie e)
fin (Trie _ 0 q) | null q = Nothing
fin t = Just t

-- If there are strings ending at this label, we obviously process those. Otherwise, we recurse to
-- the first hanging trie from the monoid queue.  If it is not exhausted, then we can simply replace
-- the value in the monoid queue; if it is exhausted we may possibly compact the trie
-- (e.g. if there is now only one child trie and we may in fact combine those edges).
extractTrie :: Ord e => Trie e -> (Label e, Maybe (Trie e))
extractTrie (Trie xs (n+1) xQ) = (xs, inline compactTrie (Trie xs n xQ))
extractTrie (Trie xs 0 xQ)
    | Just (y, t) <- top xQ, (ys, t') <- extractTrie t
        = (xs `mappend` (y `cons` ys), case t' of
            Nothing -> delete xQ >>= inline compactTrie . Trie xs 0
            Just t' -> Just (Trie xs 0 $ replace t' xQ))
extractTrie _ = error "Failure to detect empty queue"

instance Ord e => IQueue (TrieQueue e) where
    type QueueKey (TrieQueue e) = [e]
    empty = mempty
    merge = mappend
    mergeAll = mconcat
    singleton = TQ . HQ 1 . pJust . single
    insertAll = mappend . fromListTrie
    fromList = fromListTrie
    extract (TQ (HQ n (Pt t))) = fmap ((labelToList *** (TQ . HQ (n-1) . Pt)) . extractTrie) t
    null (TQ (HQ _ (Pt Nothing))) = True
    null _ = False
    size (TQ (HQ n _)) = n
    toList (TQ (HQ _ (Pt t))) = maybe [] trieToList t where
        trieToList (Trie xs xn xQ) = replicate xn xs ++ [xs ++ y:ys | (y, t) <- toList xQ, ys <- trieToList t]
    toList_ (TQ (HQ _ (Pt t))) = maybe [] trieToList_ t where
        trieToList_ (Trie xs xn xQ) = replicate xn xs ++ [xs ++ y:ys | (y, t) <- toList_ xQ, ys <- trieToList_ t]

fromListTrie :: Ord e => [[e]] -> TrieQueue e
fromListTrie = TQ . liftM2 HQ length (Pt . mergeTries . map single)

single :: Ord e => [e] -> Trie e
single xs = Trie (labelFromList xs) 1 empty
Optimization Problem

March 29th 2009, 04:30 PM  #1

Detergent will be packaged into a square-bottomed box with a volume of 450 cubic inches. What dimensions should the box have to minimize the surface area?

I am a bit lost with this. Can someone please show me the steps on how to correctly solve this? Thank you for any help.

March 29th 2009, 04:38 PM  #2

let x = side length of square bottom
h = box height

V = (x^2)h = 450
A = 2x^2 + 4xh

Using the volume equation, solve for h in terms of x, then substitute for h in the surface area formula to get it in terms of a single variable. Find dA/dx and minimize.

March 29th 2009, 04:46 PM  #3

I understand what you wrote and that is how I solved it, initially. However, my answer just didn't seem right because that is what you would get if you solved it without calculus.

My work:

V = L^2(h)
450 = L^2(h)
h = 450/(L^2)

SA = (2L^2) + (1800L/(L^2))

I took the derivative:

SA' = 4L - (1800/(L^2))

Set it equal to zero and solved for L:

0 = 4L - (1800/(L^2))
-4L = -(1800/(L^2))
-4L^3 = -1800
L^3 = 450
L = 7.66

From here, I plugged it back into the volume formula to solve for h:

450 = 58.6756h
h = 7.669 = 7.67

Basically, my height, width, and length would approximately be 7.66 inches to minimize the surface area. Is this correct?

March 29th 2009, 04:55 PM  #4

Quote (post #3): "I understand what you wrote ... Is this correct?"

remember your work on this ... the minimum surface area for a rectangular prism of fixed volume is a cube.
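The cube conclusion in the last reply is easy to check numerically. The sketch below is illustrative and not from the thread; the function names (`surface_area`, `argmin_scan`) are my own. It minimizes SA(L) = 2L^2 + 1800/L by brute force and compares against the closed-form answer L = 450^(1/3):

```python
# Numerical check: for V = 450 in^3, SA(L) = 2L^2 + 4Lh with h = 450/L^2
# is minimized at L = 450**(1/3), i.e. the box is a cube.
def surface_area(L, V=450.0):
    h = V / L**2                     # height forced by the volume constraint
    return 2 * L**2 + 4 * L * h

def argmin_scan(f, lo, hi, steps=200_000):
    # brute-force scan; fine for a one-variable sanity check
    best_x, best_y = lo, f(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x

L_star = argmin_scan(surface_area, 1.0, 20.0)
print(round(L_star, 2))               # ≈ 7.66
print(round(450 ** (1/3), 2))         # closed-form answer, also ≈ 7.66
print(round(450 / L_star**2, 2))      # height ≈ 7.66 too, so the box is a cube
```

The scan agrees with the calculus in post #3: the derivative 4L - 1800/L^2 vanishes exactly at L^3 = 450.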
High order of accuracy difference schemes for the inverse elliptic problem with Dirichlet condition

The overdetermination problem for an elliptic differential equation with Dirichlet boundary condition is considered. The third and fourth orders of accuracy stable difference schemes for the solution of this inverse problem are presented. Stability, almost coercive stability, and coercive inequalities for the solutions of difference problems are established. As a result of the application of established abstract theorems, we get well-posedness of high order difference schemes of the inverse problem for a multidimensional elliptic equation. The theoretical statements are supported by a numerical example.

MSC: 35N25, 39A14, 39A30, 65J22.

difference scheme; inverse elliptic problem; high order accuracy; well-posedness; stability; almost coercive stability; coercive stability

1 Introduction

Many problems in various branches of science lead to inverse problems for partial differential equations [1-3]. Inverse problems for partial differential equations have been investigated extensively by many researchers (see [3-18] and the references therein).

Consider the inverse problem of finding a function u and an element p for the elliptic equation in an arbitrary Hilbert space H with a self-adjoint positive definite operator A. Here, λ is a known number, and φ, ξ, and ψ are given elements of H. Existence and uniqueness theorems for problem (1.1) in a Banach space are presented in [5]. The first and second order of accuracy stable difference schemes for this problem have been constructed in [15]. High order of accuracy stable difference schemes for nonlocal boundary value elliptic problems are presented in [19-21]. Our aim in this work is the construction of third and fourth order of accuracy stable difference schemes for the inverse problem (1.1). In the present paper, the third and fourth orders of accuracy difference schemes for the approximate solution of problem (1.1) are presented.
Stability, almost coercive stability, and coercive stability inequalities for the solution of these difference schemes are established. In the application, we consider the inverse problem for the multidimensional elliptic equation with Dirichlet condition Here, is the open cube in the n-dimensional Euclidean space with boundary S, , ( ), , , ( ), ( , ) are given smooth functions, ( ), and , are given numbers. The first and second orders of accuracy stable difference schemes for equation (1.2) are presented in [15]. We construct the third and fourth orders of accuracy stable difference schemes for problem The remainder of this paper is organized as follows. In Section 2, we present the third and fourth order difference schemes for problem (1.1) and obtain stability estimates for them. In Section 3, we construct the third and fourth order difference schemes for problem (1.2) and establish their well-posedness. In Section 4, the numerical results are given. Section 5 is our conclusion. 2 High order of accuracy difference schemes for (1.1) and stability inequalities We use, respectively, the third and fourth order accuracy approximate formulas for . Here, , is a notation for the greatest integer function. Applying formulas (2.1) and (2.2) to , we get, respectively, the third order of accuracy difference problem and the fourth order of accuracy difference problem for inverse problem (1.1). For solving of problems (2.3) and (2.4), we use the algorithm [14], which includes three stages. For finding a solution of difference problems (2.3) and (2.4) we apply the substitution In the first stage, applying approximation (2.5), we get a nonlocal boundary value difference problem for obtaining . In the second stage, we put and find . Then, using the formula we define an element p. In the third stage, by using approximation (2.5), we can obtain the solution of difference problems (2.3) and (2.4). 
In the framework of the above mentioned algorithm for , we get the following auxiliary nonlocal boundary value difference scheme: for the third order of accuracy difference problem (2.3) and for the fourth order of accuracy difference problem (2.4). For a self-adjoint positive definite operator A, it follows that [22] is a self-adjoint positive definite operator, where , , I is the identity operator. Moreover, the bounded operator D is defined on the whole space H. Now we give some lemmas that will be needed below. Lemma 2.1The following estimates hold[23]: Lemma 2.2The following estimate holds[23]: has an inverse such that and the estimate is valid. Proof We have Applying estimates of Lemma 2.1, we have By using the triangle inequality, formula (2.10), estimates (2.9), (2.12), and Lemma 2.2 of paper [15], we obtain for any small positive parameter τ. From that follows estimate (2.9). Lemma 2.3 is proved.□ has an inverse and the estimate is satisfied. Proof We can get where G is defined by formula (2.11) and Applying estimates of Lemma 2.1, we have Using the triangle inequality, formula (2.14), estimates (2.13), (2.15), and Lemma 2.3 of paper [15], we get for any small positive parameter τ. From that follows estimate (2.13). Lemma 2.4 is proved.□ Let and be the spaces of all H-valued grid functions in the corresponding norms, Theorem 2.1Assume thatAis a self-adjoint positive definite operator, and ( ). Then, the solution of difference problem (2.3) obeys the following stability estimates: Proof We will obtain the representation formula for the solution of problem (2.7). Applying the formula [23], we get By using formula (2.19) and nonlocal boundary conditions we get the system of equations Solving system (2.20), we obtain Therefore, difference problem (2.7) has a unique solution which is defined by formulas (2.19), (2.21), and (2.22). 
Applying formulas (2.19), (2.21), (2.22), and the method of the monograph [23], we The proofs of estimates (2.17), (2.18) are based on formula (2.5) and estimate (2.23). Using formula (2.5) and estimates (2.23), (2.17), we obtain inequality (2.16). Theorem 2.1 is proved.□ Theorem 2.2Suppose thatAis a self-adjoint positive definite operator, and ( ). Then, the solution of difference problem (2.4) obeys the stability estimates (2.16), (2.17), and (2.18). Proof By using the representation formula (2.19) for the solution of (2.8), formula (2.19), and the nonlocal boundary conditions we obtain the system of equations Solving system (2.24), we have So, the difference problem (2.8) has a unique solution , which is defined by formulas (2.19), (2.25), and (2.26). By using formulas (2.19), (2.25), (2.26), and the method of the monograph [23], we can get the stability estimate (2.23) for the solution of difference problem (2.8). The proofs of estimates (2.17), (2.18) are based on (2.5) and (2.23). Applying formula (2.5) and estimates (2.23), (2.17), we get estimate (2.16). Theorem 2.2 is proved.□ Theorem 2.3Assume thatAis a self-adjoint positive definite operator, and . Then, the solutions of difference problems (2.3) and (2.4) obey the following almost coercive inequality: Theorem 2.4Assume thatAis a self-adjoint positive definite operator, and ( ). Then, the solutions of difference problems (2.3) and (2.4) obey the following coercive inequality: The proofs of Theorems 2.3 and 2.4 are based on formulas (2.5), (2.19), (2.21), (2.22), (2.25), (2.26), Lemmas 2.1 and 2.2. 3 High order of accuracy difference schemes for the problem (1.2) and their well-posedness Now, we consider problem (1.2). The differential expression [22,23] defines a self-adjoint strongly positive definite operator acting on with the domain The discretization of problem (1.2) is carried out in two steps. 
In the first step, we define the grid spaces To the differential operator generated by problem (1.2) we assign the difference operator defined by the formula acting in the space of grid functions , satisfying the condition for all . To formulate our results, let and be spaces of the grid functions defined on , equipped with the norms Applying formula (2.5) to , we arrive for functions, at auxiliary nonlocal boundary value problem for a system of ordinary differential equations In the second step, auxiliary nonlocal problem (3.2) is replaced by the third order of accuracy difference scheme and by the fourth order of accuracy difference scheme Let τ and be sufficiently small positive numbers. Theorem 3.1The solutions of difference schemes (3.4) and (3.5) obey the following stability estimates: Theorem 3.2The solutions of difference schemes (3.4) and (3.5) obey the following almost coercive stability estimate: Theorem 3.3The solutions of difference schemes (3.4) and (3.5) obey the following coercive stability estimate: The proofs of Theorems 3.1-3.3 are based on the abstract Theorems 2.1-2.4, symmetry properties of the operator in and the following theorem on the coercivity inequality for the solution of the elliptic difference problem in . Theorem 3.4[24] For the solution of the elliptic difference problem the following coercivity inequality holds: 4 Numerical results In this section, by using the third and fourth order of the accuracy approximation, we obtain an approximate solution of the inverse problem for the elliptic equation. Note that and are the exact solutions of equation (4.1). For the approximate solution of the nonlocal boundary value problem (3.2), consider the set of grid points which depends on the small parameters and . 
Applying approximations (3.4) and (3.5), we get, respectively, the third order of the accuracy difference scheme and the fourth order of the accuracy difference scheme for the approximate solutions of the auxiliary nonlocal boundary value problem (3.2). Applying approximation (3.3) and the second order of the accuracy in x in the approximation of A, we get the following values of the p function in the grid points: In this step, applying to the boundary value problem for the function for the third and fourth order approximation in the variable t, we get, respectively, the third order of the accuracy difference and the fourth order of the accuracy difference scheme We can rewrite the difference scheme (4.2) in the following matrix form: Here, I is the identity matrix, is column matrix, A, B, C, D, E are square matrices. Moreover, For the solution of the linear matrix equation (4.7), we use the modified Gauss elimination method [25]. Namely, we seek a solution of equation (4.7) by the formula Here, and ( ) are square matrices, ( ) are column matrices which are defined by with , and are the zero matrix, , . and are defined by formulas We rewrite the difference scheme (4.5) in matrix form, Here, is an column matrix, A, B, D, E are defined by formulas (4.8) and (4.9). We use square matrices, and C is the following matrix: We can write the difference scheme (4.3) in matrix form (4.12), where A, B, D, E are defined by formulas (4.8) and (4.9), is defined by equation (4.10), C is defined by We have the difference scheme (4.6) in the matrix form of equation (4.12), where A, B, D, E are defined by formulas (4.8) and (4.9), is defined by formula (4.10), C is defined by equation (4.13), is defined by Now we give the results of the numerical analysis using MATLAB programs. The numerical solutions are recorded for different values of N, M; and represents the numerical solutions of these difference schemes at the grid points of , and represents the numerical solutions at . 
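The modified Gauss elimination method used above couples several diagonals of matrix blocks; as a simpler illustration of the same forward/backward recursion idea (and not the paper's exact scheme), the sketch below solves a block tridiagonal system C_j u_{j-1} + A_j u_j + B_j u_{j+1} = f_j with given end values, seeking u_j = alpha_{j+1} u_{j+1} + beta_{j+1}. All names and block sizes here are chosen for demonstration only:

```python
# Illustrative block elimination sketch (not the paper's five-block recursion).
import numpy as np

def block_tridiag_solve(C, A, B, f, u0, uM):
    """Solve C_j u_{j-1} + A_j u_j + B_j u_{j+1} = f_j, j = 1..M-1,
    with end values u_0 = u0 and u_M = uM given (Dirichlet-type)."""
    M = len(A) + 1                      # interior indices 1..M-1
    n = A[0].shape[0]
    alpha = [np.zeros((n, n))]          # alpha_1 = 0
    beta = [np.asarray(u0, float)]      # beta_1 = u0, so u_0 = alpha_1 u_1 + beta_1
    for j in range(M - 1):              # forward sweep
        denom = A[j] + C[j] @ alpha[j]
        alpha.append(-np.linalg.solve(denom, B[j]))
        beta.append(np.linalg.solve(denom, f[j] - C[j] @ beta[j]))
    u = [None] * (M + 1)
    u[0], u[M] = np.asarray(u0, float), np.asarray(uM, float)
    for j in range(M - 1, 0, -1):       # backward sweep
        u[j] = alpha[j] @ u[j + 1] + beta[j]
    return u

# small random, diagonally dominant demo system
rng = np.random.default_rng(0)
n, M = 2, 6
A = [np.eye(n) * 4 + rng.normal(size=(n, n)) * 0.1 for _ in range(M - 1)]
B = [rng.normal(size=(n, n)) * 0.1 for _ in range(M - 1)]
C = [rng.normal(size=(n, n)) * 0.1 for _ in range(M - 1)]
f = [rng.normal(size=n) for _ in range(M - 1)]
u0, uM = rng.normal(size=n), rng.normal(size=n)
u = block_tridiag_solve(C, A, B, f, u0, uM)
```

Substituting the ansatz u_{j-1} = alpha_j u_j + beta_j into equation j gives the recursions alpha_{j+1} = -(A_j + C_j alpha_j)^{-1} B_j and beta_{j+1} = (A_j + C_j alpha_j)^{-1} (f_j - C_j beta_j), which is the structure of the recursions sketched in the paper.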
For comparison with the exact solutions, the errors are computed by Tables 1-3 are constructed for , , , . Hence, the third order and fourth order of the accuracy difference schemes are more accurate than the second order of the accuracy difference schemes (ADS). Table 1 gives the error between the exact solution and solutions derived by difference schemes for the nonlocal problem. Table 2 includes the error between the exact p solution and approximate p derived by the difference schemes. Table 3 gives the error between the exact u solution and solutions derived by the difference schemes.

5 Conclusion

In this paper, the overdetermination problem for an elliptic differential equation with Dirichlet boundary condition is considered. The third and fourth orders of accuracy difference schemes for approximate solutions of this problem are presented. Theorems on the stability, almost coercive stability, and coercive stability estimates for the solutions of difference schemes for the elliptic equation are proved. As a result of the application of established abstract theorems, we get well-posedness of high order difference schemes of the inverse problem for a multidimensional elliptic equation. Numerical experiments are given. As can be seen from Tables 1-3, the third and fourth orders of the accuracy difference schemes are more accurate than the second order of the accuracy difference scheme.

The author would like to thank Prof. Allaberen Ashyralyev (Fatih University, Turkey) for his helpful suggestions in improving the quality of this paper.
What Are The Forces In The Struts Supporting The Loaded Point Chapter 7 : The maximum shear stress occurs at the points along neutral axis since Q is maximum and determine the shear force resisted by the web of the beam. V = 20 kN,. 474 shear force V that the strut can support if the allowable shear stress for the subjected to a load P that is just l statics ch5 : Determine the force along the pinconnected knee strut BC (short . If it is intended to support a maximum load of 400 lb placed at point G2, Two and Three : If an element has pins or hinge supports at both ends and carries no load in between, Some common examples of twoforce members are columns, struts, hangers, The line of action of the force at point C must also pass through point B EGR 181 Homework Solutions : supported by the pin at A for the loaded bracket. The position vector of the force at B from point A can be Noting that the force in the strut is. Truss examples : Vertical component of reaction at support A VA = 4.29 kN 1. Vertical component The reader should complete this calculation to determine the member forces as indicated in Figure 3.18. system and apply a unit load only in a vertical dir Cap07 Solutions Mechanics of Materials Hibbeler 5th : Determine the shear stress at point B on the web of the cantilevered strut at section aa. . Determine the largest end forces Pthat the member can support if the . The beam has a rectangular cross section and is subjected to a load P that Sample Test : 3 0 Points Possible (10 pts per page) NAME. NOTES: Calculator and one The hood of the auto is supported by the strut AB, which exerts a force of F = 24 lb on the hood. The distributed loading shown at the right can be replaced by a . Verification and Implementation of Strutand : The NCHRP is supported by annual voluntary contributions from the intersection points of struts and ties are called nodes. 
With the forces in each strut and tie determined from basic statics, the resulting stresses within compress force vectors : support reactions, primary tensile and compressive force in structural members. Simply supported beams: distribution of shear force and bending moment for a loaded beam e.g. without overhang, beam with overhang and point of contraflexure. . stress and strain : Structural members: struts and ties direct stress and strain, dimensional changes The diagrams show the way that point loads and uniform loads . The force. ( 1.407 MN). 3. A circular metal column is to support a load of 500 Tonne and it Evaluation of Load Transfer and Strut Strength of Deep Beams with : occurs indirectly from load point into support through two or more struts that form strut forces at the bottom nodes, and compression stresses near the top of the 159 3 45 Determine the tension in the cables in order to support : the three struts needed to support the 500kg block. 0.75 m. 1.25 m. 3 m from point O. 12 ft. 9 ft this loading has an equivalent resultant force that is equal. Development of strut and tie models for simply supported deep : Depending on the nature of forces, nodes can be classified as CCC, CCT, CTT Simply supported beam subject to a central point load = 0.6 L &lt 1 In Figure 5, Dome structural analysis basics Geo : Force a single point of load, we wont get into multiple load or loads spread over Below we have a simple triangle frame held on two supports with a force of 50 A geodesic dome on the other hand has timber struts of a similar length and 2 : o ¤ θ ¤ 90o. ) for strut AB so that force developed along strut AD for equilibrium of the 400lb crate. 3 60. its end as shown, determine the moment of this force about point C. 4 53. cable BC needed to support the 500lb load. 
Neglect Mechanics of Materials Second Edition : The tensile forces supporting the weight of the Mackinaw bridge (Figure 4.1a) Connecting rods in an engine, struts in aircraft engine mounts, members of a The relative displacement of point B with respect to A is 0.05 mm, from which we can find the axial strain. The axial load is applied such that there is no bending. 4 5 Procedures for Diaphragms : seismic forces to vertical lateral force resisting elements. They also provide lateral support for walls and pats. . Tier 2 Evaluation Procedure:The load path around developed at the points marked X, these will be the diaphragm force transfer is the collector, or drag strut. In Figure 431, a member is added to collect. Methodology for Connecting Strut Loads Calculations during : This is done by appropriately selecting load sets, singlepointconstraint set and multipoint grid in later subcase will force the deformation of the grid to zero. case control request and the supports at locations 3 and 4 are removed. 9 PHYSICAL STRUCTURES : support a load, so a structure can be thought of as an assembly of materials so . collapse of the framework, then the rod is in thrust, a strut. tie strut. P. T. R. Q Two forces, in newtons, act at the point O as shown in the diagram opposite. Find Strength of material : thickness of each yoke that will support a load P = 14 kips without exceeding a shearing stress of 209 if the points of application of the 6000lb and the 4000 lb forces are Calculate the stress in each bar and the force in the strut AE when. Posted on October 3, 2013 by Prijom Man in posts What Are The Forces In The Struts Supporting The Loaded Point
Differential equation--Moth problem

January 13th 2012, 06:12 PM  #1

One theory about the behaviour of moths states that they navigate at night by keeping a fixed angle between their velocity vector and the direction of the Moon [or some bright star]. A certain moth flies near to a candle and mistakes it for the Moon. What will happen to the moth?

Hints: in polar coordinates (r, θ), the formula for the angle ω between the radius vector and the velocity vector is given by tan ω = r dθ/dr. Use the formula to solve for r as a function of θ.

I do not know how to set up the necessary equations. I tried sketching a diagram, but I cannot seem to find a relation between ω and θ necessary to solve the formula given. Is my diagram correct? Thank you for your help.

January 14th 2012, 02:42 AM  #2 (CaptainBlack)

Quote (post #1). First take the position of the candle as the origin.

January 15th 2012, 05:14 AM  #3 (chisigma)

What CB suggested is excellent!... to proceed, we suppose that the speed of the moth is in modulus a constant v, so that only its direction can be changed, and we indicate with $\alpha$ the 'fixed angle' defined in the original post. Setting the position of the moth as a complex number...

$z(t)= x(t)+ i\ y(t) = r(t)\ e^{i \theta(t)}$ (1)

... the complex equation describing the flight of the moth is...

$z^{'}= (r^{'} + i\ r\ \theta^{'})\ e^{i\ \theta}= v\ e^{i\ (\theta- \frac{\pi}{2} + \alpha)} \rightarrow -r\ \theta^{'} +i\ r^{'} = v\ e^{i\ \alpha}$ (2)

The (2) is equivalent to a system of two DEs in two variables...

$r^{2}\ \theta^{'\ 2} + r^{'\ 2}= v^{2}$
$\frac{r^{'}}{r\ \theta^{'}}= - \tan \alpha$ (3)

... and a successive post is dedicated to investigating the solution of (3)...

Kind regards

January 15th 2012, 08:20 AM  #4 (chisigma)

For simplicity's sake we set v=1, so that the system of DEs becomes...

$r\ \theta^{'}=- \cos \alpha$
$r^{'}=\sin \alpha$ (1)

Of course the solution of the second DE is immediate...

$r(t)= t\ \sin \alpha + c_{1}$ (2)

... and that means that...

a) for $0< \alpha< \frac{\pi}{2}$ the distance of the moth from the candle will increase without limit...
b) for $\alpha=0$ the distance of the moth from the candle remains constant and the motion is 'uniform circular'...
c) for $-\frac {\pi}{2}<\alpha<0$ the distance of the moth from the candle vanishes at the time $- \frac{c_{1}}{\sin \alpha}$ and the moth is 'kaput'...

Now if we substitute (2) in the first DE we obtain...

$\theta^{'}= - \frac{\cos \alpha}{t\ \sin \alpha + c_{1}}$ (3)

... and the solution is...

$\theta(t)= \frac{1}{\tan \alpha}\ \ln \frac{1}{t\ \sin \alpha\ + c_{1}} + c_{2}$ (4)

Kind regards

January 15th 2012, 10:04 AM  #5 (CaptainBlack)

Being in the trade I know that we do not normally go for an explicit solution, but observe that the range decreases at a constant rate, and that the angle rate goes to infinity as range goes to zero (as is obvious from chisigma's equations). Also, since such systems are usually (body) rate limited, you can calculate the range at which the lead-pursuit model breaks down (when you should end up circling the flame).

January 15th 2012, 12:06 PM  #6 (chisigma)

Quote (post #5). I suppose you intend case c), when $- \frac{\pi}{2}<\alpha<0$. If the hypothesis of speed constant in modulus is true, then in such a situation the decreasing of the distance of the moth from the flame produces a great increase of the angular speed, so that it is not surprising that...

$\lim_{t \rightarrow t_{0}} \theta (t)= - \infty\ ,\ t_{0}= - \frac{c_{1}}{\sin \alpha}$

Of course a different hypothesis about the speed produces a different equation and a different scenario...

Kind regards

January 15th 2012, 07:44 PM  #7 (CaptainBlack)

Quote (post #6). I think either you are using a different convention from me about which angle is $\alpha$ or you have the trig functions reversed. I think $\dot{r}=-\cos(\alpha)$, where $\alpha$ is the angle from the moth's velocity vector to the line-of-sight to the candle (when $\alpha$ is zero the moth flies directly towards the candle). The cases with $-\frac{\pi}{2}<\alpha<\frac{\pi}{2}$ are all closing, and changing the sign of $\alpha$ gives the mirror image trajectory. The remaining cases are all opening or constant range.

Now, because we are talking about moths, I have assumed we are only interested in closing trajectories, as otherwise we would not be aware of the behavior.
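CaptainBlack's observation — the range closes at a constant rate cos(α) while the angular rate blows up near the flame — can be checked with a short simulation. This sketch is mine, not from the thread; it uses his convention (α measured from the line of sight, unit speed), forward Euler integration, and illustrative names:

```python
# Simulate a unit-speed moth holding a fixed angle alpha between its velocity
# and the line of sight to a candle at the origin (alpha = 0: fly straight in).
import math

def fly(alpha, x=10.0, y=0.0, dt=1e-4, steps=50_000):
    for _ in range(steps):
        r = math.hypot(x, y)
        tx, ty = -x / r, -y / r          # unit vector toward the candle
        # rotate it by alpha to get the velocity direction
        vx = tx * math.cos(alpha) - ty * math.sin(alpha)
        vy = tx * math.sin(alpha) + ty * math.cos(alpha)
        x, y = x + vx * dt, y + vy * dt
    return math.hypot(x, y)             # final range

alpha = math.radians(60)
r_end = fly(alpha)
elapsed = 1e-4 * 50_000                  # 5 time units
closing_rate = (10.0 - r_end) / elapsed
print(round(closing_rate, 3))            # ≈ cos(60°) = 0.5
```

Analytically, dr/dt = (position · velocity)/r = -cos(α) at every point, so the constant closing rate holds regardless of where the moth starts, matching the logarithmic spiral chisigma derived (with his sign convention, r(t) = t sin α + c₁).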
convert 550 cc to cup

You asked: convert 550 cc to cup
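The page does not show the answer itself, so as a quick sketch using the standard definitions (1 cc = 1 mL; 1 US customary cup = 236.5882365 mL; 1 metric cup = 250 mL):

```python
# cc (cubic centimetres) to cups; 1 cc = 1 mL by definition.
ML_PER_US_CUP = 236.5882365   # US customary cup, defined in mL
ML_PER_METRIC_CUP = 250.0     # metric cup

cc = 550
print(round(cc / ML_PER_US_CUP, 2))      # ≈ 2.32 US cups
print(round(cc / ML_PER_METRIC_CUP, 2))  # 2.2 metric cups
```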
Euler's Formula

Date: 01/27/98 at 21:33:23
From: Anonymous
Subject: Trig id's; Euler's equation

I have been using trig for some time now, but the identities are still one of my weaknesses. I read the answer to a related question you posted. A few years ago I saw an electrical engineer use a technique that involved a combination of Euler's equation and the unit circle. He could quickly derive whatever identity he needed from this simple set-up. I took notes at that time but only was shown two examples and never understood exactly how to do it. Do you know what I am referring to? If so, please explain it to me.

Date: 01/31/98 at 14:24:10
From: Doctor Luis
Subject: Re: Trig id's; Euler's equation

If by the trigonometric identities you mean the addition formulas and the like, I can give you a few hints. Of course, you know that Euler's formula is

    exp(i*t) = cos(t) + i*sin(t)    where exp(w) = e^w

Now, interesting things happen when you start doing regular operations with the exponential function and then applying the identity:

    exp(i*a)*exp(i*b) = exp(i*(a+b))    (exponent law)

Now, substitute the identities,

    exp(i*(a+b)) = cos(a+b) + i*sin(a+b)
                 = (cos(a)+i*sin(a))*(cos(b)+i*sin(b))    (multiply these two complex no.)
                 = (cos(a)*cos(b)-sin(a)*sin(b)) + i*(cos(a)*sin(b)+sin(a)*cos(b))

so

    cos(a+b)+i*sin(a+b) = (cos(a)*cos(b)-sin(a)*sin(b)) + i*(cos(a)*sin(b)+sin(a)*cos(b))

Since two complex numbers are equal only when their real parts are equal to each other, and their imaginary parts as well, that means

    (real part)      cos(a+b) = cos(a)*cos(b) - sin(a)*sin(b)

and also,

    (imaginary part) sin(a+b) = cos(a)*sin(b) + sin(a)*cos(b)

Notice that all I needed to remember was Euler's identity, and how to multiply complex numbers. Need an expression for cos(3x)? No problem:

    exp(i*3t) = (exp(i*t))^3
    cos(3t) + i*sin(3t) = (cos(t)+i*sin(t))^3

With a little bit of algebra, you can see that

    (x+y)^3 = x^3 + 3(x^2)y + 3x(y^2) + y^3

And so,

    cos(3t)+i*sin(3t) = (cos(t)+i*sin(t))^3
                      = (cos(t))^3 + 3((cos(t))^2)*(i*sin(t)) + 3(cos(t))*(i*sin(t))^2 + (i*sin(t))^3
                      = cos^3(t) + (3cos^2(t)sin(t))*i - 3cos(t)sin^2(t) - i*sin^3(t)

Taking the real part of both sides, we find

    cos(3t) = cos^3(t) - 3cos(t)sin^2(t)

We obtain a similar identity for sin(3t) by taking the imaginary part,

    sin(3t) = 3cos^2(t)sin(t) - sin^3(t)

If you know the binomial theorem you can quickly come up with an identity for cos(n*x).

-Doctor Luis, The Math Forum
Check out our web site!  http://mathforum.org/dr.math/
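The identities derived above are easy to sanity-check numerically. This sketch is mine (not Dr. Luis's); it compares each side at arbitrary angles using Python's complex exponential:

```python
# Numeric check of the identities derived from exp(i*t) = cos(t) + i*sin(t).
import cmath
import math

a, b, t = 0.7, 1.3, 0.4

# addition formulas: real/imaginary parts of exp(i*(a+b))
assert math.isclose(math.cos(a + b),
                    math.cos(a)*math.cos(b) - math.sin(a)*math.sin(b))
assert math.isclose(math.sin(a + b),
                    math.cos(a)*math.sin(b) + math.sin(a)*math.cos(b))

# triple-angle formulas: real/imaginary parts of (exp(i*t))**3
z3 = cmath.exp(1j * t) ** 3
assert math.isclose(z3.real, math.cos(t)**3 - 3*math.cos(t)*math.sin(t)**2)
assert math.isclose(z3.imag, 3*math.cos(t)**2*math.sin(t) - math.sin(t)**3)

print("all identities hold")
```

The same pattern extends to cos(n*t): expand (cos(t) + i*sin(t))**n with the binomial theorem and take the real part, exactly as the answer suggests.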
Posts by Kaela (Total # Posts: 11)

For a geometric series, S4/S8 = 1/17, determine the first three terms of the series.

If 22.15 milliliters of 0.100 molarity sulfuric acid is required to neutralize 10.0 milliliters of lithium hydroxide solution, what is the molar concentration of the base?

Mathematics, Math
During a period of acute shortage of raw materials, the prices of some manufactured goods were raised by 25%. After the shortage was over, prices were lowered by 15%. If the original price of an article was $56, find its price after the shortage was over. Would it have been ch...

d is supposed to be internal

Personal Project :)
Hey I'm Kaela, 9th grade, and we're required to do a mini personal project. I can't decide on what to do :/ Anyone help? I'm interested in models, runways, model photo shoots, etc. :)

9th grade
Use the following info. At sea level the speed of sound in air is linearly related to air temperature: at 35 degrees Celsius sound travels at a rate of 352 meters per second; at 15 degrees Celsius sound travels at 340 meters per second. How would I write a linear equation that models speed o...

The LCM of two numbers is 60. One of the numbers is 20. The other number is even and has only two prime factors. What is the other number, and how did you get the answer?

What is the role of producers in an ecosystem?

2- The earth's atmospheric pressure p is often modeled by assuming that the rate dp/dh at which p changes with the altitude h above sea level is proportional to p. Suppose that the pressure at sea level is 1013 millibars (about 14.7 pounds per square inch) and that the pr...

nucleus true facts
I need facts on the nucleus.
Displaying a set of patterns

Hi all, I need to write pseudocode for these two patterns. I did execute the program in C++, and it displays them, but I am unable to display the rows where there are the same number of asterisks (I highlighted them in red). I am unable to make it work where the same number of asterisks is repeated on two rows.

Here are my pseudocodes.

1. For the ascending pattern:

declare variables x, y, a as integers
read a
for x = 0 to a
    increment x by 1
    for y = 0 to x
        increment y by 1
        display "*"
    end for
end for

2. For the descending pattern:

declare variables x, y, a as integers
read a
for x = 0 to a-1
    decrement x by 1
    for y = 0 to x
        increment y by 1
        display "*"
    end for
end for

Please, I would like some suggestions on what I posted. I am in a hurry to fix it. :cool:

You want maybe (I am not sure if this is what you mean):

declare variables x, y, a, temp as integers
read a
for x = 0 to a
    increment x by 1
    if (x == 3 || x == 3)
        temp = x - 1
    else
        temp = x
    for y = 0 to temp
        increment y by 1
        display "*"
    end for
end for

Wow. How many times must we see this same question?
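For readers following the pseudocode above, here is the same logic as a runnable sketch (in Python rather than C++, and using ordinary loop semantics instead of the explicit increments; this is an illustration, not the poster's program):

```python
def ascending(a):
    """Rows of 1, 2, ..., a asterisks."""
    return "\n".join("*" * row for row in range(1, a + 1))

def descending(a):
    """Rows of a, a-1, ..., 1 asterisks."""
    return "\n".join("*" * row for row in range(a, 0, -1))

print(ascending(4))
print(descending(3))
```

Repeating a row (the highlighted case in the question) only needs the row width decoupled from the loop counter, which is what the reply's temp variable is doing.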
Inductor Capacitance and Inductance Estimation

To be able to estimate the parasitic capacitance effects on an inductor you need to make (at least) two measurements, because with one measurement we can determine only one parameter. The simplest way to do this is to resonate the inductor with two different capacitors in parallel and measure the two resulting resonance frequencies; for this purpose, a grid-dip meter or a simple oscillator can be used. With the two measured resonant frequencies and the two known capacitance values we can estimate both the true inductance and the distributed capacitance.

Note that, to determine the inductance, only the difference between the two values of capacitance, and not their absolute values, needs to be known. That means that you can first resonate the inductor at a convenient frequency with any capacitor you find at hand, and then shift the resonance by adding a low-tolerance capacitor in parallel to get a good estimate. If you know both capacitance values precisely (that means knowing also the parasitic effects from the fixture, oscillator, wiring, etc.) you can also obtain an estimate of the inductor parallel capacitance.

The model of an inductor as an ideal inductance in parallel with a capacitance is valid only at low frequencies. A complete model will have an infinite number of parallel LC resonators connected in series.[1]

[1] B.A. Anicin, D.M. Davidovic, P. Karanovic, V.M. Miljevic, V. Radojevic, "Circuit properties of coils," IEE Proceedings, vol. 144, no. 5, Sep. 1997.
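As a sketch of the arithmetic, assuming the simple parallel-resonance model f = 1/(2*pi*sqrt(L*(C + Cd))), with Cd the distributed capacitance (the function and variable names here are illustrative, not from the original page):

```python
import math

def estimate_L_Cd(C1, f1, C2, f2):
    """Estimate true inductance L and distributed capacitance Cd from two
    parallel-resonance measurements (C1, f1) and (C2, f2), using
    1/f^2 = (2*pi)^2 * L * (C + Cd)."""
    w2 = (2 * math.pi) ** 2
    # Subtracting the two equations eliminates Cd, so only C1 - C2 matters:
    L = (1 / f1 ** 2 - 1 / f2 ** 2) / (w2 * (C1 - C2))
    # Back-substitute one measurement to recover Cd:
    Cd = 1 / (w2 * f1 ** 2 * L) - C1
    return L, Cd

# Round-trip check with assumed values: L = 10 uH, Cd = 5 pF
L_true, Cd_true = 10e-6, 5e-12
f_res = lambda C: 1 / (2 * math.pi * math.sqrt(L_true * (C + Cd_true)))
L_est, Cd_est = estimate_L_Cd(100e-12, f_res(100e-12), 47e-12, f_res(47e-12))
print(L_est, Cd_est)  # recovers approximately 1e-05 and 5e-12
```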
Twaddell number

A hydrometer scale usually used for liquids denser than water, 19th–20th century, mostly used in England, for example in the leather industry to check tanning solutions, and for sulfuric acid and milk. Abbr.: °Tw.

To convert a Twaddell number to a scale in which the specific gravity of water = 1, multiply by 0.005, then add 1. For example, 20 °Tw is equivalent to a specific gravity of 1.100.

To convert specific gravity to a Twaddell number, subtract 1 and divide by 0.005 (that is, multiply by 200). For example, a specific gravity of 1.100 corresponds to 20 °Tw.

In the Twaddell hydrometer, which was used in this country [the United Kingdom] long before the introduction of Fleischer's densimeter, and which is still in general use for technical work, the degrees are half of the above value. Each degree corresponds to 0.005 sp. gr., prefixed by unity. Thus, 7° Tw. = 1.035; 20° Tw. = 1.100; 100° Tw. = 1.500 sp. gr., etc. No table is therefore required for the conversion of degrees Twaddell into ordinary specific gravities, and the value of a degree Twaddell is quite definite. It is remarkable that this rational and practical hydrometer, the scale of which is usually distributed over six spindles, should be almost universally used in this country, which is so very conservative and unpractical with regard to weights and measures, whilst the continental nations, which have all adopted the metric system of weights and measures (with the exception of Russia), have not yet adopted a rational hydrometer.

George Lunge, editor of the German edition. Technical Methods of Chemical Analysis. Charles Alexander Keane, editor of the English translation. New York: D. Van Nostrand, 1908.
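The two conversions can be written out directly (a trivial sketch):

```python
def twaddell_to_sg(tw):
    """Degrees Twaddell -> specific gravity (water = 1)."""
    return 1 + tw / 200

def sg_to_twaddell(sg):
    """Specific gravity -> degrees Twaddell."""
    return (sg - 1) * 200

print(twaddell_to_sg(20))   # 1.1
print(sg_to_twaddell(1.5))  # 100.0
```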
[Numpy-discussion] Is there a way to reset an accumulate function?

Cera, Tim  tim@cerazone....
Tue Oct 23 19:04:07 CDT 2012

> How about this hackish solution, for a quick non-looping fix?
>
> In [39]: a = np.array([1,2,3,4,np.nan,1,2,3,np.nan,3])
>          idx = np.flatnonzero(np.isnan(a))
>          a_ = a.copy()
>          a_[idx] = 0
>          np.add.reduceat(a_, np.hstack((0,idx)))
> Out[39]: array([ 10.,   6.,   3.])

Close, but not exactly what I need. I want the 'cumsum', so given the 'a' in your example:

I just made a loop, testing for 'nan'. Not elegant, but it works, so I am not complaining.

Kindest regards,
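The looping approach mentioned at the end can also be done without an explicit loop. Here is one vectorized sketch (mine, not from the thread; it assumes nonnegative entries, so the running total is nondecreasing):

```python
import numpy as np

a = np.array([1, 2, 3, 4, np.nan, 1, 2, 3, np.nan, 3])

mask = np.isnan(a)
cs = np.where(mask, 0, a).cumsum()  # plain cumsum, NaNs counted as 0
# Remember the running total at each NaN, carry it forward, and subtract,
# so the cumulative sum restarts after every NaN:
offset = np.maximum.accumulate(np.where(mask, cs, 0))
out = cs - offset
print(out)  # segments 1 3 6 10 | 1 3 6 | 3, with 0 at the NaN slots
```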
Appearing Square

Copyright © University of Cambridge. All rights reserved. 'Appearing Square' printed from http://nrich.maths.org/

Make an eight by eight square; the layout is the same as a chessboard. You can print out and use the square below.

What is the area of the square?

Divide the square in the way shown by the red dashed lines. Cut along the red lines. Rearrange the four pieces to make a rectangle that has one side of five squares.

What is the size of the other side? What is the area of the rectangle you have constructed?

Is there a difference between the two areas that you found? Can you explain your results?
Calculating Armour Effectiveness - Newcomers' Forum

Yeah, first of all, I'm not really a newcomer, I know, but the answer to this question might help some newcomers understand how to calculate armour effectiveness.

I was trying to calculate the T110's upper glacis plate, which is 254 mm thick, at a 30° angle (from the horizontal), I believe. So I typed into my calculator "254 (nominal armour thickness) / sin(30°)".

The result was: -257.07 mm armour thickness - dafuq?

So I tried again with sin(60°). The result was: -833.04 mm armour thickness - dafuq? x2

SO, how do I calculate the effective armour?
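The puzzling negative numbers are a calculator-mode issue: sin(30) was being evaluated in radians, not degrees. A quick sketch (my explanation, not from the thread):

```python
import math

t = 254.0  # nominal thickness, mm

# A calculator left in radians mode reproduces the puzzling result:
print(t / math.sin(30))  # about -257.08, i.e. the poster's -257.07

# In degrees, a plate sloped 30 degrees from the horizontal presents a
# line-of-sight thickness of t / sin(30 deg):
print(t / math.sin(math.radians(30)))  # about 508 mm effective
```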
Standard Deviation

You are exceeding array bounds on line 9. It should be <, not <=, like you have on line 3. Hard to say what's wrong otherwise.

There are 4 variables being used which aren't declared in the function: sum, sum2, avg and sD. I don't know their types (which could affect accuracy and how operations are carried out) or the initial values of sum or sum2. Did you assign sum = 0 and sum2 = 0 before calling the function?

Your standard deviation function is incomplete and contains errors. It should not compile: sum, avg, sum2 and sD are never declared and initialized.

Before the first for loop you need: double sum = 0.0;
On line 7, put double before avg: double avg = sum / n; You do not need float in front of n; the compiler will automatically promote n to a double since sum is a double.
Before the second for loop you need: double sum2 = 0.0;
On line 13, put double in front of sD: double sD = sqrt(sum2 / (n - 1));
You should get rid of the leading zeroes on 08 and 05, as the compiler will attempt to treat them as octal instead of decimal.
On line 14, since sD is a double you can eliminate the static_cast and just use: return sD;

Using the supplied data I obtained a standard deviation of 29.7642.

fun2code: I had <=, which should have been <. Thank you for helping me catch my error.

When I had <= n on line 9 the sD was 32... I didn't understand why until I realized it was the <= symbol I had inserted, which made a complete difference in the outcome.
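The corrected computation being described (accumulate the sum, take the mean, accumulate the squared deviations, divide by n - 1, take the square root) can be sketched as follows; the thread's actual data set is not shown, so a small example list is used instead:

```python
import math

def sample_std_dev(values):
    n = len(values)
    avg = sum(values) / n                        # mean
    sum2 = sum((v - avg) ** 2 for v in values)   # squared deviations
    return math.sqrt(sum2 / (n - 1))             # sample (n - 1) form

print(sample_std_dev([1, 2, 3, 4, 5]))  # about 1.5811
```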
Integral Notation

January 1st 2008, 07:58 PM  #1
Global Moderator
Nov 2005
New York City

One of the things that bothered me for a long time was the $\bold{d}x$ appearing at the end of the integral. What is it for? And why was it put there? People told me that it is there because we want to show what variable we are integrating with respect to, but that is clear even without it being there.

Behold! I have dreamt a dream and I have a revelation! Behold! It was Riemann, he told me the answer. Behold! It was the Riemann-Stieltjes Integral. I want to explain what it is; it will make the reason why we put a $\bold{d}x$ in the integral a lot clearer.

First let us begin with a simpler question: what is a Riemann integral? If you have taken a basic course in analysis you would know there are two ways to define it: the classical Riemann definition, which is also discussed a little in a Calculus course, and another one (which gives exactly the same thing) developed by Darboux. Since Riemann's definition is more elementary (but not as neat), let us use that definition.

Definition: Let $f$ be a bounded function on a closed interval $[a,b]$. We say that $f$ is integrable on this interval when there exists a real number $I$ such that: for any $\epsilon > 0$ there exists a $\delta >0$ so that for any partition $P= \{ a=x_0 < x_1< ... < x_{n-1}<x_n=b \}$ satisfying $\text{mesh}(P) = \max_{1\leq k\leq n} \ \{x_{k} - x_{k-1} \} < \delta$ we have that $\left| I - \sum_{k=1}^n f(t_k)(x_k-x_{k-1}) \right| < \epsilon$, where $t_k$ is any point chosen in the $[x_{k-1},x_k]$ subinterval.

Basically the definition says that we can make the finite sums (approximating areas) as close as we want to the true value, which we call $I$*, as long as the partition $P$ of the interval is fine (or thin) enough. And note how much freedom we have: it says for any partition, and there are infinitely many, and it says any point in each subinterval, and again there are infinitely many.
So there is a lot of freedom in these finite Riemann sums.

So we know that if $f(x) = x \mbox{ on }[0,1]$, then to show that $\int_0^1 f = \frac{1}{2}$ we need to show that the number $I = \frac{1}{2}$ satisfies the definition given above. Note, that is what the Fundamental Theorem of Calculus is doing: instead of going through all of that difficult definition, it says that if we can find the anti-derivative, it gives us the value $I$ that we are looking for.

If you think the Riemann integral definition is complicated, just look at the Riemann-Stieltjes integral definition. The Riemann-Stieltjes integral is more general: it is an integral with respect to another function. Before stating the definition there is just one technical detail.

Definition: A function $g:[a,b]\mapsto \mathbb{R}$ is of bounded variation when there exists a constant $M>0$ so that for any partition $P = \{ a=x_0<...<x_n = b\}$ we have that $\sum_{k=1}^n |g(x_k)-g(x_{k-1})| \leq M$.

Now we can state the definition (which might look monstrous at first).

Definition: Let $f$ be a bounded function on $[a,b]$ and let $g$ be of bounded variation on $[a,b]$. We say $f$ is Riemann-Stieltjes integrable with respect to $g$ when there exists a real number $I$ such that: for any $\epsilon > 0$ there exists $\delta > 0$ so that for any partition $P = \{a = x_0<x_1<...<x_n = b\}$ satisfying $\text{mesh}(P) < \delta$ we have that $\left| I - \sum_{k=1}^n f (t_k)[g(x_{k}) - g(x_{k-1})] \right| < \epsilon$, where the $t_k$ are any points in the $[x_{k-1},x_k]$ subinterval. We call this distinguished number $I$ the Riemann-Stieltjes integral of $f$ on $[a,b]$ with respect to $g$, and write $I = \int_a^b f \bold{d}g$.

Now why is this a generalization? Because if $g(x) = x$ then it is the standard Riemann integral! And that means that with respect to $x$ we would write $\int_a^b f \bold{d}x$. And that is where the $\bold{d}x$ comes from.
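The $f(x) = x$ example can be illustrated numerically: with a uniform partition and left-endpoint tags, the Riemann sums approach $I = \frac{1}{2}$ (a sketch, not part of the original post):

```python
def riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum of f on [a, b] over n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + k * h) * h for k in range(n))

for n in (10, 100, 1000):
    print(n, riemann_sum(lambda x: x, 0.0, 1.0, n))
# about 0.45, 0.495, 0.4995 -- approaching 1/2
```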
In fact it turns out that if $f$ is continuous and $g$ is smooth (continuously differentiable), then $g$ is of bounded variation and $\int_a^b f \bold{d}g = \int_a^b fg'$, where the RHS is the standard Riemann integral. So not only does this explain the $\bold{d}x$ part, it also explains the differential of a function. For example, $\int_0^\pi \sin x \, d(x^2+x) = \int_0^{\pi} \sin x \, (2x+1) \, dx$, by the Riemann-Stieltjes integral.

Maybe you find that interesting; that is why I posted it.

*) It can easily be shown that if $I_1,I_2$ are any possible real values for the Riemann integral then $I_1 = I_2$. Meaning there is only one such possible value $I$, and we define it to be the integral $\int_a^b f$.

January 3rd 2008, 09:18 AM  #2

If you would like to explore more of the topics you have introduced, there are two classics that are standards: INTRODUCTION TO THE THEORY OF INTEGRATION by T. H. Hildebrandt and THEORY OF THE INTEGRAL by Stanislaw Saks. I think that Hildebrandt is still the best there is on Riemann and Riemann-Stieltjes integrals. He also has a good discussion of the content of a set, which was the subject of another one of your postings. If you are interested in modern work in integration theory I would suggest Robert McLeod's book THE GENERALIZED RIEMANN INTEGRAL. It explores a relatively new definition of the integral that makes any derivative, F'(x), integrable on [a,b] to F(b) - F(a). That is not true of earlier attempts. That book is an MAA publication: Carus #20.

January 3rd 2008, 11:26 AM  #3
Global Moderator
Nov 2005
New York City

Thank you. I do like the theory of integration; it is one of my favorites in an analysis course, but since I am doing so much stuff it seems I have to wait until I have time to explore integration in more depth.

January 3rd 2008, 12:56 PM  #4

How is Stieltjes pronounced?
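The worked example from the first post can be checked numerically: a Riemann-Stieltjes sum for $\int_0^\pi \sin x \, d(x^2+x)$ should approach the value of the ordinary integral $\int_0^\pi \sin x\,(2x+1)\,dx = 2\pi + 2$ (a sketch, not part of the thread):

```python
import math

def rs_sum(f, g, a, b, n):
    """Riemann-Stieltjes sum of f with respect to g over a uniform
    partition of [a, b], with left-endpoint tags."""
    h = (b - a) / n
    total = 0.0
    for k in range(1, n + 1):
        x0, x1 = a + (k - 1) * h, a + k * h
        total += f(x0) * (g(x1) - g(x0))
    return total

approx = rs_sum(math.sin, lambda x: x * x + x, 0.0, math.pi, 100000)
print(approx, 2 * math.pi + 2)  # both approximately 8.283
```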
MA 460: Topics in Analysis Kurt Bryan, Spring 2011-12 What will this course be about? This course will take Real Analysis to the next level. Nonetheless, it has a much less "epsilon-delta" feel to it than Reals I. It is in many ways essential material for people going to grad school, and it also forms the theoretical backbone of most modern applied math in the physical sciences and engineering, especially numerical work and simulation. We'll start with some basic info on metric, normed, and inner product spaces; it's sort of like linear algebra, but with infinitely many dimensions. Then we'll cover Lebesgue integration, which is the modern approach to integration. We'll talk about "Hilbert" spaces and operators on Hilbert spaces, an extension of the idea of matrices or linear mappings in linear algebra. Finally, we'll take a look at application of this stuff to Quantum Mechanics! No physics background needed! This course will have a proof-writing component---not quite as intense as real analysis, but every assignment will have some proofs, as well as some more routine computations. The course work will consist of homework assignments (about one per week) and a couple of take home exams.
Circumscribed circle

In geometry, the circumscribed circle or circumcircle of a polygon is a circle which passes through all the vertices of the polygon. The center of this circle is called the circumcenter. A polygon which has a circumscribed circle is called a cyclic polygon. All regular simple polygons, all triangles and all rectangles are cyclic.

A related notion is that of a minimum bounding circle, which is the smallest circle that completely contains the polygon within it. Not every polygon has a circumscribed circle, as the vertices of a polygon need not all lie on a circle, but every polygon has a unique minimum bounding circle, which may be constructed by a linear-time algorithm. Even if a polygon has a circumscribed circle, it may not coincide with its minimum bounding circle; for example, for an obtuse triangle, the minimum bounding circle has the longest side as diameter and does not pass through the opposite vertex.

All triangles are cyclic, i.e. every triangle has a circumscribed circle. The circumcenter of a triangle can be found as the intersection of the three perpendicular bisectors. (A perpendicular bisector is a line that forms a right angle with one of the triangle's sides and intersects that side at its midpoint.) This is because the circumcenter is equidistant from all three vertices, and every point on the perpendicular bisector of a side is equidistant from that side's two endpoints.

In coastal navigation, a triangle's circumcircle is sometimes used as a way of obtaining a position line using a sextant when no compass is available. The horizontal angle between two landmarks defines the circumcircle upon which the observer lies.
The circumcenter's position depends on the type of triangle:

• If and only if a triangle is acute (all angles smaller than a right angle), the circumcenter lies inside the triangle.
• If and only if it is obtuse (has one angle bigger than a right angle), the circumcenter lies outside.
• If and only if it is a right triangle, the circumcenter lies on one of its sides (namely, the hypotenuse). This is one form of Thales' theorem.

The diameter of the circumcircle can be computed as the length of any side of the triangle divided by the sine of the opposite angle. (As a consequence of the law of sines, it does not matter which side is taken: the result will be the same.) The triangle's nine-point circle has half the diameter of the circumcircle.

The diameter of the circumcircle of the triangle ΔABC is

$D = \frac{abc}{2\cdot\text{area}} = \frac{|AB| |BC| |CA|}{2|\Delta ABC|}= \frac{abc}{2\sqrt{s(s-a)(s-b)(s-c)}}= \frac{2abc}{\sqrt{(a+b+c)(-a+b+c)(a-b+c)(a+b-c)}}= \frac{abc}{\sqrt{\frac{(a^2+b^2+c^2)^2}{4}-\frac{(a^4+b^4+c^4)}{2}}}$

where a, b, c are the lengths of the sides of the triangle and $s= \frac{a + b + c}{2}$ is the semiperimeter. The radical in the second denominator above is the area of the triangle, by Heron's formula.

In any given triangle, the circumcenter is always collinear with the centroid and orthocenter. The line that passes through all of them is known as the Euler line. The isogonal conjugate of the circumcenter is the orthocenter.

The useful minimum bounding circle of three points is defined either by the circumcircle (where three points are on the minimum bounding circle) or by the two points of the longest side of the triangle (where the two points define a diameter of the circle). It is common to confuse the minimum bounding circle with the circumcircle.

The circumcircle of three collinear points is the line on which the 3 points lie, often referred to as a circle of infinite radius.
Nearly collinear points often lead to numerical instability in computation of the circumcircle. Circumcircles of triangles have an intimate relationship with the Delaunay triangulation of a set of points.

In terms of the side lengths a, b, c and the triangle's area A, the radius is

$R= \frac{abc}{4A}= \frac{abc}{\sqrt{(a^2+b^2+c^2)^2-2(a^4+b^4+c^4)}}$

For a right triangle with legs a, b and hypotenuse c:

$R= \frac{\sqrt{a^2+b^2}}{2}= \frac{c}{2}$

The area of the circumcircle is

$\pi R^2 = \frac{a^2b^2c^2}{16A^2}\pi= \frac{a^2b^2c^2}{(a^2+b^2+c^2)^2-2(a^4+b^4+c^4)}\pi$

and its circumference is

$2\pi R = \frac{abc}{2A}\pi= \frac{abc}{\sqrt{\frac{(a^2+b^2+c^2)^2}{4}-\frac{(a^4+b^4+c^4)}{2}}}\pi$
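The side-length formula $R = \frac{abc}{4A}$ can be sketched directly, with the area supplied by Heron's formula (the helper function is illustrative; the 3-4-5 right triangle checks it, since a right triangle's circumradius is half the hypotenuse):

```python
import math

def circumradius(a, b, c):
    """R = abc / (4A), with A from Heron's formula."""
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    return a * b * c / (4 * area)

print(circumradius(3, 4, 5))  # 2.5, half the hypotenuse
```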
The circumcircle is then the locus of points v = (v[x],v[y]) in the Cartesian plane satisfying the equations $|\mathbf{v}-\mathbf{u}|^2 - r^2 = 0$ $|\mathbf{A}-\mathbf{u}|^2 - r^2 = 0$ $|\mathbf{B}-\mathbf{u}|^2 - r^2 = 0$ $|\mathbf{C}-\mathbf{u}|^2 - r^2 = 0$ guaranteeing that the points A, B, v are all the same distance r^2 from the common center u of the circle. Using the polarization identity, these equations reduce to a the condition that the matrix $\begin{vmatrix} |\mathbf{v}|^2 & -2v_x & -2v_y & -1 \\ |\mathbf{A}|^2 & -2A_x & -2A_y & -1 \\ |\mathbf{B}|^2 & -2B_x & -2B_y & -1 \\ |\mathbf{C}|^2 & -2C_x & -2C_y & -1 \end{vmatrix}$ have a nonzero kernel. Thus the circumcircle may alternatively be described as the locus of zeros of the determinant of this matrix: $\det\begin{vmatrix} |\mathbf{v}|^2 & v_x & v_y & 1 \\ |\mathbf{A}|^2 & A_x & A_y & 1 \\ |\mathbf{B}|^2 & B_x & B_y & 1 \\ |\mathbf{C}|^2 & C_x & C_y & 1 \end{vmatrix}=0$ Expanding by cofactor expansion, let $\quad S_x=\frac{1}{2}\det\begin{vmatrix} |\mathbf{A}|^2 & A_y & 1 \\ |\mathbf{B}|^2 & B_y & 1 \\ |\mathbf{C}|^2 & C_y & 1 \end{vmatrix},\quad S_y=\frac{1}{2}\det\begin{vmatrix} A_x & |\mathbf{A} |^2 & 1 \\ B_x & |\mathbf{B}|^2 & 1 \\ C_x & |\mathbf{C}|^2 & 1 \end{vmatrix},$ $a=\det\begin{vmatrix} A_x & A_y & 1 \\ B_x & B_y & 1 \\ C_x & C_y & 1 \end{vmatrix},\quad b=\det\begin{vmatrix} A_x & A_y & |\mathbf{A}|^2 \\ B_x & B_y & |\mathbf{B}|^2 \\ C_x & C_y & |\mathbf {C}|^2 \end{vmatrix}$ we then have a|v|^2 − 2Sv − b = 0 and, assuming the three points were not in a line (otherwise the circumcircle is that line that can also be seen as a generalized circle with S at infinity), |v − S/ a|^2 = b/a + |S|^2/a^2, giving the circumcenter S/a and the circumradius √ (b/a + |S|^2/a^2). A similar approach allows one to deduce the equation of the circumsphere of a tetrahedron. An equation for the circumcircle in trilinear coordinates x : y : z is a/x + b/y + c/z = 0. 
An equation for the circumcircle in barycentric coordinates x : y : z is 1/x + 1/y + 1/z = 0. The isogonal conjugate of the circumcircle is the line at infinity, given in trilinear coordinates by ax + by + cz = 0 and in barycentric coordinates by x + y + z = 0. Coordinates of circumcenter Cartesian coordinates The Cartesian coordinates of the circumcenter are $\frac{B_yA_x^2 - C_yA_x^2 - B_y^2A_y + C_y^2A_y + B_x^2C_y + A_y^2B_y + C_x^2A_y - C_y^2B_y - C_x^2B_y - B_x^2A_y + B_y^2C_y -A_y^2C_y}{D},$ $( A_x^2C_x + A_y^2C_x + B_x^2A_x - B_x^2C_x + B_y^2A_x - B_y^2C_x - A_x^2B_x - A_y^2B_x - C_x^2A_x + C_x^2B_x - C_y^2A_x + C_y^2B_x) / D)$ $D = 2( A_yC_x + B_yA_x - B_yC_x - A_yB_x -C_yA_x + C_yB_x ).\,$ Without loss of generality this can be expressed in a simplified form after translation of the vertex A to the origin of the Cartesian coordinate systems, i.e., when $A' = A - A = (A'_x,A'_y) = (0,0) $. In this case, the coordinates of the vertices B' = B − A and C' = C − A represent the vectors from vertex A' to these vertices. Observe that this trivial translation is possible for all triangles and the circumcenter coordinates of the triangle A'B'C' follow as $(( C'_y(B^{'2}_x + B^{'2}_y) - B'_y(C^{'2}_x+C^{'2}_y) )/ D', \,$ $( B'_x(C^{'2}_x+C^{'2}_y) - C'_x(B^{'2}_x+B^{'2}_y) )/ D') \,$ $D' = 2( B'_xC'_y - B'_yC'_x ). \,$ Barycentric coordinates as a function of the side lengths The circumcenter has trilinear coordinates (cos α, cos β, cos γ) where α, β, γ are the angles of the triangle. The circumcenter has barycentric coordinates $\left( a^2(-a^2 + b^2 + c^2), \;b^2(a^2 - b^2 + c^2), \;c^2(a^2 + b^2 - c^2) \right), \,$ where a, b, c are edge lengths (BC, CA, AB respectively) of the triangle. Barycentric coordinates from cross- and dot-products In Euclidean space, there is a unique circle passing through any given three non-collinear points P[1], P[2], and P[3]. 
Using Cartesian coordinates to represent these points as spatial vectors, it is possible to use the dot product and cross product to calculate the radius and center of the circle. Let $\mathrm{P_1} = \begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix}, \mathrm{P_2} = \begin{bmatrix} x_2 \\ y_2 \\ z_2 \end{bmatrix}, \mathrm{P_3} = \begin{bmatrix} x_3 \\ y_3 \\ z_3 \end{bmatrix}$ Then the radius of the circle is given by $\mathrm{r} = \frac {\left|P_1-P_2\right| \left|P_2-P_3\right|\left|P_3-P_1\right|} {2 \left|\left(P_1-P_2\right) \times \left(P_2-P_3\right)\right|}$ The center of the circle is given by the linear combination $\mathrm{P_c} = \alpha \, P_1 + \beta \, P_2 + \gamma \, P_3$ $\alpha = \frac {\left|P_2-P_3\right|^2 \left(P_1-P_2\right) \cdot \left(P_1-P_3\right)} {2 \left|\left(P_1-P_2\right) \times \left(P_2-P_3\right)\right|^2}$ $\beta = \frac {\left|P_1-P_3\right|^2 \left(P_2-P_1\right) \cdot \left(P_2-P_3\right)} {2 \left|\left(P_1-P_2\right) \times \left(P_2-P_3\right)\right|^2}$ $\gamma = \frac {\left|P_1-P_2\right|^2 \left(P_3-P_1\right) \cdot \left(P_3-P_2\right)} {2 \left|\left(P_1-P_2\right) \times \left(P_2-P_3\right)\right|^2}$ Parametric equation A unit vector perpendicular to the plane containing the circle is given by $\hat{n} = \frac {\left( P_2 - P_1 \right) \times \left(P_3-P_1\right)} {\left| \left( P_2 - P_1 \right) \times \left(P_3-P_1\right) \right|}$ Hence, given the radius, r, center, P[c], a point on the circle, P[0] and a unit normal of the plane containing the circle, $\hat{n}$, one parametric equation of the circle starting from the point P [0] and proceeding in a positively oriented (i.e., right-handed) sense about $\hat{n}$ is the following: $\mathrm{R} \left( s \right) = \mathrm{P_c} + \cos \left( \frac{\mathrm{s}}{\mathrm{r}} \right) \left( P_0 - P_c \right) + \sin \left( \frac{\mathrm{s}}{\mathrm{r}} \right) \left[ \hat{n} \times \left( P_0 - P_c \right) \right]$ The angles at which the circle meets the sides The angles at 
which the circumscribed circle meets the sides of the triangle coincide with the angles at which the sides meet each other. The side opposite angle α meets the circle twice: once at each end, in each case at angle α (and similarly for the other two angles). The alternate segment theorem states that the angle between the tangent and chord equals the angle in the alternate segment.

Triangle centers on the circumcircle of triangle ABC

In this section, the vertex angles are labeled A, B, C and all coordinates are trilinear coordinates:

• Steiner point = $\frac{bc}{b^2-c^2} : \frac{ca}{c^2-a^2} : \frac{ab}{a^2-b^2}$ = the nonvertex point of intersection of the circumcircle with the Steiner ellipse. (The Steiner ellipse, with center = centroid(ABC), is the ellipse of least area that passes through A, B, and C. An equation for this ellipse is 1/(ax) + 1/(by) + 1/(cz) = 0.)

• Tarry point = sec (A + ω) : sec (B + ω) : sec (C + ω) = the antipode of the Steiner point.

Cyclic quadrilaterals

Quadrilaterals that can be circumscribed have particular properties, including the fact that opposite angles are supplementary (adding up to 180° or π radians).
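The cross- and dot-product construction given earlier can be turned into code; here is a 2-D sketch (the function name and test triangle are mine, not from the article):

```python
import numpy as np

def circumcircle_2d(p1, p2, p3):
    """Circumcenter and circumradius of triangle p1 p2 p3, via the
    barycentric weights alpha, beta, gamma from the article."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    d1, d2 = p1 - p2, p2 - p3
    cross = d1[0] * d2[1] - d1[1] * d2[0]  # scalar cross product in 2-D
    denom = 2.0 * cross ** 2
    r = (np.linalg.norm(p1 - p2) * np.linalg.norm(p2 - p3)
         * np.linalg.norm(p3 - p1)) / (2.0 * abs(cross))
    alpha = np.dot(p2 - p3, p2 - p3) * np.dot(p1 - p2, p1 - p3) / denom
    beta = np.dot(p1 - p3, p1 - p3) * np.dot(p2 - p1, p2 - p3) / denom
    gamma = np.dot(p1 - p2, p1 - p2) * np.dot(p3 - p1, p3 - p2) / denom
    return alpha * p1 + beta * p2 + gamma * p3, r

center, r = circumcircle_2d((0, 0), (4, 0), (0, 3))
print(center, r)  # center (2, 1.5), radius 2.5
```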
{"url":"http://math.wikia.com/wiki/Circumscribed_circle","timestamp":"2014-04-21T14:40:39Z","content_type":null,"content_length":"90606","record_id":"<urn:uuid:1cfa45bd-b2b9-4674-9781-f853cb4495b5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Find work done in stretching rubber band

He did everything the same way except for the limits of integration. How can I explain it to him?

Putting the shoe on the other foot? If trying to convince a disbelieving student, I might try this approach: Suppose you nominate some realistic data values (for F, l, and s) and plot the graph F vs. s. (Surely your prof would agree that the work done is represented by the area under the graph.) Then all that remains is to have him show whether his integral evaluates to the same answer as that graphical area, or yours. But, most likely he is just having a senior moment. After a restful night's sleep he'll probably smack his forehead and wonder what on earth he was dreaming about to say it so wrong.
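To make the suggested graph concrete: assuming, purely as an illustration (not necessarily the original problem's force law), a Hookean force F(s) = k·s, the area under the F-vs-s graph can be compared with the analytic integral. The constants below are hypothetical:

```python
# Assumed linear restoring force F(s) = k*s, stretched from s = 0 to s = x.
k, x = 40.0, 0.25          # hypothetical stiffness (N/m) and stretch (m)

# Analytic work: W = integral from 0 to x of k*s ds = k*x**2 / 2
w_analytic = k * x**2 / 2

# Area under the F-vs-s graph, estimated by the midpoint rule
n = 100_000
ds = x / n
w_area = sum(k * (i + 0.5) * ds * ds for i in range(n))

# whichever limits are correct, the integral must match the graphical area
assert abs(w_area - w_analytic) < 1e-9
```

If the professor's limits give a different number than this area, the graph settles the disagreement.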
{"url":"http://www.physicsforums.com/showthread.php?t=688068","timestamp":"2014-04-20T18:23:53Z","content_type":null,"content_length":"49063","record_id":"<urn:uuid:ed8ccc52-80f3-43dc-8722-bae6a78b8fc4>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
Extra Dimensions

One of the most apparently obvious properties of the world we live in is that it has three spatial dimensions (obvious to you because you can move left or right, walk forward or backward, or jump up and down). But one of the most fascinating non-obvious properties that the world might exhibit is that it may have additional (“extra”) dimensions of space that you and I are unable to perceive, either directly through our senses, or indirectly through the many machines that we humans have built, up to the year 2011. This possibility has been considered for at least 90 years, in various forms, and it is alive and well for physicists working at the Large Hadron Collider, and beyond.

This is a very big subject and requires a lot of sub-articles, so I’ll be building this section for quite a while. Of course there are many other articles and books on this subject, including tomes for the public by my famous colleagues Brian Greene and Lisa Randall, but I hope I’ll transcend redundancy here by providing some complementary insights.

Here’s an article (with sub-articles that give examples) on how to think about extra dimensions. We may as well get that straight before trying to describe how extra dimensions might show up at the Large Hadron Collider!

Next, an article (with sub-articles that explain more details) describing some of the signs of extra dimensions that we might look for in experiments.

9 responses to “Extra Dimensions”

1. “Most apparently obvious properties”… I don’t think processes in the world are subject to the human idea of dimensions. They happen as they happen. Whether we look at processes from a three or four or ? dimensional view, the results are the same. Length, height and width are independent. The directions of time are the same as the directions in space.
As soon as there is even the slightest fluctuation (= energy = mass) somewhere, time starts in all directions and there is space, expanding with the speed of time. Time within time, space within space, etc. Time, energy, mass and space are considered by me as relevant “dimensions”. They are not independent.

No, I am afraid this is both conceptually and technically incorrect. You say: the dimensions of time are the same as the dimensions of space. This is false. There are crucial minus signs that appear in the equations that assure that you cannot get confused about whether a dimension is a time dimension or a space dimension. Moreover, there is a notion of past and future in time; there is no such notion in space. A physics experiment involves setting up a situation in the past, letting things happen over time, and measuring the result in the future; there is no such notion in space. Causality is all about time; A can cause B only if the distance between A and B in time (multiplied by the speed of light) is greater than the distance between A and B in space, and if A lies in the past of B (which makes sense if the distances are as I just described). If you don’t keep track of these differences, you will find modern physics (for instance, black holes) very difficult to understand.

Second: energy and mass are not spatial dimensions, nor are they time dimensions; they are formally dimensions, yes, but you must not put them in the same category as space and time. If two objects are found at the same time and at the same point in space, they can affect one another, no matter how much energy and mass they have; but if two objects have the same energy and/or mass, they cannot affect one another unless they are close together in space and in time. To say this more elegantly: physical laws are local in space and in time, but they are not local in energy or mass. That is a huge and crucial difference between space dimensions and other dimensions like energy.
See http://profmattstrassler.com/articles-and-posts/some-speculative-theoretical-ideas-for-the-lhc/extra-dimensions/extra-dimensions-how-to-think-about-them/

2. Thank you. “No, I am afraid ……incorrect.” If this is your answer to my “I don’t think ………. independent.”, I don’t understand it. Are dimensions like length, width and height not independent? Are events caused or affected by such? “You say ….” etc. I need more time to think about that. For the time being I like to mention that any event originating in a particular point at a particular moment will unfold histories from that point and that moment on into each and every direction, thereby creating space. So I don’t see the problem you mention with regard to the notion of past (and future) in space.

3. As example I take a cube. If one of the three dimensions of the cube changes, the other two have to change as well, otherwise the cube is not a cube anymore. This means that the dimensions of the cube are not independent. If one dimension changes, the inescapable changes of the other two dimensions are fully predictable. One could say that a cube is an intelligent design. Space however is not. The directions in space are the same as the directions of time. Energy and mass do affect time and space: the curvature of space-time, an accumulation of events.

4. “If one of the three dimensions of the cube changes, the other two have to change as well, otherwise the cube is not a cube anymore” Marten: are you confusing ‘dimension’ with ‘length’? Should your statement read: If the length of 1 of the 3 sides of the cube changes, the other 2 have to change as well, otherwise the cube is not a cube anymore. This is a true statement but it involves simple geometry & proportions of a cube and not dimensions. I also do not agree that time, energy and mass are ‘dimensions’. We are talking about spatial dimensions as in left/right, forward/backward & up/down….. (x, y & z coordinates for the 3 dimensions)

5.
@ Joe Chan Back from a lot of work in France, here is my answer. Dimensions like length, width and height are decisive shaping factors for spatial objects like cubes. They are not endogenous however. They don’t “work together” of their own accord. What are the dimensions that shape the world? Directions like up/down, back/forward, left/right? Are they relevant in space? If I go due east, the South Pole is at my right. If I go due west, the South Pole is at my left. You don’t find x, y and z in space. You find them in geometry books. So, talking about spatial dimensions, the point in question as far as I am concerned is what dimensions/properties of the world/decisive factors shape the world endogenously, being generic and not independent, their proportions being fully interrelated, continuously changing each other’s proportions.
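The point in the reply above about the “crucial minus signs” that keep time and space distinct can be sketched numerically. The code below is my own illustration, not from the article; the function names are hypothetical and units with c = 1 are assumed:

```python
# The Minkowski interval s^2 = (c*dt)^2 - dx^2 - dy^2 - dz^2 carries the
# time direction with the opposite sign from the three space directions,
# so timelike and spacelike separations can never be confused.
def interval_squared(dt, dx, dy, dz, c=1.0):
    return (c * dt)**2 - dx**2 - dy**2 - dz**2

def causally_connectable(dt, dx, dy, dz):
    """A can affect B only if their separation is timelike or lightlike
    (interval >= 0) and A lies in B's past (dt > 0)."""
    return dt > 0 and interval_squared(dt, dx, dy, dz) >= 0

# two events 1 light-second apart in space but 2 seconds apart in time
print(causally_connectable(2.0, 1.0, 0.0, 0.0))   # timelike: True
# same spatial separation but only 0.5 seconds apart
print(causally_connectable(0.5, 1.0, 0.0, 0.0))   # spacelike: False
```

Note that no such sign distinction exists between the three spatial directions, which is the sense in which they are interchangeable while time is not.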
{"url":"http://profmattstrassler.com/articles-and-posts/some-speculative-theoretical-ideas-for-the-lhc/extra-dimensions/","timestamp":"2014-04-17T12:30:46Z","content_type":null,"content_length":"104741","record_id":"<urn:uuid:225e79f5-5310-418b-9829-1bbb1eeb460f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of Dedekind-infinite set

A set A is Dedekind-infinite if some proper subset B of A is equinumerous to A. Explicitly, this means that there is a bijective function from A onto some proper subset of A. A set is Dedekind-finite if it is not Dedekind-infinite.

Comparison with the usual definition of infinite set

This definition of "infinite set" should be compared with the usual definition: a set A is infinite when it cannot be put in bijection with a finite ordinal, namely a set of the form {0,1,2,...,n−1} for some natural number n. During the latter half of the 19th century, most mathematicians simply assumed that a set is infinite if and only if it is Dedekind-infinite. However, this equivalence cannot be proved with the axioms of Zermelo-Fraenkel set theory without the axiom of choice (AC) (usually denoted "ZF"). The full strength of AC is not needed to prove the equivalence; in fact, the equivalence of the two definitions is strictly weaker than the axiom of countable choice (CC). (See the references below.)

Dedekind-infinite sets in ZF

The following conditions are equivalent in ZF. In particular, note that all these conditions can be proved to be equivalent without using the AC.

• A is Dedekind-infinite.
• There is a function f: A → A which is injective but not surjective.
• There is an injective function f : N → A, where N denotes the set of all natural numbers.
• A has a countably infinite subset.

Every Dedekind-infinite set A also satisfies the following condition:

• There is a function f: A → A which is surjective but not injective.

This is sometimes written as "A is dually Dedekind-infinite". It is not provable (in ZF without the AC) that dual Dedekind-infinity implies that A is Dedekind-infinite.
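The second condition in the list above can be made concrete with the canonical example: on the natural numbers, the successor map n → n + 1 is injective but not surjective. A computer can only spot-check this on a finite truncation, so the sketch below (with an arbitrary cutoff N of my own choosing) is an illustration, not a proof:

```python
# Spot-check on a finite truncation {0, ..., N-1} of the natural numbers:
# the successor map n -> n + 1 is injective (distinct inputs give distinct
# outputs) but misses 0, so it is not surjective onto the naturals.
N = 1000
successor = {n: n + 1 for n in range(N)}

# injective: no two inputs share an output
assert len(set(successor.values())) == N

# not surjective: 0 has no preimage
assert 0 not in successor.values()
```

The image of the map is the proper subset {1, 2, 3, ...}, which is exactly the bijection-onto-a-proper-subset demanded by Dedekind's definition.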
(For example, if B is an infinite but Dedekind-finite set, and A is the set of finite one-to-one sequences from B, then "drop the last element" is a surjective but not injective function from A to A, yet A is Dedekind-finite.)

It can be proved in ZF that every dually Dedekind-infinite set satisfies the following (equivalent) conditions:

• There exists a surjective map from A onto a countably infinite set.
• The powerset of A is Dedekind-infinite.

(Sets satisfying these properties are sometimes called weakly Dedekind-infinite.) It is not provable in ZF that weak Dedekind-infinity implies dual Dedekind-infinity. It can be proved in ZF that weakly Dedekind-infinite sets are infinite. ZF also proves that every well-ordered infinite set is Dedekind-infinite.

Relation to AC and ACω

Since every infinite, well-ordered set is Dedekind-infinite, and since the AC is equivalent to the well-ordering theorem stating that every set can be well-ordered, clearly the general AC implies that every infinite set is Dedekind-infinite. However, the equivalence of the two definitions is much weaker than the full strength of AC. In particular, there exists a model of ZF in which there exists an infinite set with no denumerable subset. Hence, in this model, there exists an infinite, Dedekind-finite set. By the above, such a set cannot be well-ordered in this model.

If we assume the CC (ACω), then it follows that every infinite set is Dedekind-infinite. However, the equivalence of these two definitions is in fact strictly weaker than even the CC. Explicitly, there exists a model of ZF in which every infinite set is Dedekind-infinite, yet the CC fails.

History

The term is named after the German mathematician Richard Dedekind, who first explicitly introduced the definition. It is notable that this definition was the first definition of "infinite" which did not rely on the definition of the natural numbers (unless one follows Poincaré and regards the notion of number as prior to even the notion of set).
Although such a definition was known to Bernard Bolzano, he was prevented from publishing his work in any but the most obscure journals by the terms of his political exile from the University of Prague in 1819. Moreover, Bolzano's definition was more accurately a relation which held between two infinite sets, rather than a definition of an infinite set per se. For a long time, many mathematicians did not even entertain the thought that there might be a distinction between the notions of infinite set and Dedekind-infinite set. In fact, the distinction was not really realised until after Ernst Zermelo formulated the AC explicitly. The existence of infinite, Dedekind-finite sets was studied by Bertrand Russell and Alfred North Whitehead in 1912; these sets were at first called mediate cardinals or Dedekind cardinals. With the general acceptance of the axiom of choice among the mathematical community, these issues relating to infinite and Dedekind-infinite sets have become less central to most mathematicians. However, the study of Dedekind-infinite sets played an important role in the attempt to clarify the boundary between the finite and the infinite, and also an important role in the history of the AC. • Moore, Gregory H., Zermelo's Axiom of Choice, Springer-Verlag, 1982 (out-of-print), ISBN 0-387-90670-3, in particular pp. 22-30 and tables 1 and 2 on p. 322-323 • Jech, Thomas J., The Axiom of Choice • Herrlich, Horst, Axiom of Choice, Springer-Verlag, 2006, Lecture Notes in Mathematics 1876, ISSN print edition 0075–8434, ISSN electronic edition: 1617-9692, in particular Section 4.1.
{"url":"http://www.reference.com/browse/columbia/Dedekind","timestamp":"2014-04-20T04:20:35Z","content_type":null,"content_length":"87456","record_id":"<urn:uuid:d28d9b5b-a860-484d-a0ad-9190058073e9>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00087-ip-10-147-4-33.ec2.internal.warc.gz"}
"Large" diffeomorphisms in general relativity

Why? It is a diffeomorphism and does not create a singularity

Well, then it can't change the geometry, at least as defined by anything you can compute using the metric. This really has nothing to do with GR; it is differential geometry. My understanding is that the topology of a differentiable manifold is encoded in how coordinate patches overlap. So, if we don't change this (and we don't need to for the Dehn twist), and we don't change anything computable from the metric, what can change?

In my (1) and (2) I was trying to get at the idea of making the operation 'real' so it does change geometry, versus treating it as a pure coordinate transform, such that the corresponding metric transform preserves all geometric facts. I've heard the terms active versus passive diffeomorphism. I don't fully understand this, but I wonder if it is relevant to this distinction.
{"url":"http://www.physicsforums.com/showthread.php?p=3167529","timestamp":"2014-04-19T04:44:12Z","content_type":null,"content_length":"80774","record_id":"<urn:uuid:bc523eda-24ac-4fa5-ae33-15bb54b79ed4>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Effort to get us all on the same page (balloon analogy)

are a few questions from an entry level layperson acceptable here? if not then remove this post.

1. we are stationary but the universe is expanding; the distance between us and distant galaxies is increasing due to expansion, so does that mean that the distance between us and the cmb is also increasing?
2. the cmb is 45b lyrs away, is that the actual distance right now? the light we get from it was emitted 13.7b yrs ago, was the cmb 13.7b light yrs away from us when that light was emitted?
3. we cannot say that space is a physical thing like the rubber of the balloon. We can only say that distance is increasing?
4. if distance is increasing does that mean that the three dimensional volume of the universe is increasing?
5. is the cmb analogous to the horizon that you would see if you were standing on the surface of the balloon?

this is exactly the most helpful kind of feedback. something like this draft essay on what you can learn from the balloon analogy has to go thru editing and revision. questions like this are exactly what are needed to help guide revision.

3. what space is, physically, is something that physicists are working on---have a look at Frank Wilczek's new book Lightness of Being which is about the leading edge understanding of empty space. make your local librarian order the book. the link is in my sig. also read the SciAm article by Loll about the emergence of spacetime from a kind of chaos at the microscopic level---this is an unproven interesting conjecture which they simulate on the computer. we don't know yet what empty space is. but we do have a mathematical model for the increasing distances---that has to do for now.

4. yes, the instantaneous 3D volume of space can be defined and estimated in the case that it is finite, and recent satellite data gives a lower bound on the volume, and it is increasing in a perfectly normal way as the cube of the scale factor.
Of course if the 3D volume of space is not finite then it becomes more complicated to talk about it increasing. But if it is finite then we have this lower bound and it is easy to discuss. If you want a link to a reference, or simply to know the volume in cubic lightyears, please let me know.

5. what passes for the LOCATION OF THE CMB ORIGIN is a large spherical surface called the surface of last scattering, where the stuff is that emitted the light we are now getting. In the past we were getting CMB light from other stuff that is nearer, but that light has already gone by us. In the future we will be getting CMB light from other stuff that is out beyond our current surface of last scattering---but that light is still on its way and has not reached us.

All the matter in the universe, including the matter we are made of, participated in radiating the CMB light. The CMB light that our matter emitted is now 45 billion lightyears away from us, where other people can catch some if they make antennas. Every patch of matter made CMB, it is just a question of TIMING to say where the matter is whose light you are currently receiving at this moment. So your image of a horizon has some degree of rightness about it. Not a perfect analogy, but it does tell the listener to focus not on the material stuff but on the mathematical object (the spherical surface, like the circle of horizon on earth).

there was a momentary onetime event when expansion was 380,000 years old and the glowing hot fog became transparent, and released its somewhat reddish orange light. Each photon of that light is now 45 billion lightyears from its point of origin.

1. you ask is the distance to the CMB increasing? the distances between all widely separated stationary things are increasing by Hubble Law, so the distance between us and the matter which sent us the CMB light we got yesterday is increasing as part of that general process.
two approximately stationary patches of matter, their distance apart increases 1/140 percent every million years. but something else is happening. the distance to the surface of last scattering is increasing in a more serious way. we only get the CMB light from some particular batch of matter once. it passes by. tomorrow we will get light from matter that is farther away than that batch whose light we got yesterday. Question 2 was your best question of all. 2. the cmb is 45b lyrs away, is that actual distance right now? the light we get from it was emittied 13.7 b yrs ago, was the cmb 13.7b light yrs away from us when that light was emitted? No, the matter that emitted the CMB light which we are now getting was, when it emitted the light, at a distance of 41 MILLION lightyears from our matter. You should get this number for yourself by going to Ned Wright calculator and putting in z = 1090. this is the redshift of the CMB light. It says that while the light has been traveling towards us the universe has expanded by a factor of 1090 (and the wavelength of the light increased by the same factor) Since both our matter and the matter that emitted the light are stationary, and the distance between is NOW 45 billion, it must be that the distance THEN was 45 billion divided by 1090! If you divide 45 billion by 1090 you will get 41 million. therefore the distance to the matter then, when it emitted the light, was 41 million lightyears. that's a pretty condensed explanation
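The arithmetic in the answer to question 2 can be checked in a few lines. This is a minimal sketch; the variable names are mine, and the figures (45 billion light-years now, expansion factor 1090) are the ones quoted in the post:

```python
# Figures quoted in the post above: the matter that emitted the CMB light we
# receive today is now d_now = 45 billion light-years away, and distances
# (and wavelengths) have stretched by a factor of about 1090 in transit.
d_now = 45e9        # light-years, distance to that matter now
stretch = 1090      # expansion factor since emission (z ~ 1090)

# Both patches of matter are stationary, so the distance at emission is
# simply the distance now divided by the stretch factor.
d_then = d_now / stretch
print(round(d_then / 1e6), "million light-years")   # -> 41 million light-years
```

This reproduces the 41-million-light-year figure quoted in the answer.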
{"url":"http://www.physicsforums.com/showthread.php?t=261161","timestamp":"2014-04-18T08:19:09Z","content_type":null,"content_length":"102700","record_id":"<urn:uuid:ad2327e3-a036-4c4a-a8c7-4f490bb8261f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

Determine if the bold part of each sentence is a phrase or a clause.

brilliant :D all are correct :D
yes , really none of them are wrong, you have marked right over clause and i am sure you know phrase is something different that is something very short but enriched with meaning that is the only last option , so you are right :D
alright thanks :)
:) YW
umm :D can you check a few more? i already did them all im just making sure :D
of course keep posting :D
dont you think last one is also sentence because it gives complete sense :) greek mythology is my favorite subject, its complete so it should be sentence :D
oh okay :D i thought just because it says very little its not a complete thought
i eat food, its a sentence too, even though it is very short, so that is a sentence
really? that is too short
sentence means --> something that can give you sense or meaning like --> Jeny plays guitar , its a sentence and fragment is something that is incomplete like --> i ran , i sleep , which is fragment so basically length doesnt matter , sense of combination of words make it sentence or fragment :D
oh okay i understand now :)
cool :)
:D YW , btw you are welcome is a sentence :D
but YW is not a complete sentence :P lol
huh :/
its just initials
yes i got your point :P lol
any more correction , captain ? keep posting
i can read it , hold on ;)
you can see this better :D
oh thank you, i could have written more to thank you but i cant publicly lol hold on ;)
hehe its okay i understand ;)
1,2 and 3--> correct but i have doubt about 4th
yes me too, i thought it is either A or B
it should be tasted and quenched because that shows the action of the subject
a compund predicate means an action? so delicious and thirst is not an action
oh yes :) thanks
predicate --> The part of a sentence or clause containing a verb and stating something about the subject
oh YW :)
well i havent gotten that far yet, i still need to figure it out on my own and check if its right :D
alright whatever you have done start posting ;) and were you writing something?
alright and no i wasn't it does that for some reason after i post something :/ weird
oh i got it, captain
umm captain? hahaha
you're being trained , colonel :D
oh yes but its not that kinda thing, im not in the army :P
if you wont post your question i will keep changing your designation , Major vika
oh god stop it pleaseee i hate it :P
okay brigadier vika , post your question i am going to stop it :D
i think most of them are wrong
hold on a minute :)
sorry the last part got cut off, here you go :)
all of them are correct :) i dont think any of it is wrong :D wow
really? well i was thinking the one that is wrong is number 27 is it ?
omg, i was checking that again because predicate means something stating about subject so i guess it should be A ;) because newspaper is subject and comic strip describes subject
so 27 is A, right ?
absolutely right :)
mention not , captain :)
oops sorry ;)
i wish it was personal place :(
:) okay im done with these i will answer few more, its okay :)
post it :)
but thanks!!!! :) :D
im almost done im at the hard part but its okay :)
oh god :(((((
i understand that captain, please post it :D
okay i'll post all that i have done so far :P
sure that's perfect
i need to finish last ones
you have already posted the one that you posted now
hehe sorry wrong one :D
thats last ones :D
wait let me finish then i will post my answers :D
well the answer of 29 is okay :) 30 should be b ) while nikki talked, i recognised her. and 31 ) seems incompletely posted :(
and yea i am waiting ;)
am i wrong? :(
hold on
you havent completed your question 30 , you have to write a sentence to make it complete
it doesn't say there to write a sentence
oops my bad, i thought make it complete lol and yes you are right but for the last part the sentence starts from when alexa arrived for holidays it leaves the sentence incomplete because it starts with a question so make it complete as your question says
alright i will do that :) thanks for all the help :)
what you have written is correct and dont run lol and i guess now you can submit your paper :D
do you want to see what i got for yesterdays assingment? with that story thing i had to do ?
i would love to before you go to bed show your progress ,
but this one is going to be graded only on monday because there is no school tomorrow
i got it , now show me the one which has been graded
you didnt listen to me in coming of age and see what you did , well the best thing is you didnt ask me in rest of the other and you have got excellent marks lol :P :P nice :D how about story ?
by the way, in EOS they dont have letter grades.. its made like if you get 60% or higher that is a pass and anything below 60% is a fail
thats nice :D you have 90% which is excellent :D
wait ill show you haha it is horrible :P and totally wrong
let me see , story without climax and rising and falling action ;)
yeahh im not so good at it :/
i got 5 wrong for that assignment :/ what a fail
where is the grade? it just says incorrect :(
well there is no grade i just know that, that question was worth 10 points and i lost them :P
oh dear :( thats very bad i wish you could have listened to me and have scored 11 on 10 , my story was good not too good but would have reminded your teacher of his youth :/ , dont worry you will get next chance
haha :D well i dont know what they would think if i turned in some romantic story thats what i was worried about
come on there were loads or romantic authors and still there are so dont worry take a chance and see what they say ;)
or you can talk to them and show them this romantic story personally and you can have their opinion
yes i can ask them to retake it :)
hmm thats good ;)
miss BP it remained a wish to read your one :( hope you like this
@LonelyandForgotten here
is that yours ???????????????????????????????????????
no silly, its not mineee.. who else would it be :P
my name is at the top lol
well i am paralysed before i say anything , did you mention stuff that you wrote in the last two paragraph, did you mention that in your essay and after that i am seeing your grades???? did you really mention that too ?
of course that is what my essay was about and guess what?? my teacher looooved it! she kept saying it is wonderful, wonderful blah blah blah i thought she will get mad at me for writing about this kinda thing but she really loved it and was very much interested in it :D
hmmm very much interested heheeh well i told you that your essay is perfect and doesnt need any editing, and i was pretty sure that you would get around 90-95% but i am damn amazed to see that its 100% WOW, you desereve treat for this and you did it, its amazing :D :D and really excellent :D keep up the good work :D i am really happyyyyyy with this :D
oh its nothing just an essay :P there is plenty more stuff to come that is way harder than this, but thank you :) ill try my best next time :D
yea yea nothing , celebrate everything ;) and be ready for those coming on the way ;)
well, its not that hard when you have help of teachers sometimes its better than sitting home doing it myself, thats one good advantage :P
hmmm thats true teachers can give you good advice and can help you to develop new ways of writing
i have to rewrite this paragraph and include all those things ohhh god kill me now :P
haha its easy, just begin with introduction, then use two compound sentence and then two complex sentence and then one sentence that interprets the para and lastly one conclusion , did you get it
so how do you write a compund sentence, like umm... Bill went to the store to buy milk, but he was out of money.. something like that?
it can be written as, we were facing problem of skateboard in our town because there is no place for that, this is your introduction then you have to include compound sentence
oh okay :/ sorry that example has nothing to do with this paragraph lol
then you can write , we decided to build our own skateboard but the scarcity of space was first trouble but one of my friends lives on a farm outside of town
nooo you have to make those three things into different sentences cuz thats way too long
okay look let me write and you can read what it says
hold on :)
i will write how i think it should be written and ill show it to you :P
sure, go ahead :) and we can see each others work
done ??
come on :P lol i am ready
im half way done Best Response You've already chosen the best response. ohh gosh :( im so confused :( Best Response You've already chosen the best response. what have you been doing :( whatever you have done make it fast Best Response You've already chosen the best response. i was trying to work on this but i cant really grasp the idea what am i supposed to do here :/ Best Response You've already chosen the best response. whatever you have written finish it up and show it :) Best Response You've already chosen the best response. Best Response You've already chosen the best response. this is what i have written so far.. My friends and I enjoy skateboarding, but unfortunately there’s no skate park in our town where we can do what we enjoy doing. So for that reason, we decided to create our own skate park but in order to do that, we had to choose the best place to build it. Fortunately, my friend lives in a rural area outside of time where we could build it. And the great thing about this is, there were no other houses on his street which means we could do whatever we like and no one will complain about it. It took a long time and a lot of effort. Best Response You've already chosen the best response. not time wow.. i mean TOWN Best Response You've already chosen the best response. in 18 minutes you've written this ? Best Response You've already chosen the best response. well this is work of 5 minutes lol let me show you mine Best Response You've already chosen the best response. but i told you im confused i didnt know what to do Best Response You've already chosen the best response. and sure you can... Best Response You've already chosen the best response. read this , hope this helps you , and its okay i got it :D Best Response You've already chosen the best response. lol nice name ;) Best Response You've already chosen the best response. yea ;) Best Response You've already chosen the best response. hmmm this is great :D but umm does it include all those things? 
Best Response You've already chosen the best response. yes, did you read that? Best Response You've already chosen the best response. well this is it then :D Best Response You've already chosen the best response. damn why is repeating what i say :/ Best Response You've already chosen the best response. hmmm may be because it wants you to keep posting things ;) Best Response You've already chosen the best response. i dont think so Best Response You've already chosen the best response. im gonna get bad grade on this i know :/ Best Response You've already chosen the best response. well you are done, you dont need to worry D: Best Response You've already chosen the best response. but the other questions i dont think i did well on those Best Response You've already chosen the best response. what do they say ? Best Response You've already chosen the best response. nothing its alright im just doing this now :/ by the way my grades have dropped, this is terrible :( Best Response You've already chosen the best response. do you wanna see my recent grades? :( Best Response You've already chosen the best response. sure :D lol and please dont hesitate to ask, else you will get at bottom Best Response You've already chosen the best response. this is bad :( Best Response You've already chosen the best response. wow, it has badly fallen down , look at the last one :( Best Response You've already chosen the best response. at least i passed, thank god but i have made a goal for myself that i wouldnt get any grade below 80 but look at it now :( Best Response You've already chosen the best response. you have to maintain that :( well what rest of the questions say, show it Best Response You've already chosen the best response. what question? Best Response You've already chosen the best response. you said rest of the questions arent done well those ones Best Response You've already chosen the best response. 
they are done im just not sure if they are correct or not, i will probably get 0 on this one Best Response You've already chosen the best response. that is what i am asking, can you show me to make sure if you have done right? Best Response You've already chosen the best response. all of them, how will you be able to see there are 30 questions Best Response You've already chosen the best response. wait.. that is 30 sorry Best Response You've already chosen the best response. keep posting by screenshots Best Response You've already chosen the best response. whatever it is silly Best Response You've already chosen the best response. okay hold on babe ;) hehe Best Response You've already chosen the best response. hmm sure ;) Best Response You've already chosen the best response. 3--B, and not sure about 9 Best Response You've already chosen the best response. rest are correct :) Best Response You've already chosen the best response. why is three B? Best Response You've already chosen the best response. because i think comma is given after a sentence that ends with dependent clause Best Response You've already chosen the best response. yes but not everytime so its A. :/ Best Response You've already chosen the best response. okay mam :) Best Response You've already chosen the best response. okay so now some of the other answered questions :/ Best Response You've already chosen the best response. sure, please keep it up :D Best Response You've already chosen the best response. im sure all of them are wrong :P Best Response You've already chosen the best response. were you kidding me? all of them are correct i dont see any mistake in them Best Response You've already chosen the best response. yes they are :/ there must be some wrong, i was half asleep while taking this in class :P Best Response You've already chosen the best response. nope, its correct all of them i strongly believe so Best Response You've already chosen the best response. 
alright thank you :) Best Response You've already chosen the best response. :D you are welcome, next ? Best Response You've already chosen the best response. Best Response You've already chosen the best response. 21 - correct and 20 is incorrect because much more time is not a right combination Best Response You've already chosen the best response. okay hold on, this is the directions for this question The following sentences may contain an introductory expression, an interrupting expression, or both. Determine whether each sentence uses commas correctly. (Each question is worth one point) Best Response You've already chosen the best response. still incorrect? Best Response You've already chosen the best response. then you are right Best Response You've already chosen the best response. 23, D Best Response You've already chosen the best response. okay thanks :D Best Response You've already chosen the best response. keep asking kiddy Best Response You've already chosen the best response. last one got cut out :/ Write a sentence containing an adjective clause that is essential to the meaning of the sentence. Then, write a sentence containing a nonessential adjective clause. ESSENTIAL- The waiter, who worked in the restaurant brought out all kinds of delicious plates of food. NONESSENTIAL-The playground next to the school was loud, kids were running and playing around making a lot of noise. Best Response You've already chosen the best response. i dont know :/ this one was harder little bit Best Response You've already chosen the best response. i will do the last question myself :D Best Response You've already chosen the best response. but it seems like you are right :) and you have made the right classification Best Response You've already chosen the best response. Best Response You've already chosen the best response. My friends and I enjoy skateboarding, but unfortunately there’s no skate park in our town where we can do what we enjoy doing. 
So for that reason, we decided to create our own skate park but in order to do that, we had to choose the best place to build it. Fortunately, my friend lives in a rural area outside of town where we could build it. And the great thing about this is, there were no other houses on his street, which means we could do whatever we like and no one will complain about it. We took time and a lot of effort to design the way our park was going to look like but still, we managed to figure it out. During the process of building this park, Jeremy's parents were a lot of help, they cut the wood with an electric saw. And finally, we managed to make two cool looking ramps and a rail. After we were done, we realized it was already too dark to skate, so we went home. Best Response You've already chosen the best response. i dont think you need comma after in order to do that Best Response You've already chosen the best response. but there should be pause there, i think :/ Best Response You've already chosen the best response. no :) Best Response You've already chosen the best response. i think i said managed to, too many times lol :P Best Response You've already chosen the best response. no managed to is okay but you have used too much of commas :( Best Response You've already chosen the best response. haha i know :P Best Response You've already chosen the best response. and cool skateboard is not right thing to be mentioned in a paragraph Best Response You've already chosen the best response. so what should i say? Best Response You've already chosen the best response. yea :-I , just say that we managed to construct a good looking skateboard fulfilling our needs :D Best Response You've already chosen the best response. Best Response You've already chosen the best response. hmm ;) Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. 
why are you sad :( i thought you will laugh :( is it bad Best Response You've already chosen the best response. and twin kids are healthy and fine :( Best Response You've already chosen the best response. and where is my medal ? Best Response You've already chosen the best response. no its perfect and im not looking at how could you draw its just really wonderful that you would draw something like this :( i thought it would be something else :( but wow Best Response You've already chosen the best response. really ? and i love this picture and its dedicated to you and me :) i thought i made you sad by this :( Best Response You've already chosen the best response. what up @LonelyandForgotten ? Best Response You've already chosen the best response. i hope you dont mind this, i will need it to finish at home :) Best Response You've already chosen the best response. hahahah why would i let me see ;) Best Response You've already chosen the best response. noo you dont have to its just stuff i have for homework :P Best Response You've already chosen the best response. so what are we supposed to do in here? Best Response You've already chosen the best response. well i have to read everything here and i have a packet that i need to fill out with questions for homework.. so i have to get busy now :) Best Response You've already chosen the best response. alright , get it done :)
{"url":"http://openstudy.com/updates/5101f5fce4b03186c3f88b49","timestamp":"2014-04-17T22:08:24Z","content_type":null,"content_length":"749590","record_id":"<urn:uuid:b06fe506-4e00-4880-bec6-41d7e5653426>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
I recently attended the 2011 Australasian Mathematical Psychology Conference. This post summarises a few thoughts I had on the use of R, Matlab and other tools in mathematical psychology flowing from discussions with researchers at the conference. I w... Twin Cities R User Group Meeting Tonight! TCRUG will be having a meeting TONIGHT (2/16) at 5:30 PM. We will meet in ROOM 29 in Willey Hall. Willey Hall is located on the West Bank of the Minneapolis campus. See the Google map at http:// goo.gl/tnRnU. Erik Iverson will be giving a talk ... R 2.12.2 scheduled for February 25 The next release of R is scheduled for release February 25, and R 2.12.2 will likely be the final bug-fix release of the 2.12 series before R 2.13 is released in April. According to the NEWS file in the latest daily build, 2.12.2 will improve complex-arithmetic support on some rare platforms that don't support complex types in C99, and... Annotated source code We programmers are told that reading code is a good idea. It may be good for you, but it's hard work. Jeremy Ashkenas has come up with a simple tool that makes it easier: docco. Ashkenas is also behind underscore.js and coffeescript, a dialect of ja... Teach Yourself How to Create Functions in R As you can tell from my previous posts, I am diving in head first into learning how to program (and simplify) my analytical life using R.
I have always learned by example and have never really prospered from the “learn from scratch” school of thought. As I follow along with some other fellow R programmers, New R User Group in Minneapolis/St. Paul The Twin Cities R User Group has been around for a little while, but has just launched a new site at meetup.com. Their next meeting will be on February 16, where Erik Iverson will be giving a talk on using R to generate dynamic statistical reports using R's literate programming tool, Sweave. If you're in the Minneapolis-St.Paul area, this... How do you explain reproducible research to clients? Most of the statistics work I do now is reproducible research – this can offer a big advantage for clients but of course that doesn’t necessarily mean they realise it … Below is a text we have been pasting in at the bottom of the source documents (and which therefore appears in the pdf’s) to
{"url":"http://www.r-bloggers.com/page/28/?s=sweave","timestamp":"2014-04-16T10:31:25Z","content_type":null,"content_length":"35830","record_id":"<urn:uuid:19710d57-9b9f-46cd-8313-37c9ffda3da7>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Eigenvalue criteria for existence of positive solutions of impulsive differential equations with non-separated boundary conditions In this paper, we discuss the existence of positive solutions for second-order differential equations subject to nonlinear impulsive conditions and non-separated periodic boundary value conditions. Our criteria for the existence of positive solutions will be expressed in terms of the first eigenvalue of the corresponding nonimpulsive problem. The main tool of study is a fixed point theorem in a cone. MSC: 34B37, 34B18. impulsive differential equation; positive solution; fixed point theorem; non-separated periodic boundary value condition 1 Introduction Let ω be a fixed positive number. In this paper, we are concerned with the existence of positive solutions for the following boundary value problem (BVP) with impulses: Here, denotes the quasi-derivative of . The condition (1.1c) is called a non-separated periodic boundary value condition for (1.1a). We assume throughout, and without further mention, that the following conditions hold. (H1) Let , and , , , . , where (respectively ) denotes the right limit (respectively, the left limit) of at . A function defined on is called a solution of BVP (1.1) ((1.1a)-(1.1c)) if its first derivative exists for each , is absolutely continuous on each closed subinterval of , there exist finite values , the impulse conditions (1.1b) and the boundary conditions (1.1c) are satisfied, and the equation (1.1a) is satisfied almost everywhere on . For the case of ( ), the problem (1.1) is related to a non-separated periodic boundary value problem for ODEs. Atici and Guseinov [1] have proved the existence of a positive solution and of twin positive solutions to BVP (1.1) by applying a fixed point theorem for completely continuous operators in cones. In [2], Graef and Kong studied the following periodic boundary value problem: where .
Based upon the properties of Green’s function obtained in [1], the authors extended and improved the work of [1] by using topological degree theory. They derived new criteria for the existence of non-trivial solutions, positive solutions and negative solutions of the problem (1.2) when f is a sign-changing function and not necessarily bounded from below even over . Very recently, He et al. [3] studied BVP (1.1) without impulses and generalized the results of [1,4] via the fixed point index theory. The problem (1.2) in the case of , the usual periodic boundary value problem, has been extensively investigated; see [4-7] for some results. On the other hand, impulsive differential equations are a basic tool for studying processes that are subject to abrupt changes in their state. There has been significant development in the last two decades. Boundary value problems for second-order differential equations with impulses have received considerable attention and much literature has been published; see, for instance, [8-17] and the references therein. However, there are fewer results about positive solutions for second-order impulsive differential equations. To the best of our knowledge, there is no result about nonlinear impulsive differential equations with non-separated periodic boundary conditions. Motivated by the work above, in this paper we study the existence of positive solutions for the boundary value problem (1.1). By using fixed point theorems in a cone, criteria are established under some conditions on concerning the first eigenvalue corresponding to the relevant linear operator. More importantly, the impulsive terms are different from those of papers [8,9]. 2 Preliminaries In this section, we collect some preliminary results that will be used in the subsequent section. We denote by and the unique solutions of the corresponding homogeneous equation under the initial boundary conditions Put , then by [[1], Lemma 2.3], .
Definition 2.1 For two differentiable functions y and z, we define their Wronskian by Theorem 2.1 The Wronskian of any two solutions of equations (2.1) is constant. Especially, . Proof Suppose that y and z are two solutions of (2.1), then therefore, the Wronskian is constant. Further, from the initial conditions (2.2), we have . The proof is complete.□ Consider the following equation: From Theorem 2.5 in [1], equation (2.3) has a Green function for all , which has the following properties: ( ) is continuous in t and s for all . Combining with Theorem 2.1, we can also prove that Remark 1 From paper [1], we can get when ( ) and , Especially, in the case of , ( ), Green’s function has the form Define an operator then it is easy to check that is a completely continuous operator. By virtue of the Krein-Rutman theorem, the authors in [3] obtained the following result. Lemma 2.1 The spectral radius and T has a positive eigenfunction corresponding to its first eigenvalue . In what follows, we denote the positive eigenfunction corresponding to by ϕ and . Define a mapping Φ and a cone K in a Banach space by Lemma 2.2 The fixed point of the mapping Φ is a solution of (1.1). Proof Clearly, Φu is continuous in t. For , Using ( ) and ( ), we have , and which implies that the fixed point of Φ is the solution of (1.1). The proof is complete.□ The proofs of the main theorems of this paper are based on fixed point theory. The following two well-known lemmas from [18] are needed in our argument. Lemma 2.3 [18] Let X be a Banach space and K be a cone in X. Suppose Ω1 and Ω2 are open subsets of X such that 0 ∈ Ω1 and the closure of Ω1 is contained in Ω2, and suppose that T: K ∩ (cl Ω2 \ Ω1) → K is a completely continuous operator such that either ‖Tu‖ ≤ ‖u‖ for u ∈ K ∩ ∂Ω1 and ‖Tu‖ ≥ ‖u‖ for u ∈ K ∩ ∂Ω2, or ‖Tu‖ ≥ ‖u‖ for u ∈ K ∩ ∂Ω1 and ‖Tu‖ ≤ ‖u‖ for u ∈ K ∩ ∂Ω2. Then T has a fixed point in K ∩ (cl Ω2 \ Ω1). Lemma 2.4 [18] Let X be a Banach space and K be a cone in X. Suppose Ω1 and Ω2 are open subsets of X such that 0 ∈ Ω1 and the closure of Ω1 is contained in Ω2, and suppose that T: K ∩ (cl Ω2 \ Ω1) → K is a completely continuous operator such that • There exists such that for and , for , or • There exists such that for and , for .
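A quick numerical illustration of Theorem 2.1 (my own sketch, not part of the paper): assuming the homogeneous equation (2.1) has the Sturm–Liouville form (p(t)u')' = q(t)u with quasi-derivative u^[1] = p u', the Wronskian W = y·(p z') − z·(p y') of two solutions should stay constant, and for the initial data (2.2) it should be identically 1. The coefficient functions p and q below are arbitrary sample choices, not taken from the paper.

```python
import math

def p(t): return 2.0 + math.sin(t)       # sample positive coefficient (assumption)
def q(t): return 1.0 + math.cos(t)**2    # sample coefficient (assumption)

def rhs(t, y):
    # First-order system for (u, v) with v = p(t)*u' (the quasi-derivative):
    # u' = v/p(t),  v' = (p(t)*u')' = q(t)*u.
    u, v = y
    return [v / p(t), q(t) * u]

def rk4(y0, t0, t1, n=2000):
    """Classical 4th-order Runge-Kutta integrator for the 2-dimensional system."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
        k3 = rhs(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
        k4 = rhs(t + h, [y[i] + h * k3[i] for i in range(2)])
        y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]
        t += h
    return y

phi = rk4([1.0, 0.0], 0.0, 1.0)    # phi(0) = 1, phi^[1](0) = 0
psi = rk4([0.0, 1.0], 0.0, 1.0)    # psi(0) = 0, psi^[1](0) = 1
W = phi[0] * psi[1] - psi[0] * phi[1]   # Wronskian at t = 1
print(W)   # equals W(0) = 1 up to integration error
```

Up to the integrator's discretization error, W comes out equal to its initial value 1, consistent with the theorem.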
3 Main results Recalling that δ was defined after Lemma 2.1, for convenience, we introduce the following notations. Assume that the constant and γ is some positive function on J, Theorem 3.1 Assume that there exist positive constants α, β such that , , , and Then (1.1) has at least one positive solution u such that . Proof Clearly, , let , . Define the open sets Then is completely continuous. By (3.1) and the definition of , , , , there exists such that If not, there exist and such that . Let . Noting that for any , we obtain that for , which implies that , a contradiction. On the other hand, for , , we have From Lemma 2.4 it follows that Φ has a fixed point . Furthermore, and , which means that is a positive solution of Eq. (1.1). The proof is complete.□ In the next theorem, we make use of the eigenvalue and the corresponding eigenfunction ϕ introduced in Lemma 2.1. Theorem 3.2 Assume that there exist positive constants α, β such that , , , and here on J. Then (1.1) has at least one positive solution u such that . Proof Obviously, , put , . Define the open sets First, we show that . For any , from ( ), we have On the other hand, It is easy to check that is completely continuous. Next, we show that If not, there exist and such that . Hence, Multiplying the first equation of (3.8) by ϕ and integrating from 0 to ω, we obtain that One can find that Substituting (3.10) into (3.9), we get which implies that a contradiction. Finally, we show that Since and are negative for and , the condition (3.6) implies that . Hence, for and for any , Suppose that there exist and such that , that is, Multiplying the first equation of (3.11) by ϕ and integrating from 0 to ω, we obtain that One can get that Substituting (3.13) into (3.12), we get a contradiction. From Lemma 2.3 it follows that Φ has a fixed point . Furthermore, and , which means that is a positive solution of Eq. (1.1). The proof is complete.□ Corollary 3.1 Assume that , , , and here on J.
Then (1.1) has at least one positive solution. Corollary 3.2 Assume that there exists a constant α such that , ( , α and ∞) and here on J. Then there exists one open interval such that (1.1) has at least two positive solutions for . Example 1 Consider the equation here and . Since , and , by Theorem 3.1, (3.14) has at least one positive solution for any . Example 2 Consider the equation It is well known that, for the problem consisting of the equation , , and the boundary condition the first eigenvalue is 0 (see, for example, [[19], p.428]). It follows that the first eigenvalue is for the problem consisting of the equation and the boundary condition (3.16). Meanwhile, we can obtain the positive eigenfunction corresponding to . It is also easy to check that , , and (here ). So, the right-hand side of the inequality in Corollary 3.2 is obviously satisfied. Considering the monotonicity of and , we can choose a sufficiently small positive constant α such that the left-hand side of the inequality is true. Therefore, by a direct application of Corollary 3.2, there exists one open interval such that (3.15) has at least two positive solutions for . Authors’ contributions All authors contributed equally to the manuscript and read and approved the final manuscript. The authors would like to thank the anonymous referees for their helpful comments and suggestions, which led to the improvement of the presentation and quality of this work. This research was partially supported by the NNSF of China (No. 11001274, 11171085) and the Postdoctoral Science Foundation of Central South University and China (No. 2011M501280).
{"url":"http://www.boundaryvalueproblems.com/content/2013/1/3?fmt_view=classic","timestamp":"2014-04-17T04:11:03Z","content_type":null,"content_length":"228766","record_id":"<urn:uuid:a9b6ef3d-1da1-407c-a143-51b4a5198f2c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
vector angle

May 10th 2009, 09:58 AM #1
two forces $F_1$ N and $F_2$ N are given by $F_1 = 24i + 32j - 42k$ $F_2 = -3i - 72j - 30k$ a) calculate, correct to the nearest degree, the angle between the directions of $F_1$ and $F_2$ so using the dot product formula $F_1 \cdot F_2 = |F_1||F_2|\cos\theta$ I end up with $\frac{-1332}{58 * 3\surd677} = \cos\theta$ $\therefore \theta = 72.89$ I think i have done this right however the angle i inverse cos is negative, so do I draw up a cast diagram to find the angle? if so is it the angle 180 - 72.98?

May 10th 2009, 10:10 AM #2
I think that you made a mistake in the dot product. I get $-1116$. Recall that if the dot product is negative the angle is obtuse.

May 10th 2009, 10:22 AM #3
yes i mis-calculated. i now get the angle 75.7. so I'm thinking I do 180-75.7 = 104.3 ?
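For anyone checking the arithmetic in this thread, here is a short Python sketch (standard library only) using the two force vectors from the first post. Note that `acos` of a negative cosine already returns the obtuse angle, so no separate 180° adjustment is needed once the corrected dot product is used:

```python
import math

F1 = (24, 32, -42)
F2 = (-3, -72, -30)

dot = sum(a * b for a, b in zip(F1, F2))   # 24*(-3) + 32*(-72) + (-42)*(-30)
m1 = math.sqrt(sum(a * a for a in F1))     # |F1| = sqrt(3364) = 58
m2 = math.sqrt(sum(b * b for b in F2))     # |F2| = sqrt(6093) = 3*sqrt(677)

theta = math.degrees(math.acos(dot / (m1 * m2)))
print(dot)            # -1116, the value given in the second post
print(round(theta))   # 104 (to the nearest degree)
```

This confirms the thread's final figure of about 104.3°.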
{"url":"http://mathhelpforum.com/calculus/88406-vector-angle.html","timestamp":"2014-04-17T21:44:24Z","content_type":null,"content_length":"36360","record_id":"<urn:uuid:00408bf4-3236-4cff-ab75-65dd3612911e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
Volume of Gr(2,4)

I was wondering if anybody can direct me to a paper or a book regarding the volume of $Gr(2,4)$ or generic complex Grassmannian manifolds of order $k$. My own heuristic method seems not to work! It is based on the adaptation of the same procedure one has to follow for finding the volume of complex projective spaces $\mathbb{C}P^n$ using the Hopf fibration $\mathbb{C}P^n\cong S^{2n+1}/S^1$. Here the volume can be roughly given by dividing the volume of the $(2n+1)$-sphere by the volume of $S^1$. Therefore, in analogy with this example, we can estimate the volume of $Gr(k,n)$ by dividing the volume of $U(n)$ by that of $U(n-k) \times U(k)$, which gives me $12\pi^4 r^{16}$ for $Gr(2,4)$ where $r$ is the radius of $S^1$, and I don't like it because $Gr(2,4)$ is $8$ dimensional! Thanks in Advance AB

2 Answers

Check section 9.1.2 of these notes. There I compute the volumes of real Grassmannians. A similar computation works in the complex case. Update Using the description $\mathrm{Gr}\;(k,N)\cong U(N)/(U(k)\times U(N-k))$ and a bi-invariant metric on $U(N)$, this induces bi-invariant metrics on $U(k),U(N-k)\subset U(N)$ and an invariant metric on $\mathrm{Gr}(k,N)$. The volume of $\mathrm{Gr}(k,N)$ with respect to this metric is $$ {\rm vol}\; \mathrm{Gr}\;(k, N)= \frac{ {\rm vol}\; U(N)}{{\rm vol}\; U(k)\cdot {\rm vol}\; U(N-k)}. $$ The volume of a compact Lie group $G$ with respect to a bi-invariant metric $g$ was computed by I.G. Macdonald,
For the Lie group $U(n)$ this takes the form $$ {\rm vol}\; U(n)=\frac{1}{(2P_n)^2(2\pi)^n}\times {\rm vol}\; T^n\times \prod_{k=1}^n {\rm vol}\;S^{2k-1}, $$ where ${\rm vol}\; T^n$ denotes the volume of the maximal torus of $U(n)$ equipped with the induced bi-invariant metric, and $P_n$ is the product of the lengths of the positive roots of Well I found out that I made a mistake in calculating the radius part and the correct result is $12\pi^4 r^8$. The method you follow leads to a formula in proposition 9.1.12 which is very similar to that of mine above for the complex case. But I derived it by following the methodology I explained and I want to make sure fast if it is true! Could you explain if there is a quick way to reach the result for the complex case out of your computation? Or I have no choice but to spend much time to calculate Haar measure and stuff? – Alireza Apr 27 '13 at 17:08 2 Using the invariance it suffices to compute only the volume of $U(n)$, but you have to do that consistently. Here Weyl integration formula helps. – Liviu Nicolaescu Apr 27 '13 at add comment The volume of a Grassmanian can be computed using Wirtinger's theorem: The volume of a $p$-dimensional complex submanifold $S$ of a complex Hermitian manifold $(X,\omega)$ is up vote 4 $$ \frac{1}{p\!}\int_S\omega^p. $$ down vote If $X=\mathbb{CP}^N$ the integral is equal to the degree of $S$ times the volume of $X$. Thus up to normalization factors, the volume of the Grassmanian $Gr(k,n)$ is its degree in the Plücker embedding $$Gr(k,n)\subset \mathbb{CP}^N, N=\binom{n}{k}-1.$$ Thanks for bringing this into play. Since for $ B_r= \{z \in \mathbb{C}^p,|z|<r}\}$ the formula $Vol(B_r)=\frac{1}{p!}\int_{B_r}\omega^p$ gives $r^{2p}/{p!}$ we should introduce in it the normalization factor ${\pi}^{-p}$ by hand. Does this mean that the same normalization factor can be applied for the volume of any Grassmanian as well? 
– Alireza Apr 29 '13 at 2:04

Comment: The thing that I don't get is that for $k=2$ and $n=4$ this gives $Gr(2,4)\subset X=\mathbb{CP}^5$! There would be no problem if your $N=5$ were $N=4$ from the embedding point of view; yet your argument that the degree of $Gr(2,4)$ times $\mathrm{Vol}(\mathbb{CP}^5)$ gives the volume of the Grassmannian seems unjustified to me, because one is $5$-dimensional (complex) and the other $4$-dimensional, and the degree of a submanifold of a Kählerian manifold is dimensionless, if I'm not mistaken. – Alireza Apr 29 '13 at 16:29
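As a quick check of the second answer's reduction to the Plücker degree (an illustrative aside, not part of the original thread): the degree of $Gr(k,n)$ in its Plücker embedding has the classical closed form $\deg Gr(k,n) = \bigl(k(n-k)\bigr)!\,\prod_{i=0}^{k-1} i!/(n-k+i)!$, a standard fact stated here without proof. A few lines of exact-arithmetic Python evaluate it; for $Gr(2,4)$ it gives $2$.

```python
from math import factorial
from fractions import Fraction

def plucker_degree(k, n):
    """Classical formula for the degree of Gr(k, n) in its Pluecker
    embedding: (k(n-k))! * prod_{i=0}^{k-1} i! / (n-k+i)!"""
    d = Fraction(factorial(k * (n - k)))
    for i in range(k):
        d *= Fraction(factorial(i), factorial(n - k + i))
    assert d.denominator == 1  # the formula always yields an integer
    return int(d)

print(plucker_degree(2, 4))  # 2
print(plucker_degree(1, 5))  # 1: Gr(1,5) = CP^4 is a linear subspace
print(plucker_degree(2, 5))  # 5
```

Using `Fraction` keeps the intermediate products exact, so no floating-point rounding can corrupt the integer result.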
Convert kilograms per (meter cubed) to pounds per (foot cubed) - Conversion of Measurement Units

Convert kilogram/cubic metre to pound/cubic foot

More information from the unit converter

How many kilograms per (meter cubed) in 1 pounds per (foot cubed)? The answer is 16.018463374. We assume you are converting between kilogram/cubic metre and pound/cubic foot. You can view more details on each measurement unit: kilograms per (meter cubed) or pounds per (foot cubed). The SI derived unit for density is the kilogram/cubic meter. 1 kilogram/cubic meter is equal to 0.0624279605761 pounds per (foot cubed). Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between kilograms/cubic meter and pounds/cubic foot. Type in your own numbers in the form to convert the units!

Metric conversions and more

ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!
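The factor quoted above can be reproduced from the exact definitions of the avoirdupois pound (0.45359237 kg) and the international foot (0.3048 m); a minimal Python sketch (the function names are illustrative, not part of the site):

```python
# Derive the kg/m^3 <-> lb/ft^3 conversion factor from base definitions.
POUND_KG = 0.45359237  # kilograms per pound (exact by definition)
FOOT_M = 0.3048        # metres per foot (exact by definition)

LB_PER_FT3_IN_KG_PER_M3 = POUND_KG / FOOT_M**3  # ~16.018463374

def kg_per_m3_to_lb_per_ft3(rho):
    """Convert a density from kg/m^3 to lb/ft^3."""
    return rho / LB_PER_FT3_IN_KG_PER_M3

def lb_per_ft3_to_kg_per_m3(rho):
    """Convert a density from lb/ft^3 to kg/m^3."""
    return rho * LB_PER_FT3_IN_KG_PER_M3

print(f"{LB_PER_FT3_IN_KG_PER_M3:.9f}")      # 16.018463374
print(f"{kg_per_m3_to_lb_per_ft3(1):.12f}")  # 0.062427960576
```

Both printed values match the figures quoted by the converter, which shows the site's factors come straight from the legal unit definitions.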
SIAM Challenge Announcement

A team of one undergraduate, two mathematics graduate students, one continuing education student and two faculty members from the Department of Mathematical Sciences at the University of Delaware scored a perfect 100 in Prof. Nick Trefethen's "100-Dollar, 100-Digit Challenge." The challenge first appeared in the January/February 2002 issue of SIAM News. It consisted of ten questions for which there was no known way to express the solution in terms of elementary quantities; the task was to find the first ten digits of the answer to each. Prof. Trefethen's original posting stated that he would be impressed with any team that found 50 correct digits. A copy of the original article is posted on the SIAM web site. Prof. Trefethen's web site has a list of the other teams earning perfect scores as well as the five second-place winners who scored 99s. At the conclusion of the contest, twenty teams submitted perfect scores. The University of Delaware team was recognized as one of three to receive the $100 reward because the solution was the product of a unique collaboration between undergraduates, graduates and faculty. At the time of the original posting, Profs. Toby Driscoll and Lou Rossi were teaching graduate and undergraduate numerical analysis courses and sought to spark more interest in the topic by answering the challenge. Soon, other students in the department saw the problems and were drawn into the group. Thus, continuing education student Jonathan Leighton, undergraduate Eli Faulkner, and graduate students Carl DeVore and Sven Reichard joined the core group. Interestingly, the only numerical analysts on the team were faculty advisors Driscoll and Rossi. Eli Faulkner is interested in topology. Graduate students DeVore and Reichard are candidates in the discrete mathematics group. Jon Leighton has a variety of interests in applied mathematics and solid mechanics.
The team quickly found that direct numerical attacks on several of the problems would require prohibitive amounts of CPU time. Some problems featured very slowly converging series or very large matrices. Other problems were dangerously close to, or beyond, the limits of double-precision arithmetic. While the team made heavy use of mathematical software, including Maple and Matlab, it was insight and craftiness that transformed the inaccessible into the routine in almost every problem. In the end, all of the team's solutions required at most a few minutes of CPU time.

© 2002, Department of Mathematical Sciences
Posts by Posts by Jake Total # Posts: 1,682 Ag2S is an insoluable black solid. Would more solid dissolve, or precipitate once the following are added to the solution. 1. KS-----I know [S] increases, shift to reactants side, precipitate 2. HClO 3. LiOH 4. NH4OH Twenty people apply for seven jobs that are available at fly by night aircraft company 12 are men and the rest are women. How many seven person groups have exactly three men? How many of these seven person groups have exactly three men or three women? Twenty people apply for seven jobs that are available at fly by night aircraft company In how many ways could such a group be composed? After the seven have been selected, in how many different ways could five of them be assigned to the assembly department? The boundary between 2 plates moving together is calle _______. write sin4xcos2 as the sum or difference of two functions. answers: 1/2(cos6x+cos2x), 1/2(cos2x-cos6x), 1/2(sin6x+sin2x), sin6x-sin2x find the angle between vector U=<2,3> and V=<1,-5> answers: 88 degrees, 45 degrees, 135 degrees, or 92 degrees...? A solution of formic acid 0.20 M has a pH of 5.0. What is its Ka value? a) 5.0 x 10-10 b) 1.0 x 10-5 c) 5.0 x 10-5 d) 25 Mols benzoic acid = 12.2g/122.12g = 0.0999 Int. Concentration of Benzoic acid = 0.0999 mol/ 0.500L = 0.1998 M Ka = 6.3 x 10-5 = (x)(x)/0.1998 x X = [H+] = * M pH = -log(*) = * Sorry, this question is just really confusing me.. Thanks so much, you're an awesome help. Mols benzoic acid = 12.2g/122.12g = 0.0999 Int. Concentration of Benzoic acid = 0.0999 mol/ 0.500L = 0.1998 M Ka = 6.3 x 10-5 = (x)(x)/0.1998 x X = [H+] = * M pH = -log(*) = * Sorry, this question is just really confusing me.. Calculate the pH of a solution prepared by dissolving 12.2 g of benzoic acid in enough water to produce a 500 mL solution. Ka = 6.3 x 10^-5 Can a solution of CrCl3 be stored in an aluminum container? 
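The "twenty people apply for seven jobs" question in this list (12 men, 8 women, groups of seven) is a direct binomial-coefficient computation; as an illustrative check (not part of the original posts):

```python
from math import comb

men, women, group_size = 12, 8, 7

# Groups of 7 with exactly three men (and hence four women):
exactly_3_men = comb(men, 3) * comb(women, 4)

# "Exactly three men or exactly three women" (three women means four men):
three_men_or_three_women = exactly_3_men + comb(men, 4) * comb(women, 3)

print(exactly_3_men)             # 15400
print(three_men_or_three_women)  # 43120
```

The two events "exactly three men" and "exactly three women" are disjoint here (a seven-person group cannot have three men and three women), so their counts simply add.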
Using the following spontaneous reactions, classify the 3 metals involved (Cr, Sn, Al) according to increasing reductant properties. a) 2Cr + 3Sn2+ 2Cr3+ + 3Sn b) Al + Cr3+ Al3+ + Cr Which pair of ions will be most likely to react at standard state conditions? 1. a) MnO4- and Sn2+ 2. b) Zn2+ and Cl- 3. c) I- and Cu 4. d) Fe3+ and Cu2+ If a piece of copper metal is dipped into a solution containing Cr3+ ions, what will happen? Explain with Ev values Pb(NO3)2(aq) 2 x 10-3 M Na2SO4(aq) 2 x 10-3 M Pb(NO3)2(aq) + NaSO4(aq) → PbSO4(s)+ 2NaNO3(aq) a) What are the concentrations of each ion within this solution? b) If 1L of each solution is mixed; will a precipitate form? why? Spanish-8th grade Help! Thank you- I think what I put is right- I've rechecked it but it seems so odd the way I have it.I know it's a reflexive so maybe it is correct. Soanish-8th grade Help! I'm trying to conjugate this in the past tense tú/presentarse would I conjugate it in past tense like this- (tú te presentaste) Something looks wrong with it what is the first step in solving the equation x/5 - 9=6? Physics Help So is there no shift? and if so why is that the case? Physics Help If we witness events taking place on the moon, where gravitation is weaker than on Earth, would we expect to see a gravitational red shift or a gravitational blue shift? and Explain please a)blue shift b)red shift c)no shift The following reaction has a Delta G value of 42.6 kJ/mol at 25oC: HB(aq) +H2O(l) --> H3O+(aq) +B-(aq) Calculate Ka for the acid HB. Physics Help!! Comparing Einstein's and Newton's theories of gravitation, can the correspondence principle be applied? and Why or Why not? a)Yes b)No Physics Help please If we witness events taking place on the moon, where gravitation is weaker than on Earth, would we expect to see a gravitational red shift or a gravitational blue shift? and Explain please a)blue shift b)red shift c)no shift A 100.0-mL sample of 0.250 M aniline (C6H5NH2) is titrated with 0.500 M HCl. 
What is the pH at the equivalence point? You take 326 g of a solid (melting point = 57.6oC, enthalpy of fusion = 346 J/g) and let it melt in 757 g of water, and the water temperature decreases from its initial temperature to 57.6oC. Calculate the initial temperature of the water. (Note that the specific heat ca... Physics Help!! A passenger on an interplanetary express bus traveling at v = 0.94c takes a seven-minute catnap by his watch. How long does the nap last from your vantage point on a fixed planet? Physics Help please A bus moving with speed 0.93c is 65 feet long according to its passengers and driver. What is its length from your vantage point on a fixed planet? Thanks it worked perfectly there isnt a kp or a kc Nitrogen gas (N2) reacts with hydrogen gas (H2) to form ammonia (NH3). At 200 oC in a closed container, 1.1 atm of nitrogen gas is mixed with 2.1 atm of hydrogen gas in an otherwise empty container. At equilibrium, the total pressure is 2.2 atm. Calculate the partial pressure ... Green light is emitted when electrons in a substance make a particular energy-level transition. If blue light were instead emitted from the same substance, would it correspond to a greater or lesser change of energy in the atom? and why? Gaseous butane will react with gaseous oxygen to produce gaseous carbon dioxide and gaseous water . Suppose 16.9 g of butane is mixed with 39. g of oxygen. Calculate the minimum mass of butane that could be left over by the chemical reaction. Round your answer to 2 significant... Gaseous butane will react with gaseous oxygen to produce gaseous carbon dioxide and gaseous water . Suppose 16.9 g of butane is mixed with 39. g of oxygen. Calculate the minimum mass of butane that could be left over by the chemical reaction. Round your answer to 2 significant... if you invest 500 dollars in a savings account that pays 8% interest per year, compounded quarterly, how much will you have in the account at the end of 11.6 years? 
What type of statistical information would I include in my report to determine our year-to-date performance of our division; example, mean, median, or standard deviation. Life orientation It damages communities In an election, ethan got 5 fewer votes than christohper,who got 3 more votes than olivia, who got 4 fewer votes than Ava. How many more votes did ava get than ethan ? Given P(x) = x^3 - 3x^2 - 5x + 10 a evaluate P9x) for each integer value of x from -3 through 5 b. Prove that 5 is an upper bound on the zeros of P(x) Two masses, m1=3kg and m2=2kg, are suspended with a massless rope over a pulley of mass M = 10kg. The pulley turns without friction and may be modeled as a uniform disk of radius R=.1m. You may neglect the size of the masses. The rope does not slip on the pulley. The system be... Suppose vectors a and b are vectors such that a x b =(3,1,4). What is the cross product of twice of a with twice of b? Calculus/Physics Please help!! An intravenous line provides a continuous flow of drug directly into the blood. Assuming no initial drug in the blood, the amount of drug in the blood t hours after the dosing begins in m(t) - (a/k) (1-e^-kt), for t (=>)0, where k is the rate constant (again related to half ... suppose m(0) milligrams of a drug are put in the blood of an injection. The amount of drug t hours after the injection is given by m(t)=m(o)e^-kt, for t (=>) 0, where k is the rate constant, which is related to the half life. we also treat oral administration of drugs as an... Could someone please explain on how to do this question step-by-step. Thank you! A bolt is being loosened by a 40 cm wrench. The torque in this situation has a magnitude of 12 J and the force makes an angle of 60 degree with the wrench. What is the magnitude of the force used ... suppose m(0) milligrams of a drug are put in the blood of an injection. 
The amount of drug t hours after the injection is given by m(t)=m(o)e^-kt, for t (=>) 0, where k is the rate constant, which is related to the half life. we also treat oral administration of drugs as an... suppose m(0) milligrams of a drug are put in the blood of an injection. The amount of drug t hours after the injection is given by m(t)=m(o)e^-kt, for t (=>) 0, where k is the rate constant, which is related to the half life. we also treat oral administration of drugs as an... I checked in the book answer and the site u gave me matched with the answer. How did the website get solutions as: x= -3.28183 x= -1.863 Use the algorithm for curve sketching to sketch the graph of each function. a) f(x) 4x^3+6x^2-24x-2 1. First to find intercepts y=0 0=4x^3+6x^2-24x-2 =2(2x^3+3x^3-12x-1) I don't know how to find the x-intercept. I can't use quadratic formula or synthetic division :S Math help So do I draw lines through the vertices? Math help Sketch the graph: (x+2)^2/25-(y+4)^2/25=1 I don't understand this at all! When I tried graphing it I just had two lines going through the center, which is (-2,-4). But it should be a hyperbola, right? Could anyone help me? World History During the Columbian Exchange what diseases were brought from the new world to Europe? Thank you!!! How much does it cost to operate a 100 W lamp continuously for 1 week if the power utility rate is 8/kWh? answer in $ Physics Need Help PLEASE okay great!!! Thank you again!!!! you do a wonderful job!!!! Physics Need Help PLEASE its looking for the N/C I got 4*10^-21 but I'm not getting it correct Physics Need Help PLEASE Thank you so much Elena that helped me out a lot! I was wondering if you could also help me with another question. A droplet of ink in an industrial ink-jet printer carries a charge of 6 10-13 C and is deflected onto paper by a force of 2.4 10-7 N. Find the strength of the ele... Physics Need Help PLEASE Two point charges are separated by 8 cm. The attractive force between them is 14 N. 
If the two charges attracting each other have equal magnitude, what is the magnitude of each charge? Which plane goes through the origin and is perpendicular to the line r=(2,-2,1) + s(2,3,-4), seR? a) 2x-2y+z=0 b) 2x+3y-4z=0 c) 2x+3y+z-4=0 d) none of the above. I got D, none of the above. I have substituted and couldn't find the answer equal to zero. Are the points P(1,2), Q(7,3) and R(-2,1) o the line with vector equation r=(19,5) +s(6,1), SeR which pair of lines are perpendicular? a) 2x + 3y - 7=0, -3x - 2y + 2=0 b) 2x - 3y + 2=0, 5x - 7y + 2=0 c) -4x + y - 3=0, x + 4y=0 d) -5x + y + 1=0, 5x + y + 1=0 Please explain. I know we have to use the dot product :3 engineering mechanics 14 degrees engineering mechanics 14 degrees A rusted bolt requires 30 J of torque to be loosened. If only 40 N of force is applied to a wrench with which it makes a 90° angle, how long is wrench? Determine the value(s) of k such that the area of the parallelogram formed by vectors a = (k+1, 1,-2) and b =(k,3,0) is [sqrt(41)] For which value(s) of k will the dot product of the vectors (k,2k-1, 3) and (k,5,-4) be 7? I did this so far, 2k-1=5 6/2 k=3 College Chemistry Write three equations (including ΔH values) that represent the heats of formation for each C6H12O6(s), CO2(g), and H2O(l). (Hint: look up heats of formation for each participant) Two force vectors act on an object and the dot product of the two vectors is 20. If both of the force vectors are doubled in magnitude, what is their new dot product? Sorry for the confusion. May I ask why my answer does not work? :S its asking for the vertices, not equal to Point P. Why add? I am bit of confused from this question. I have shown the work below and to subtract, am I supposed to be X2 - X1, Y2-Y1, Z2-Z1? or X1-X2, Y1-Y2, Z1-Z2? ------------------------------------ A triangle has sides formed by the vectors PA=(2,7,3) and PB=(6,2,2). The point P=(1,3,... 
A box of mass m=1.5 kg is attached to a spring with force constant k=12 and suspended on a frictionless incline that makes a 30 degree angle with respect to the horizontal. With the spring in its unstretched length, the box is released from rest at x=0. The box slides down the... I am bit of confused from this question. I have shown the work below and to subtract, am I supposed to be X2 - X1, Y2-Y1, Z2-Z1? or X1-X2, Y1-Y2, Z1-Z2? ------------------------------------ A triangle has sides formed by the vectors PA=(2,7,3) and PB=(6,2,2). The point P=(1,3,... 1. Which of the following sets of vectors spans [r^2] ? a) {(1,1), (-2,-2)} b) {(1,1), (1,2)} c) {(1,2), (1/2,2)} d) {(-1,1), (1,-1)} 2. Which of the following sets of vectors spans [r^3] ? a) {(1,1,1), (2,2,2)} b) {(1,3,1), (2,2,2)} c) {(1,2,1), (1/2,1,1/2)} d) {(1,3,2), (-1,... Point A=(1,3,4) and point B=(-2,2,0). Determine AB. a) (3,1,4) b) (-3,-1,-4) c) (-1,5,4) d) (1,5,4) I chose answer C. 2. A goes from (2,1) to (4,-1) Determine the components of A. a) (6,0) b) (-2,2) c) (2,-2) d) (0,6) Vector D represents: URL is here imgur dot com/AX4EN a) A+B b) A-B c) C+B d) A+C I chose answer A. Is that the correct answer? How many moles of nitrogen are needed to produce 1.52 moles of nitrogen (ll) oxide ? N2 + O2 > FeCL3 + 3H20 Use the substitution method to solve -x + 3y=24 5x +8y=-5 Calculus and vectors |C|=5 and |D|=8. The angle formed by vectors C and D is 35 degrees, and the angle formed by vectors A and C is 40 degrees. Determine |B|. Could someone explain this question 6/8 pizza left, which can be reduced to 3/4. Calculus and vectors-Help~David Thank you very much! Calculus and vectors-Help~David Rectangular prism is defined by vectors (2,0,0)(0,9,5) and (0,0,3). Find the volume. I have done this so far, would anyone please verify this. (2,0,0)(0,9,6) (0-2, 9-0, 6-0) =(-2,9,6) (-2,9,6)(0,0,3) =(2,-9,-3) (2)(-9)(-3)=54 cm^3 Therefore volume is 54 cm^3 Is B=(2,-1,-6)? If A=(1,2,4) and Vector A+B =(3,1,-2). 
What is B? Calculus and vectors @ Bob Purs I multipled by 2 since its a rectangle, there is an example of similiar to this question but its a triangle. Calculus and vectors @ Bob Purs Rectangle side is represented by (2,3) and (-6,4). Find the perimeter of rectangle. Is this correct? (-6-2), 4-3) sqrt(8^2+1^2) sqrt(65) =8.062257748(2) =16.124? Is this correct? ((-6-2), 4-3) sqrt(8^2+1^2) sqrt(65) =8.062257748(2) =16.124? Rectangle side is represented by (2,3) and (-6,4). Find the perimeter of rectangle. Do I have to find the magnitude of (2,3) and (-6,4) and square root the answer? Thank you for answering. Don't I have to find the magnitude then use square root of the answer and multiply by 2? If so, could you show it. Rectangle side is represented by (2,3) and (-6,4). Find the perimeter of rectangle. Could someone please help~ Calculus and vectors Could someone explain to me how I can find the area/perimeter/volume of a rectangular prism if I am given R^3 points. For example find area of rectangular prism given pts (2,3,4)(5,3,2), similar to Calculus and vectors Bob can swim at a rate of 5km/h. He is in a river that is flowing at a rate of 9 km/h. a) if bob swims upstream, what is his relative velocity to the ground? b)if bob swims downstream, what is his relative velocity to the ground? c) Someone decides to help Bob out of the water... Bob can swim at a rate of 5km/h. He is in a river that is flowing at a rate of 9 km/h. a) if bob swims upstream, what is his relative velocity to the ground? b)if bob swims downstream, what is his relative velocity to the ground? c) Someone decides to help Bob out of the water... Bob can swim at a rate of 5km/h. He is in a river that is flowing at a rate of 9 km/h. a) if bob swims upstream, what is his relative velocity to the ground? b)if bob swims downstream, what is his relative velocity to the ground? c) Someone decides to help Bob out of the water... Why doesn't water undergo electrolysis? 
Why does Na2SO4 undergo electrolysis? verify that 2/1+cos theta - tan squared (theta/2) = 1 Multiplying a vector by a scalar results in: a) a scalar b) a perpendicular vector c) a collinear vector c) a parallel scalar Sabrina and John cut a pie into 4 slices. Sabrina eats 2 slices. John eats the rest. What fraction names the part of the pie that John eats? Among humans, increased interest in food intake normally occurs _____. only after the production of glucose in the liver can no longer meet metabolic needs when fewer calories are taken in than are expended, but only after the body depletes its reserves of fat in the liver via... SCIENCE/ earth Thanks Ms Sue!!! SCIENCE/ earth plate tectonics crossword puzzle what Qs and As can i use??????? PLZZZZZ HELP ME!!!!!!!!!!!!! Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
G is a group with |G|=mp where p is a prime and 1<m<p. Prove that G is not simple.

February 28th 2011, 02:30 PM
Suppose that $G$ is a group with $|G| = mp$ where $p$ is a prime and $1 < m < p$. Prove that $G$ is not simple.

February 28th 2011, 05:17 PM
How many Sylow $p$-subgroups can $G$ have?

February 28th 2011, 05:44 PM

February 28th 2011, 05:54 PM
There are always Sylow $p$-subgroups. In this case, it follows immediately from Sylow's theorems that there is a unique one. Unique Sylow subgroups are normal. But as you say, you haven't covered that yet, so I'm sure you're expected to make a more direct argument.

February 28th 2011, 07:01 PM
Since $p \mid mp$, there is a subgroup $H$ of order $p$. A (sub)group of order $p$ is cyclic, thus $H$ is Abelian, thus normal. Since we have a normal subgroup $H\neq\{e\}$ and $H\neq G$, $G$ is not simple. Is this argument correct?

February 28th 2011, 07:04 PM
Abelian subgroups need not be normal. But I think you're on the right track considering the cyclic subgroup generated by an order-$p$ element.

March 1st 2011, 11:32 AM

March 1st 2011, 10:26 PM
Let $H\leqslant G$ be such that $|H|=p$. It is trivial that there is a homomorphism $\phi:G\to \text{Sym}\left(G/H\right)$ given by $\phi_g(aH)=gaH$. Moreover, one can prove that $\ker\phi\subseteq H$. Now, since $p$ is prime and $\ker\phi\leqslant H$, we must have that $\ker\phi=\{e\}$ or $\ker\phi=H$. Suppose that $\ker\phi=\{e\}$; then $\text{im}(\phi)$ is a subgroup of $\text{Sym}\left(G/H\right)$ of order $mp$, and so $mp\mid m!$; but since $p$ is prime and $m<p$ this is impossible. Thus, $H=\ker\phi$, and so $\{e\}\triangleleft H\triangleleft G$, so that $G$ isn't simple.
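The divisibility step at the end of that proof, that $mp$ cannot divide $m!$ when $p$ is prime and $1<m<p$, is easy to spot-check numerically; a small illustrative script (not from the thread):

```python
from math import factorial

def is_prime(n):
    """Trial-division primality test, fine for small n."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# If p is prime and 1 < m < p, the prime p cannot appear in the
# factorization of m!, so m*p cannot divide m! = |Sym(G/H)| either.
for p in (q for q in range(3, 60) if is_prime(q)):
    for m in range(2, p):
        assert factorial(m) % (m * p) != 0

print("checked: mp never divides m! for primes p < 60 and 1 < m < p")
```

This is only a sanity check of the arithmetic fact; the real content of the proof is the action of $G$ on the cosets $G/H$.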
Avondale Estates Precalculus Tutor ...I understand that not every student is the same, and that we all learn at a different pace; therefore, I do my best to accommodate lessons to suit the student's needs.I enjoy tutoring Math. I have taken and successfully passed Math up to Calculus 2. In the past, I have tutored Middle School Alg... 29 Subjects: including precalculus, chemistry, reading, Spanish ...I am certified to teach in both PA and GA. I have teaching experience at both the Middle School and High School level in both private and public schools. I have chosen to leave the classroom to tutor from home so that I can be a stay at home mom. 10 Subjects: including precalculus, geometry, algebra 1, algebra 2 ...My name is Jessica Coates and I am currently a graduate student at Emory University. I am working to complete my PhD in Microbiology and Molecular Genetics. I received my undergraduate degree (BSc) in Biology with a minor in Mathematics.I am very well versed in several science and math subjects and have had several tutoring experiences. 18 Subjects: including precalculus, reading, geometry, biology ...After earning my B.S. in Mathematics at Georgia State University I was offered the position of Mathematics and Science Lab Supervisor. In that position I continued to tutor students and to train other tutors as well. I love math, and helping others to learn it. 15 Subjects: including precalculus, chemistry, calculus, geometry ...In my experience, working one on one with students is the key to success. I have a masters degree in education from Georgia State University. I have taught science for 27 years at many different levels including physiology. 7 Subjects: including precalculus, chemistry, biology, algebra 1
The sum of digits of prime numbers is evenly distributed (PhysOrg.com) -- On average, there are as many prime numbers for which the sum of decimal digits is even as prime numbers for which it is odd. This hypothesis, first made in 1968, has recently been proven by French researchers from the Institut de Mathematiques de Luminy. A prime number is an integer greater than or equal to 2 that has exactly two distinct natural number divisors, 1 and itself. For example, 2, 3, 5, 7, 11,..., 1789, etc. are prime numbers, whereas 9, divisible by 3, is not a prime number. Numerous arithmetical problems concern prime numbers and most of them still remain unresolved, sometimes even after several centuries. For example, it has been known since Euclid that the sequence of prime numbers is infinite, but it is still not known if an infinity of prime numbers p exists such that p+2 is also a prime number (problem of twin prime numbers). In the same way, it is not known if there exists an infinity of prime numbers, the decimal representation of which does not use the digit 7. Two researchers from the Institut de Mathématiques de Luminy have recently made an important breakthrough regarding a conjecture formulated in 1968 by the Russian mathematician Alexandre Gelfond concerning the sum of digits of prime numbers. In particular, they have demonstrated that, on average, there are as many prime numbers for which the sum of decimal digits is even as prime numbers for which it is odd. The methods employed to arrive at this result, derived from combinatorial mathematics, the analytical theory of numbers and harmonic analysis, are highly groundbreaking and should pave the way to the resolution of other difficult questions concerning the representation of certain sequences of integers. 
Quite apart from their theoretical interest, these questions are directly linked to the construction of sequences of pseudo-random numbers and have important applications in digital simulation and cryptography.

More information: Sur un problème de Gelfond : la somme des chiffres des nombres premiers (On a Gelfond problem: the sum of digits of prime numbers), C. Mauduit, J. Rivat, Annals of Mathematics, Vol. 171 (2010), No. 3, 1591-1646, May 2010, annals.princeton.edu/annals/2010/171-3/p04.xhtml

Comment (May 12, 2010): "but it is still not known if an infinity of prime numbers p exists such that p+2 is also a prime number (problem of twin prime numbers)" -- if that were not the case, then this conjecture, that the sum of digits of prime numbers is evenly distributed, would no longer be true, because the symmetry/evenness would break, right? So, does that lend enough weight to say that indeed an infinity of such primes exists?

Comment (May 13, 2010): Thank you for including the citation for the original article. Unfortunately, those of us too poor to afford the high cost of subscription are prevented from knowledge. The "Haves" win out over the "Have Nots" once again.
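The equidistribution claim in the article is easy to probe empirically; the short Python sketch below (an illustration only, nothing like the Mauduit-Rivat proof technique) counts the digit-sum parities of all primes below one million:

```python
def primes_below(n):
    """Sieve of Eratosthenes: list of all primes p < n."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n, p)))
    return [i for i in range(n) if sieve[i]]

def digit_sum(m):
    """Sum of the decimal digits of m."""
    return sum(int(c) for c in str(m))

primes = primes_below(1_000_000)
even = sum(1 for p in primes if digit_sum(p) % 2 == 0)
odd = len(primes) - even
print(len(primes))                      # 78498 primes below one million
print(even, odd, round(even / len(primes), 4))
```

The even/odd split comes out very close to one half, consistent with the theorem; of course a finite count illustrates, but does not prove, the asymptotic statement.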
How To Learn Multiplication Tables Fast 2012 Leave a Comment Written by Clint Coping With Math Anxiety My palms get sweaty, I breathe too fast, and often I can’t even make my eyes focus on the paper. “I’ve hated math ever since I was nine years old, when my father grounded me for a week because I couldn’t learn my multiplication tables.” … Visit Document Overview Of Learning Fischer/Eden 3 DLC, 2004 Some Claims about Learning people learn best when engrossed in the topic expensive and restricted cheap specialization low high change within a human life time slow fast step 1: ignore the existence of the gadget step 2: make people learn arithmetic, multiplication tables, long … Read Here Learn Multiplication Tables In Less Than 2 Minutes … 0:31 Add to Opening to Rock ‘N Learn: Multiplication Rock 2 by chazmanization 3,772 views 1:35 Add to How to Learn Times Tables Fast by FranciscoSCTan 22,649 views … View Video Curriculum â Key Vocabulary For This Half Term fast. only. many. laughed. its. green. different. let. girl. which. inside. run any. under . hat . snow Numeracy â How to Learn Multiplication Tables. When we talk about learning multiplication facts we mean more than just learning 9 x 7 = 63! … Read Full Source Learn Multiplication – Table 4 – YouTube 3:12 Add to Learn the Times Tables Fast with 7 year old Joshua by brickschool 14,376 views 2:01 Add to 3 TIMES TABLE MULTIPLICATION SONG – FROM “THE N by phoolholy 27,737 views … View Video This carnival math game is ideal for kids looking to perfect their addition and subtraction skills as well as learn multiplication tables. The objective of the game is to pop the balloons with the correct answer before time runs out! Fast paced and fun! … Document Retrieval Faro (card Game) – Wikipedia, The Free Encyclopedia Although not a direct relative of poker, faro was played by the masses alongside its other popular counterpart, due to its fast action, easy-to-learn rules, and better odds than most games of chance. 
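In the spirit of the practice tips collected here, the drilling idea can be sketched as a tiny Python script (an illustration written for this page, not taken from any of the quoted sources): it generates random multiplication facts up to 12 × 12 and checks answers.

```python
import random

def make_questions(n, max_factor=12, seed=None):
    """Generate n random multiplication-fact questions as (a, b) pairs."""
    rng = random.Random(seed)
    return [(rng.randint(2, max_factor), rng.randint(2, max_factor))
            for _ in range(n)]

def check(a, b, answer):
    """True when `answer` is the correct product for a x b."""
    return answer == a * b

def quiz(n=10, max_factor=12):
    """Interactive drill: ask n questions on stdin and report the score."""
    score = 0
    for a, b in make_questions(n, max_factor):
        try:
            guess = int(input(f"{a} x {b} = "))
        except ValueError:
            guess = None
        if check(a, b, guess):
            score += 1
            print("Correct!")
        else:
            print(f"Not quite: {a} x {b} = {a * b}")
    print(f"Score: {score}/{n}")

# Non-interactive demo of the helpers:
qs = make_questions(3, seed=7)
print(qs)  # three (a, b) pairs
print(all(check(a, b, a * b) for a, b in qs))  # True
```

Calling `quiz()` runs the drill at the terminal; fixing `seed` in `make_questions` makes a repeatable question set, which is handy for tracking progress on the same facts.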
… Read Article Slide Rule – Wikipedia, The Free Encyclopedia The dual cursor versions perform multiplication and division by holding a fast angle between the cursors as they are rotated around the dial. Most people find slide rules difficult to learn and use. Even during their heyday, they never caught on with the general public. … Read Article Kenpo As A Method Of Self-defence And As A Method To Defend … Our children no longer learn simple addition and multiplication tables because they are encouraged to use the same calculators. of 30 second sound bites from CNN, and our books come in video form, which we can fast … Content Retrieval Ivy Hill School 4) knowing all multiplication facts, including the turn-around facts (first the 0s, 1s, 2s, 3s, 4s, 5s, and 10s, then the 6s, 7s, 8s, and 9s). As we begin the final eight weeks of third grade, we have lots left to learn! … Access Doc Gamma Function – Wikipedia, The Free Encyclopedia The only fast algorithm for calculation of the Euler gamma function for any algebraic argument Gauss also proved the multiplication theorem of the gamma function and investigated the connection between This approach was used by the Bourbaki group. Reference tables and software … Read Article Keep Them Learning While On Vacation From Sue Watson Practice multiplication tables in the car Select a criteria for counting, count all yellow cars or graph the Make sure you bring maps with you to learn about the area stretches it: ‘I see a fast, rusty, yellow car.’ The child stretches it: I … Read More How To Learn Your Times Tables Fast – YouTube Learn times tables fast by watching this 2 minute video! Go to http://www.mymathsblog.co.uk to see 1:34 Add to How to Learn Times Tables Fast and Easy (Daddy by FranciscoSCTan 4,190 views 6:57 Add to Fastest way to learn the Multiplication facts! by DVLearning 35,169 views … View Video Language Arts-4 Students will solve problems involving multiplication of 2-3 digit numbers by 1-2 digit numbers. 
18 : The Best of Times: Math Strategies that Multiply: Greg Tang: If you're looking for a fun and fast way to learn multiplication (instead of memorizing times tables), look no further! … Read Here Machine Learning, Data Mining Learn multiplication tables; Supervised Learning; Examples are used to help a program identify a concept Highly accurate and fast when applied to large databases; Some links: … Content Retrieval Fortune: I'm Not Turning 40, It's 30-ten So what is the very last column I write in my 30s supposed to be about, anyway? All the stuff I've learned up to now? All the mistakes I've made and how I've grown from them? Bleah. That's so boring. … Read News An Overview Of Learning And Memory As Applied To Learning And … Experts are fast and complex thinkers not because they hold more things in working memory, but We all achieve fluent recall, for example, in middle stages of practicing multiplication tables. Just as when you learn a concept for the first time, study time is best when studying is … Doc Viewer GET ON YOUR FEET! IT'S TIME TO LEARN! Dance Mat Activities fun, fast-paced and highly educational. GET ON YOUR FEET! IT'S TIME TO LEARN! multiples, properties of shapes, multiplication tables • English Language Arts: Word definitions, … View This Document • These tried and tested brightly coloured multiplication Practice Cards make learning tables fun. Packing * £ CHEQUES ARE PAYABLE TO: Mary Peters TOTAL TO PAY £ Send this order form with your cheque to: Learn Fast Cards, 16 Caledon … Fetch Full Source Math Formulas And Math Tables Formulas, quadratic formula, midpoint formula, distance formula, mathematical formulae, tables and Learn how to calculate the area and volume for a variety of shapes. Multiplication Tricks/Resources; Find Area, Perimeter & Volume … Read Article Binary Numbers They are simple to work with – no big addition tables and multiplication tables to learn, just do the same things over and over, very fast.
They just use two values of voltage, magnetism, or other signal, which makes the hardware easier to design and more noise resistant. … Read Document Formulas And Reference Math formulas, reference, calculators and tools. Algebra Formulas; Geometry Formulas; Financial and Business Formulas; Calculus Rules, Functions and Formulas … Read Article Posted in Times Tables - Tagged addition and subtraction, learning multiplication facts, math anxiety, multiplication rock, subtraction skills, video curriculum
{"url":"http://multiplicationtimestablestimes.com/how-to-learn-multiplication-tables-fast","timestamp":"2014-04-19T04:22:26Z","content_type":null,"content_length":"46845","record_id":"<urn:uuid:7a478595-fb83-4409-af3e-6ee8c508865e>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Magnetospheric Multiscale EPO 6-8 Lesson Plans

6-8 Lesson Plans

iMAGINETICspace – MMS Transmedia Book
Enjoy the MMS Transmedia book (Tbook), designed to help students learn about the NASA MMS mission through a variety of inquiry and engineering design methods, including the use of digital
+ View iMAGINETICspace Book

MMS Math Guide
The new MMS Math guide will use examples from the MMS Mission to introduce mathematics in a real-world context to fifth through eighth graders. The main area of mathematics covered in this guide is
+ View MMS Math Guide

Mapping Magnetic Influence
This is a complete teachers' guide on magnetism. It is designed for students to explore magnets and to develop an operational definition of a magnetic "field" and an operational definition for magnetic "pole."
+ Mapping Magnetic Influence Activity | PDF | 31 Pages | 876KB

Exploring Magnetism – Grades 7-9
Students will act as scientists discovering magnetic fields and electromagnetism through inquiry and measurement. Included at the beginning of each session is a summary of the session, a list of national education standards that the session covers, and a list of materials required for the session. Each session is broken into several activities, with each activity outlined for the teacher. In the Background Material section, you can find science background for the lessons. A glossary can be found after the background section. At the end we recommend different resources to help you teach and learn more about magnetism.
+ Go to this lesson

Magnetic Math (2009)
This 188-page book, produced by Dr. Sten Odenwald (Space Math @ NASA), contains six hands-on exercises, plus 37 math problems, which allow students to explore magnetism and magnetic fields. The activities include drawing and geometric construction, and introduce students to the use of simple algebra to quantitatively examine magnetic forces, energy, and magnetic field lines and their mathematical structure.
+ Magnetic Math book | PDF | 114 Pages | 7.2MB

Magnetosphere Graph Tutorial!
This short tutorial shows you how to quickly read a graph that explains the solar wind's effects on Earth's magnetosphere. When the solar wind magnetic field is opposite the Earth's, it is called a southward field and is considered to be negative in sign. You can monitor the ACE data in this graph to identify times when this happens.
+ Magnetosphere Graph Tutorial!

KP Index Tutorial
Every three hours throughout the day, magnetic observatories around the world measure the largest magnetic change that their instruments recorded during this time. The results are averaged together and placed in a chart called the Kp scale. This brief tutorial shows you how to quickly interpret that data!
+ KP Index Tutorial

How Are Magnetic Fields Related To Sunspots?
Students discover that sunspots are the result of intense magnetic forces on the photosphere of the Sun using images from the SOHO spacecraft.
+ How Are Magnetic Fields Related To Sunspots?

Exploring Magnetic Field Lines
When discussing space weather or how Earth's magnetosphere protects us, we often see diagrams with lines wrapping around the globe. What are these lines? Can we see these lines if we were in space looking back at Earth? This activity lets us explore the magnetic field of a bar magnet and serves as a good introduction to understanding Earth's magnetic field. It is also a good way to demonstrate why prominences are always "loops".
+ Download this File | PDF | 2 Pages | 668KB
{"url":"http://mms.gsfc.nasa.gov/epo_6_8_lesson_plans.html","timestamp":"2014-04-17T18:52:56Z","content_type":null,"content_length":"13343","record_id":"<urn:uuid:e56bce0c-1fcd-40db-b2f0-8d03f3eec7de>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: Which naturals better? Date: Feb 7, 2013 9:43 PM Author: Brian Q. Hutchings Subject: Re: Which naturals better? Descartes gave us the screw-up on Snell's law, apparently copied by fignewton in his "theory of light;" this is in RD's correspondence with Fermat, the true author of the theory of numbers (or, modular arithmetic) -- hint to the OP. anyway, the real question is, What is the canonical digital representation for base-one accounting? (prove by induction, please; thank you .-) > Ever since Rene de Carte,
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8264395","timestamp":"2014-04-21T15:49:51Z","content_type":null,"content_length":"1427","record_id":"<urn:uuid:30f655fd-88d5-499d-88e6-e1b838dd1367>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Find Number Given Divisor and Remainder Information

Date: 01/21/2009 at 00:06:19
From: Mark
Subject: Problem solving

A four digit number N leaves remainder 10 when divided by 21, remainder 11 when divided by 23 and remainder 12 when divided by 25. What is the sum of the digits of N? The given possible answers are 7, 13, 16, 19 and 22.

I began with 7 and tried to make possible combinations like 1, 6; 2, 5; 3, 4. With 1 and 6 the four digit number combinations could be 1006, 6001, 1060, 1600, etc. Thus there are innumerable possible combinations, and it will take a long time to try them all. Is there a faster way to solve a problem like this?

Date: 01/21/2009 at 01:24:04
From: Doctor Greenie
Subject: Re: Problem solving

Hi, Mark -

Working backwards from the given answer choices does indeed give far too many possibilities to be practical....

There are formal mathematical methods for solving general problems like this (certain remainders with certain divisors); but I have only passing knowledge of them. But often problems like this contain patterns that make it relatively easy to solve the problem. Your example is such a problem.

The divisors and the remainders we get when we divide our number N are

  divisor   remainder (when dividing "N")
     21         10
     23         11
     25         12

The divisors increase by 2 from one to the next; and the remainders increase by 1. So let's double our number N and see what happens when we divide 2N by these same divisors:

  divisor   remainder (when dividing "2N")
     21         20
     23         22
     25         24

Now the remainder in every case is 1 less than the divisor. But that means the number 2N+1 is evenly divisible by 21, 23, and 25. These divisors have no common factors; so we must have

  2N+1 = 21*23*25

This easily leads us to the answer to the question.

I hope this helps. Please write back if you have any further questions about any of this.

- Doctor Greenie, The Math Forum
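As a check on the reasoning above (an editorial addition, not part of the original exchange): 2N + 1 = 21 * 23 * 25 = 12075, so N = 6037, and the digit sum is 6 + 0 + 3 + 7 = 16. A short brute-force search confirms 6037 is the only four-digit solution:

```python
# Find every four-digit N with the remainders stated in the problem,
# then sum the digits of the (unique) solution.
candidates = [n for n in range(1000, 10000)
              if n % 21 == 10 and n % 23 == 11 and n % 25 == 12]

print(candidates)                               # [6037]
print(sum(int(d) for d in str(candidates[0])))  # 16
```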
{"url":"http://mathforum.org/library/drmath/view/72944.html","timestamp":"2014-04-21T12:34:30Z","content_type":null,"content_length":"7094","record_id":"<urn:uuid:72e0eb49-9eaa-4d3f-bd15-cd23cf58acce>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Use Net Present Value in Your Real Estate Analysis

Net Present Value (NPV) is a real estate investing measure widely used for investment real estate analysis as a basis for determining whether the future cash flows expected to be generated by a rental property have a present value larger than the amount of cash required to invest in the rental property. Simply put, net present value tells the real estate investor whether his or her target rate of return will be achieved, and thus, whether the property should attract the investor's capital into that investment.

The Model

The net present value model is based on a decision rule that states that if the discounted present value of future benefits is equal to or greater than the cost of those benefits, it is a profitable opportunity. Whereas, if the present value of the future benefits is less than the cost of those benefits, the rate of return will not be achieved and chances are good that the investor should take another look.

How It Serves Investors

Let's consider a simple illustration to help frame the idea. When you place your money into a savings account (i.e., invest your capital) you expect it to earn interest (i.e., provide future benefits). The bank dictates the return, and you are either willing or unwilling to tie up your capital based upon your acceptance of that return. For example, whereas you might deposit $10,000 to earn 3.8% interest, you might not make the investment for 1.2% interest.

Okay, but suppose a bank doesn't quote an interest rate and you're only told what amount of money you'll collect in the future. For example, you're told that you'll collect $10,300 next year for a deposit of $10,000 made today with no mention of interest rate. How would you know what yield your investment is earning, and whether or not to make the investment?

That's the dilemma real estate investors face when evaluating an investment decision. Though there's a projection for a future benefit, there's no mention of yield.
Therefore the real estate investor has no idea what rate of return he or she may achieve, and therefore no way to adequately compare it to other potential investment opportunities. That's where net present value comes in. It takes your desired rate of return and essentially tells you whether the future cash flows (benefits) from a property achieve that yield on your capital investment or not. In other words, you plug in the yield you want and NPV provides a result that tells you whether that target yield is achieved.

How It Works

NPV discounts all future cash flows by the desired rate of return to arrive at a present value of those future cash flows, and then it deducts the initial equity (capital invested) from that present value. Let's assume three separate investment opportunities, each requiring an investment of $100,000, whose future cash flows have present values of $105,000, $100,000, and $95,000. The NPV for each of the properties would individually be

• $5,000 (105,000 – 100,000)
• zero (100,000 – 100,000)
• -$5,000 (95,000 – 100,000)

The interpretation:

• The positive dollar amount signifies that the desired rate of return is met with room to spare. In other words, this property could be a winner.
• The zero dollar amount signifies that the desired yield is exactly met.
• The negative dollar amount means that the present value of future benefits is less than the amount invested and that the specified rate of return is not met. In other words, you might want to keep looking.

So You Know

ProAPOD automatically computes net present value in two of its real estate investing software solutions: Investor 8 and Executive 10. iCalculator also includes an NPV calculation; see it at online real estate calculator.
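The arithmetic behind the decision rule is easy to sketch in code. The function below is a generic illustration only — the function name, the 10% rate, and the cash-flow numbers are made up for the example and are not from ProAPOD or iCalculator:

```python
def npv(rate, initial_investment, cash_flows):
    """Net present value: the discounted present value of the future
    cash flows minus the capital invested today."""
    present_value = sum(cf / (1 + rate) ** t
                        for t, cf in enumerate(cash_flows, start=1))
    return present_value - initial_investment

# A $100,000 property expected to produce $17,000/year for 10 years,
# judged against a 10% target rate of return:
result = npv(0.10, 100_000, [17_000] * 10)
print(result > 0)  # True: the target yield is met with room to spare
```

A negative result would mean the target yield is not met (keep looking); a result of exactly zero would mean the target is met precisely.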
{"url":"http://realestateinvestmentsoftwareblog.com/net-present-value-real-estate-analysis/","timestamp":"2014-04-19T06:57:03Z","content_type":null,"content_length":"30309","record_id":"<urn:uuid:47ffd6c8-4543-4ba9-92d8-44eea51e20f0>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
Severance, CO Math Tutor Find a Severance, CO Math Tutor Welcome students! I believe that everyone can succeed with practice and a little guidance. Let me help you look at math and science differently, and learn techniques that will make a variety of subjects easier to understand. 13 Subjects: including geometry, precalculus, trigonometry, differential equations ...The absolute proudest moment (and the biggest surprise) of my career so far came when one of these students ran to me from across a dance floor and threw her arms around my neck, saying she'd scored a 32! As a multidisciplinary learner, I believe adamantly in the educational power of the arts, a... 31 Subjects: including algebra 1, probability, TOEFL, grammar ...As part of my previous job algorithm design for target tracking was done with MATLAB using Kalman filtering techniques (and other newer algorithm designs). In my current job, MATLAB is an essential component of our IDE software for which Simulink and many DSP toolboxes provided by MATLAB are use... 47 Subjects: including SAT math, discrete math, electrical engineering, MATLAB ...I am tutoring because I am as passionate about learning as I am about math and statistics. Doing research is continual learning, plus I advised graduate students and I tutored economics while earning my doctorate. Also, like you, I am now a student, taking classes in advanced statistics, R, and data mining. 9 Subjects: including algebra 2, SQL, SAS, computer programming ...A language cannot be taught well by requiring students to memorize long lists of words or rules, which are quickly forgotten. This is not to say that it isn’t occasionally necessary to drop back and focus on a point of grammar and spend some time practicing syntax or discussing semantics, but fo... 
24 Subjects: including geometry, prealgebra, calculus, statistics
{"url":"http://www.purplemath.com/severance_co_math_tutors.php","timestamp":"2014-04-17T04:16:34Z","content_type":null,"content_length":"23877","record_id":"<urn:uuid:824188fd-5935-4717-a042-8a508980952e>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiple Choice

Identify the letter of the choice that best completes the statement or answers the question.

1. A force does work on an object if a component of the force a. is perpendicular to the displacement of the object. b. is parallel to the displacement of the object. c. perpendicular to the displacement of the object moves the object along a path that returns the object to its starting position. d. parallel to the displacement of the object moves the object along a path that returns the object to its starting position.

2. Work is done when a. the displacement is not zero. b. the displacement is zero. c. the force is zero. d. the force and displacement are perpendicular.

3. A 1.00 × 10^3 kg sports car accelerates from rest to 25.0 m/s in 7.50 s. What is the average power output of the automobile engine? a. 20.8 kW b. 30.3 kW c. 41.7 kW d. 52.4 kW

4. The magnitude of the component of the force that does the work is 43.0 N. How much work is done on a bookshelf being pulled 5.00 m at an angle of 37.0° from the horizontal? a. 172 J b. 215 J c. 129 J d. 792 J

5. A worker pushes a wheelbarrow with a horizontal force of 50.0 N over a level distance of 5.0 m. If a frictional force of 43 N acts on the wheelbarrow in a direction opposite to that of the worker, what net work is done on the wheelbarrow? a. 250 J b. 0.0 J c. 35 J d. 10.0 J

6. A hill is 100 m long and makes an angle of 12° with the horizontal. As a 50 kg jogger runs up the hill, how much work does gravity do on the jogger? a. 50 000 J b. 10 000 J c. 10 000 J d. 0.0 J

7. A child moving at constant velocity carries a 2 N ice-cream cone 1 m across a level surface. What is the net work done on the ice-cream cone?

8. A construction worker pushes a wheelbarrow 5.0 m with a horizontal force of 50.0 N. How much work is done by the worker on the wheelbarrow? a. 10 J b. 1250 J c. 250 J d. 55 J

9. A horizontal force of 200 N is applied to move a 55 kg television set across a 10 m level surface. What is the work done by the 200 N force on the television set? a. 4000 J b. 5000 J c. 2000 J d. 6000 J

10. A flight attendant pulls a 50.0 N flight bag a distance of 250.0 m along a level airport floor at a constant speed. A 30.0 N force is exerted on the bag at an angle of 50.0° above the horizontal. How much work is done on the flight bag? a. 12 500 J b. 7510 J c. 4820 J d. 8040 J

11. Which of the following energy forms is involved in winding a pocket watch? a. electrical energy b. nonmechanical energy c. gravitational potential energy d. elastic potential energy

12. Which of the following energy forms is NOT involved in hitting a tennis ball? a. kinetic energy b. chemical potential energy c. gravitational potential energy d. elastic potential energy

13. A 3.00 kg toy falls from a height of 10.0 m. Just before hitting the ground, what will be its kinetic energy? (Disregard air resistance. g = 9.81 m/s^2.) a. 98.0 J b. 0.98 J c. 29.4 J d. 294 J

14. If the only force acting on an object is friction during a given physical process, which of the following assumptions must be made in regard to the object's kinetic energy? a. The kinetic energy decreases. b. The kinetic energy increases. c. The kinetic energy remains constant. d. The kinetic energy decreases and then increases.

15. What is the kinetic energy of a 0.135 kg baseball thrown at 40.0 m/s? a. 54.0 J b. 87.0 J c. 108 J d. 216 J

16. If both the mass and the velocity of a ball are tripled, the kinetic energy of the ball is increased by a factor of

17. Which of the following energy forms is associated with an object in motion? a. potential energy b. elastic potential energy c. nonmechanical energy d. kinetic energy

18. Which of the following energy forms is associated with an object due to its position? a. potential b. positional c. total d. kinetic

19. The main difference between kinetic energy and potential energy is that a. kinetic energy involves position and potential energy involves motion. b.
kinetic energy involves motion and potential energy involves position. c. although both energies involve motion, only kinetic involves position. d. although both energies involve position, only potential involves motion.

20. Which of the following energy forms is associated with an object due to its position relative to Earth? a. potential energy b. elastic potential energy c. gravitational potential energy d. kinetic energy

21. Which of the following energy forms is stored in any compressed or stretched object? a. nonmechanical energy b. elastic potential energy c. gravitational potential energy d. kinetic energy

22. The equation for determining gravitational potential energy is PE[g] = mgh. Which factor(s) in this equation is (are) NOT a property of an object?

23. Which form of energy is involved in weighing fruit on a spring scale? a. kinetic energy b. nonmechanical energy c. gravitational potential energy d. elastic potential energy

24. As an object is lowered into a deep hole in the ground, which of the following assumptions must be made in regard to the object's potential energy? a. The potential energy increases. b. The potential energy decreases. c. The potential energy remains constant. d. The potential energy increases and then decreases.

25. A 40.0 N crate is pulled up a 5.0 m inclined plane at a constant velocity. If the plane is inclined at an angle of 37° to the horizontal and there is a constant force of friction of 10.0 N between the crate and the surface, what is the net gain in potential energy by the crate? a. 120 J b. 120 J c. 210 J d. 210 J

26. A 0.002 kg coin, which has zero potential energy at rest, is dropped into a 10.0 m well. After the coin comes to a stop in the mud, what is its potential energy? a. 0.000 J b. 0.196 J c. 0.196 J d. 0.020 J

27. A 5.00 × 10^2 N crate is at the top of a 5.00 m ramp, which is inclined at 20.0° with the horizontal. What is its potential energy? (g = 9.81 m/s^2.) a. 855 J b. 2350 J c. 815 J d. 8390 J

28. Why doesn't the principle of mechanical energy conservation hold in situations when frictional forces are present? a. Kinetic energy is not simply converted to a form of potential energy. b. Potential energy is simply converted to a form of gravitational energy. c. Chemical energy is not simply converted to electrical energy. d. Kinetic energy is simply converted to a form of gravitational energy.

29. A 16.0 kg child on roller skates, initially at rest, rolls 2.0 m down an incline at an angle of 20.0° with the horizontal. If there is no friction between incline and skates, what is the kinetic energy of the child at the bottom of the incline? (g = 9.81 m/s^2.) a. 210 J b. 610 J c. 11 J d. 110 J

30. A pole vaulter clears 6.00 m. With what velocity does the vaulter strike the mat in the landing area? (Disregard air resistance. g = 9.81 m/s^2.) a. 2.70 m/s b. 5.40 m/s c. 10.8 m/s d. 21.6 m/s

31. A bobsled zips down an ice track starting at 150 m vertical distance up the hill. Disregarding friction, what is the velocity of the bobsled at the bottom of the hill? (g = 9.81 m/s^2.) a. 27 m/s b. 36 m/s c. 45 m/s d. 54 m/s

32. A professional skier starts from rest and reaches a speed of 56 m/s on a ski slope 30.0° above the horizontal. Using the work-kinetic energy theorem and disregarding friction, find the minimum distance along the slope the skier would have to travel in order to reach this speed. a. 110 m b. 160 m c. 320 m d. 640 m

33. A 40.0 N crate starting at rest slides down a rough 6.0 m long ramp inclined at 30.0° with the horizontal. The force of friction between the crate and ramp is 6.0 N. Using the work-kinetic energy theorem, find the velocity of the crate at the bottom of the incline. a. 8.7 m/s b. 3.3 m/s c. 4.5 m/s d. 6.4 m/s

34. A 15.0 kg crate, initially at rest, slides down a ramp 2.0 m long and inclined at an angle of 20.0° with the horizontal.
Using the work-kinetic energy theorem and disregarding friction, find the velocity of the crate at the bottom of the ramp. (g = 9.81 m/s^2.) a. 6.1 m/s b. 3.7 m/s c. 9.7 m/s d. 8.3 m/s

35. A parachutist with a mass of 50.0 kg jumps out of an airplane at an altitude of 1.00 × 10^3 m. After the parachute deploys, the parachutist lands with a velocity of 5.00 m/s. Using the work-kinetic energy theorem, find the energy that was lost to air resistance during this jump. (g = 9.81 m/s^2.) a. 49 300 J b. 98 800 J c. 198 000 J d. 489 000 J

36. A horizontal force of 2.00 × 10^2 N is applied to a 55.0 kg cart across a 10.0 m level surface, accelerating it 2.00 m/s^2. Using the work-kinetic energy theorem, find the force of friction that slows the motion of the cart. (Disregard air resistance. g = 9.81 m/s^2.) a. 110 N b. 90.0 N c. 80.0 N d. 70.0 N

37. Which of the following is the rate at which energy is transferred? a. potential energy b. kinetic energy c. mechanical energy d. power

38. Which of the following equations is NOT an equation for power?

39. What is the average power supplied by a 60.0 kg secretary running up a flight of stairs rising vertically 4.0 m in 4.2 s? a. 380 W b. 560 W c. 610 W d. 670 W

40. What is the average power output of a weight lifter who can lift 250 kg 2.0 m in 2.0 s? a. 5.0 × 10^2 W b. 2.5 kW c. 4.9 kW d. 9.8 kW

41. Water flows over a section of Niagara Falls at a rate of 1.20 × 10^6 kg/s and falls 50.0 m. What is the power of the waterfall? a. 589 MW b. 294 MW c. 147 MW d. 60.0 MW

42. Which of the following has the greatest momentum? a. truck with a mass of 2250 kg moving at a velocity of 25 m/s b. car with a mass of 1210 kg moving at a velocity of 51 m/s c. truck with a mass of 6120 kg moving at a velocity of 10 m/s d. car with a mass of 1540 kg moving at a velocity of 38 m/s

43. Which of the following has the greatest momentum? a. tortoise with a mass of 270 kg moving at a velocity of 0.5 m/s b. hare with a mass of 2.7 kg moving at a velocity of 7 m/s c. turtle with a mass of 91 kg moving at a velocity of 1.4 m/s d. roadrunner with a mass of 1.8 kg moving at a velocity of 6.7 m/s

44. What velocity must a 1340 kg car have in order to have the same momentum as a 2680 kg truck traveling at a velocity of 15 m/s to the west? a. 6.0 × 10^1 m/s to the west b. 6.0 × 10^1 m/s to the east c. 3.0 × 10^1 m/s to the west d. 3.0 × 10^1 m/s to the east

45. A child with a mass of 23 kg rides a bike with a mass of 5.5 kg at a velocity of 4.5 m/s to the south. Compare the momentum of the child with the momentum of the bike. a. Both the child and the bike have the same momentum. b. The bike has a greater momentum than the child. c. The child has a greater momentum than the bike. d. Neither the child nor the bike has momentum.

46. When comparing the momentum of two moving objects, which of the following is correct? a. The object with the higher velocity will have less momentum if the masses are equal. b. The more massive object will have less momentum if its velocity is greater. c. The less massive object will have less momentum if the velocities are the same. d. The more massive object will have less momentum if the velocities are the same.

47. A baseball is pitched very fast. Another baseball of equal mass is pitched very slowly. Which of the following statements is correct? a. The fast-moving baseball is harder to stop because it has more momentum. b. The slow-moving baseball is harder to stop because it has more momentum. c. The fast-moving baseball is easier to stop because it has more momentum. d. The slow-moving baseball is easier to stop because it has more momentum.

48. A roller coaster climbs up a hill at 4 m/s and then zips down the hill at 30 m/s. The momentum of the roller coaster a. is greater up the hill than down the hill. b. is greater down the hill than up the hill. c. remains the same throughout the ride. d. is zero throughout the ride.

49.
A person sitting in a chair with wheels stands, causing the chair to roll backward across the floor. The momentum of the chair a. was zero while stationary and increased when the person stood. b. was greatest while the person sat in the chair. c. remained the same. d. was zero when the person got out of the chair and increased while the person sat.

50. A student walks to class at a velocity of 3 m/s. To avoid walking into a door as it opens, the student slows to a velocity of 0.5 m/s. Now late for class, the student runs down the corridor at a velocity of 7 m/s. The student had the least momentum a. while walking at a velocity of 3 m/s. b. while dodging the opening door. c. immediately after the door opened. d. while running to class at a velocity of 7 m/s.

51. An ice skater initially skating at a velocity of 3 m/s speeds up to a velocity of 5 m/s. The momentum of the skater a. decreases. b. increases. c. remains the same. d. becomes zero.

52. If a force is exerted on an object, which statement is true? a. A large force always produces a large change in the object's momentum. b. A large force produces a large change in the object's momentum only if the force is applied over a very short time interval. c. A small force applied over a long time interval can produce a large change in the object's momentum. d. A small force produces a large change in the object's momentum.

53. The change in an object's momentum is equal to a. the product of the mass of the object and the time interval. b. the product of the force applied to the object and the time interval. c. the time interval divided by the net external force. d. the net external force divided by the time interval.

54. A force is applied to stop a moving shopping cart. Increasing the time interval over which the force is applied a. requires a greater force. b. has no effect on the force needed. c. requires a smaller force. d. requires the same force.

55. Which of the following situations is an example of a visible change in momentum? a. A hiker walks through a spider's web. b. A car drives over a pebble. c. A volleyball hits a mosquito in the air. d. A baseball is hit by a bat.

56. Which of the following situations is an example of change in momentum? a. A tennis ball is hit into a net. b. A helium-filled balloon rises upward into the sky. c. An airplane flies into some scattered white clouds. d. A bicyclist rides over a leaf on the pavement.

57. A 6.0 × 10^-2 kg tennis ball moves at a velocity of 12 m/s. The ball is struck by a racket, causing it to rebound in the opposite direction at a speed of 18 m/s. What is the change in the ball's momentum? a. 0.38 kg·m/s b. 0.72 kg·m/s c. 1.1 kg·m/s d. 1.8 kg·m/s

58. A rubber ball with a mass of 0.30 kg is dropped onto a steel plate. The ball's velocity just before impact is 4.5 m/s and just after impact is 4.2 m/s. What is the change in the ball's momentum? a. 0.09 kg·m/s b. 2.6 kg·m/s c. 4.0 kg·m/s d. 12 kg·m/s

59. A ball with a momentum of 4.0 kg·m/s hits a wall and bounces straight back without losing any kinetic energy. What is the change in the ball's momentum? a. 0.0 kg·m/s b. 4.0 kg·m/s c. 8.0 kg·m/s d. 8.0 kg·m/s

60. A ball with a mass of 0.15 kg and a velocity of 5.0 m/s strikes a wall and bounces straight back with a velocity of 3.0 m/s. What is the change in momentum of the ball? a. 0.30 kg·m/s b. 1.20 kg·m/s c. 0.15 kg·m/s d. 7.50 kg·m/s

61. The impulse experienced by a body is equivalent to the body's change in a. velocity. b. kinetic energy. c. momentum. d. force.

62. A moderate force will break an egg. However, an egg dropped on the road usually breaks, while one dropped on the grass usually does not break because for the egg dropped on the grass, a. the change in momentum is greater. b. the change in momentum is less. c. the time interval for stopping is greater. d. the time interval for stopping is less.

63.
Which of the following statements properly relates the variables in the equation FΔt = Δp?
a. A large constant force changes an object's momentum over a long time interval.
b. A large constant force acting over a long time interval causes a large change in momentum.
c. A large constant force changes an object's momentum at various time intervals.
d. A large constant force does not necessarily cause a change in an object's momentum.
64. A large moving ball collides with a small stationary ball. The momentum
a. of the large ball decreases, and the momentum of the small ball increases.
b. of the small ball decreases, and the momentum of the large ball increases.
c. of the large ball increases, and the momentum of the small ball decreases.
d. does not change for either ball.
65. A rubber ball moving at a speed of 5 m/s hit a flat wall and returned to the thrower at 5 m/s. The magnitude of the momentum of the rubber ball
a. increased. c. remained the same.
b. decreased. d. was not conserved.
66. Two objects with different masses collide and bounce back after an elastic collision. Before the collision, the two objects were moving at velocities equal in magnitude but opposite in direction. After the collision,
a. the less massive object had gained momentum.
b. the more massive object had gained momentum.
c. both objects had the same momentum.
d. both objects lost momentum.
67. Two skaters stand facing each other. One skater's mass is 60 kg, and the other's mass is 72 kg. If the skaters push away from each other without spinning,
a. the 60 kg skater travels at a lower momentum.
b. their momenta are equal but opposite.
c. their total momentum doubles.
d. their total momentum decreases.
68. Two swimmers relax close together on air mattresses in a pool. One swimmer's mass is 48 kg, and the other's mass is 55 kg. If the swimmers push away from each other,
a. their total momentum triples. c. their total momentum doubles.
b. their momenta are equal but opposite. d.
their total momentum decreases.
69. A soccer ball collides with another soccer ball at rest. The total momentum of the balls
a. is zero. c. remains constant.
b. increases. d. decreases.
70. In a two-body collision,
a. momentum is conserved.
b. kinetic energy is conserved.
c. neither momentum nor kinetic energy is conserved.
d. both momentum and kinetic energy are conserved.
71. The law of conservation of momentum states that
a. the total initial momentum of all objects interacting with one another usually equals the total final momentum.
b. the total initial momentum of all objects interacting with one another does not equal the total final momentum.
c. the total momentum of all objects interacting with one another is zero.
d. the total momentum of all objects interacting with one another remains constant regardless of the nature of the forces between the objects.
72. Which of the following statements about the conservation of momentum is NOT correct?
a. Momentum is conserved for a system of objects pushing away from each other.
b. Momentum is not conserved for a system of objects in a head-on collision.
c. Momentum is conserved when two or more interacting objects push away from each other.
d. The total momentum of a system of interacting objects remains constant regardless of forces between the objects.
73. A swimmer with a mass of 75 kg dives off a raft with a mass of 500 kg. If the swimmer's speed is 4 m/s immediately after leaving the raft, what is the speed of the raft?
a. 0.2 m/s c. 0.6 m/s
b. 0.5 m/s d. 4.0 m/s
74. A bullet with a mass of 5.00 × 10^-3 kg is loaded into a gun. The loaded gun has a mass of 0.52 kg. The bullet is fired, causing the empty gun to recoil at a speed of 2.1 m/s. What is the speed of the bullet?
a. 48 m/s c. 120 m/s
b. 220 m/s d. 360 m/s
75. A 65.0 kg ice skater standing on frictionless ice throws a 0.15 kg snowball horizontally at a speed of 32.0 m/s. At what velocity does the skater move backward?
a. 0.07 m/s c. 0.15 m/s
b.
0.30 m/s d. 1.20 m/s
76. Two skaters, each with a mass of 50 kg, are stationary on a frictionless ice pond. One skater throws a 0.2 kg ball at 5 m/s to the other skater, who catches it. What are the velocities of the skaters when the ball is caught?
a. 0.02 m/s moving apart c. 0.02 m/s moving toward each other
b. 0.04 m/s moving apart d. 0.04 m/s moving toward each other
77. Two carts with masses of 1.5 kg and 0.7 kg, respectively, are held together by a compressed spring. When released, the 1.5 kg cart moves to the left with a velocity of 7 m/s. What is the velocity of the 0.7 kg cart? (Disregard the mass of the spring.)
a. 15 m/s to the right c. 7 m/s to the right
b. 15 m/s to the left d. 7 m/s to the left
78. Each croquet ball in a set has a mass of 0.50 kg. The green ball travels at 10.5 m/s and strikes a stationary red ball. If the green ball stops moving, what is the final speed of the red ball after the collision?
a. 10.5 m/s c. 12.0 m/s
b. 6.0 m/s d. 9.6 m/s
79. A diver with a mass of 80.0 kg jumps from a dock into a 130.0 kg boat at rest on the west side of the dock. If the velocity of the diver in the air is 4.10 m/s to the west, what is the final velocity of the diver after landing in the boat?
a. 2.52 m/s to the west c. 1.56 m/s to the west
b. 2.52 m/s to the east d. 1.56 m/s to the east
80. Two objects move separately after colliding, and both the total momentum and total kinetic energy remain constant. Identify the type of collision.
a. elastic c. inelastic
b. perfectly elastic d. perfectly inelastic
81. Two objects stick together and move with the same velocity after colliding. Identify the type of collision.
a. elastic c. inelastic
b. perfectly elastic d. perfectly inelastic
82. After colliding, objects are deformed and lose some kinetic energy. Identify the type of collision.
a. elastic c. inelastic
b. perfectly elastic d. perfectly inelastic
83. Two balls of dough collide and stick together. Identify the type of collision.
a. elastic c.
inelastic
b. perfectly elastic d. perfectly inelastic
84. Two snowballs with masses of 0.40 kg and 0.60 kg, respectively, collide head-on and combine to form a single snowball. The initial speed for each is 15 m/s. If the velocity of the snowball with a mass of 1.0 kg is 3.0 m/s after the collision, what is the decrease in kinetic energy?
a. zero c. 60 J
b. 110 J d. 90 J
85. A 1.5 × 10^3 kg truck moving at 15 m/s strikes a 7.5 × 10^2 kg automobile stopped at a traffic light. The vehicles hook bumpers and skid together at 10.0 m/s. What is the decrease in kinetic energy?
a. 1.1 × 10^5 J c. 1.7 × 10^5 J
b. 1.2 × 10^4 J d. 6.0 × 10^4 J
86. A clay ball with a mass of 0.35 kg has an initial speed of 4.2 m/s. It strikes a 3.5 kg clay ball at rest, and the two balls stick together and remain stationary. What is the decrease in kinetic energy of the 0.35 kg ball?
a. 1.6 J c. 3.1 J
b. 4.8 J d. 6.4 J
87. An infant throws 5 g of applesauce at a velocity of 0.2 m/s. All of the applesauce collides with a nearby wall and sticks. What is the decrease in kinetic energy of the applesauce?
a. 2 × 10^-4 J c. 1 × 10^-3 J
b. 0.5 × 10^-4 J d. 1 × 10^-4 J
88. In an elastic collision between two objects with unequal masses,
a. the total momentum of the system will increase.
b. the total momentum of the system will decrease.
c. the kinetic energy of one object will increase by the amount that the kinetic energy of the other object decreases.
d. the momentum of one object will increase by the amount that the momentum of the other object decreases.
89. A billiard ball collides with a stationary identical billiard ball in an elastic head-on collision. After the collision, which is true of the first ball?
a. It maintains its initial velocity. c. It comes to rest.
b. It has one-half its initial velocity. d. It moves in the opposite direction.
90. A billiard ball collides with a second identical ball in an elastic head-on collision.
What is the kinetic energy of the system after the collision compared with the kinetic energy before the collision?
a. unchanged c. two times as great
b. one-fourth as great d. four times as great
91. Which of the following best describes the kinetic energy of each object after a two-body collision if the momentum of the system is conserved?
a. must be less c. might also be conserved
b. must also be conserved d. is doubled in value
92. Which of the following best describes the momenta of two bodies after a two-body collision if the kinetic energy of the system is conserved?
a. must be less c. might also be conserved
b. must also be conserved d. is doubled in value
93. An object with a mass of 0.10 kg makes an elastic head-on collision with a stationary object with a mass of 0.15 kg. The final velocity of the 0.10 kg object after the collision is 0.045 m/s and the final velocity of the 0.15 kg object after the collision is 0.16 m/s. What was the initial velocity of the 0.10 kg object?
a. 0.16 m/s c. 0.20 m/s
b. 1.06 m/s d. 0.20 m/s
94. A 90 kg halfback runs north and is tackled by a 120 kg opponent running south at 4 m/s. The collision is perfectly inelastic. Just after the tackle, both players move at a velocity of 2 m/s north. Calculate the velocity of the 90 kg player just before the tackle.
a. 3 m/s south c. 10 m/s north
b. 4 m/s south d. 12 m/s north
95. A clay ball with a mass of 0.35 kg strikes another 0.35 kg clay ball at rest, and the two balls stick together. The final velocity of the balls is 2.1 m/s north. What was the first ball's initial velocity?
a. 4.2 m/s to the north c. 2.1 m/s to the north
b. 2.1 m/s to the south d. 4.2 m/s to the south
96. A 2 kg mass moving to the right makes an elastic head-on collision with a 4 kg mass moving to the left at 4 m/s. The 2 kg mass reverses direction after the collision and moves at 3 m/s. The 4 kg mass moves to the left at 1 m/s. What was the initial velocity of the 2 kg mass?
a. 3 m/s to the right c. 4 m/s to the left
b.
1 m/s to the left d. 4 m/s to the right
97. Which of the following angles equals 2π rad?
98. One radian is equal to
a. 60°. c. 57.3°.
b. 58°. d. 56°.
99. How would an angle in radians be converted to an angle in degrees?
a. The angle in radians would be multiplied by 180°/π.
b. The angle in radians would be multiplied by 360°/π.
c. The angle in radians would be multiplied by 180°/2π.
d. The angle in radians would be multiplied by 2π/360°.
100. How would you convert an angle in degrees to an angle in radians?
a. multiply the angle measured in degrees by 2π/180°
b. multiply the angle measured in degrees by 2π/360°
c. multiply the angle measured in degrees by π/360°
d. multiply the angle measured in degrees by 2πr°
101. A cave dweller rotates a pebble in a sling with a radius of 0.30 m counterclockwise through an arc length of 0.96 m. What is the angular displacement of the pebble?
a. 1.6 rad c. 3.2 rad
b. 1.6 rad d. 3.2 rad
102. Earth has an equatorial radius of approximately 6380 km, and it rotates 360° every 24 h. What is the angular displacement of a person standing at the equator for 3.0 h?
a. 0.26 rad c. 0.78 rad
b. 0.52 rad d. 0.39 rad
103. A child sits on a carousel at a distance of 3.5 m from the center and rotates through an arc length of 6.5 m. What is the angular displacement of the child?
a. 1.9 rad c. 3.0 rad
b. 0.93 rad d. 5.0 rad
104. A bucket on the circumference of a water wheel travels an arc length of 18 m. If the radius of the wheel is 4.1 m, what is the angular displacement of the bucket?
a. 1.0 rad c. 3.7 rad
b. 4.4 rad d. 2.3 rad
105. What is the approximate angular speed of a wheel rotating at the rate of 5.0 rev/s?
a. 3.2 rad/s c. 16 rad/s
b. 1.6 rad/s d. 31 rad/s
106. A grinding wheel initially at rest with a radius of 0.15 m rotates until it reaches an angular speed of 12.0 rad/s in 4.0 s. What is the wheel's average angular acceleration?
a. 96 rad/s^2 c. 3.0 rad/s^2
b. 48 rad/s^2 d. 0.33 rad/s^2
107.
A potter's wheel moves from rest to an angular speed of 0.54 rad/s in 30.0 s. What is the angular acceleration of the wheel?
a. 16 rad/s^2 c. 0.018 rad/s^2
b. 1.3 rad/s^2 d. 0.042 rad/s^2
108. A Ferris wheel initially at rest accelerates to a final angular speed of 0.70 rad/s and rotates through an angular displacement of 4.90 rad. What is the Ferris wheel's average angular acceleration?
a. 0.10 rad/s^2 c. 1.80 rad/s^2
b. 0.05 rad/s^2 d. 0.60 rad/s^2
109. A Ferris wheel rotates with an initial angular speed of 0.50 rad/s and accelerates over a 7.00 s interval at a rate of 4.0 × 10^-2 rad/s^2. What is its angular speed?
a. 0.20 rad/s c. 0.46 rad/s
b. 0.30 rad/s d. 0.78 rad/s
110. An automobile tire with a radius of 0.30 m starts at rest and accelerates at a constant angular acceleration of 2.0 rad/s^2 for 5.0 s. What is the angular displacement of the tire?
a. 12 rad c. 2.0 rad
b. 25 rad d. 0.50 rad
111. A bicycle wheel rotates with a constant angular acceleration of 3.0 rad/s^2. If the initial angular speed of the wheel is 1.5 rad/s, what is the angular displacement of the wheel after 4.0 s?
a. 6.0 rad c. 3.0 × 10^1 rad
b. 24 rad d. 36 rad
112. A gear in a machine accelerates at 11.2 rad/s^2. If the wheel's initial angular speed is 5.40 rad/s, what is the wheel's angular speed after exactly 3.0 seconds?
a. 39.0 rad/s c. 209 rad/s
b. 13.6 rad/s d. 28.2 rad/s
113. A ball rolls downhill with an angular speed of 2.5 rad/s and has a constant angular acceleration of 2.0 rad/s^2. If the ball takes 11.5 s to reach the bottom of the hill, what is the final angular speed of the ball?
a. 13 rad c. 33 rad/s
b. 31 rad/s d. 25.5 rad/s
114. A helicopter has 3.0 m long rotor blades that are rotating at an angular speed of 63 rad/s. What is the tangential speed of each blade tip?
a. 99 m/s c. 21 m/s
b. 190 m/s d. 66 m/s
115. A point on the rim of a 0.30 m radius rotating wheel has a tangential speed of 4.0 m/s. What is the tangential speed of a point 0.20 m from the center of the same wheel?
a.
0.8 m/s c. 2.6 m/s
b. 1.3 m/s d. 8.0 m/s
116. A cylinder with a diameter of 0.150 m rotates in a lathe at a constant angular speed of 35.6 rad/s. What is the tangential speed of the surface of the cylinder?
a. 2.67 m/s c. 2.37 × 10^2 m/s
b. 5.34 m/s d. 4.75 × 10^2 m/s
117. An automobile tire with a radius of 0.3 m accelerates from rest at a constant 2 rad/s^2 over a 5 s interval. What is the tangential component of acceleration for a point on the outer edge of the tire?
a. 30 m/s^2 c. 0.6 m/s^2
b. 7 m/s^2 d. 0.3 m/s^2
118. A hamster gets on a stationary wheel with a radius of 0.15 m and runs until the wheel rotates at an angular speed of 12.0 rad/s in 4.0 s. What is the tangential acceleration of the wheel's edge?
a. 0.45 rad/s^2 c. 0.65 rad/s^2
b. 0.6 rad/s^2 d. 1.30 rad/s^2
119. A contestant in a game show spins a stationary wheel with a radius of 0.50 m so that it has a constant angular acceleration of 0.40 rad/s^2. What is the tangential acceleration of a point on the edge of the wheel?
a. 0.20 m/s^2 c. 1.3 m/s^2
b. 0.60 m/s^2 d. 0.73 m/s^2
120. A stone on the edge of the tire of a unicycle wheel with a radius of 0.25 m has a centripetal acceleration of 4.0 m/s^2. What is the tire's angular speed?
a. 1.0 rad/s c. 3.2 rad/s
b. 2.0 rad/s d. 4.0 rad/s
121. A point on the rim of a rotating wheel with a 0.37 m radius has a centripetal acceleration of 19.0 m/s^2. What is the angular speed of the wheel?
a. 0.89 m/s c. 3.2 rad/s
b. 1.6 rad/s d. 7.2 rad/s
122. If the distance from the center of a merry-go-round to the edge is 1.2 m, what centripetal acceleration does a passenger experience when the merry-go-round rotates at an angular speed of 0.5 rad/s?
a. 1.7 m/s^2 c. 0.3 m/s^2
b. 0.9 m/s^2 d. 0.6 m/s^2
123. A 0.40 kg ball on a 0.50 m string rotates in a circular path in a vertical plane. If the angular speed of the ball at the bottom of the circle is 8.0 rad/s, what is the force that maintains circular motion?
a. 5.6 N c. 13 N
b. 11 N d. 20.0 N
124.
A 0.40 kg ball on a 0.50 m string rotates in a circular path in a vertical plane. If a constant angular speed of 8.0 rad/s is maintained, what is the tension in the string when the ball is at the top of the circle?
a. 9.0 N c. 13 N
b. 11 N d. 10.0 N
125. A roller coaster loaded with passengers has a mass of 2.0 × 10^3 kg; the radius of curvature of the track at the lowest point of the track is 24 m. If the vehicle has a tangential speed of 18 m/s at this point, what force is exerted on the vehicle by the track?
a. 2.3 × 10^4 N c. 3.0 × 10^4 N
b. 4.7 × 10^4 N d. 2.7 × 10^4 N
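Most of the momentum items above come down to two relations: conservation of total momentum (m1·v1 + m2·v2 stays constant) and the impulse-momentum theorem (F·Δt = Δp = m·Δv). As a quick numerical sanity check, here is a short sketch; the choice of problems 73 and 60, and the variable names, are ours and not part of the original review sheet:

```python
# Problem 73: a 75 kg swimmer dives off a 500 kg raft at 4 m/s.
# Total momentum starts at zero, so m_swimmer*v_swimmer + m_raft*v_raft = 0.
v_raft = -(75 * 4) / 500
print(abs(v_raft))  # 0.6 m/s -> answer (c)

# Problem 60: a 0.15 kg ball hits a wall at 5.0 m/s and rebounds at 3.0 m/s.
# Taking the rebound direction as negative, dp = m * (v_final - v_initial).
dp = 0.15 * (-3.0 - 5.0)
print(abs(dp))      # 1.2 kg·m/s -> answer (b)
```

The only subtlety is the sign convention: the rebound velocity must carry the opposite sign to the incoming velocity, otherwise the momentum change comes out far too small.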
Setting Up a Division Calculation Correctly

Date: 12/16/2005 at 22:57:34
From: Pat
Subject: division help please

I need math help, please. What exactly does "divided by" mean? I never know what number to divide by. I get the two numbers mixed up. Is there an easy way to remember what number goes where? So when they say divide this number by that number, what number goes in the calculator first? And what number goes on the outside or the inside if I am doing it on paper? What does the word "divide" mean? Maybe that will help with the problem? Thank you for your time in advance.

Date: 12/16/2005 at 23:29:43
From: Doctor Peterson
Subject: Re: division help please

Hi, Pat. This is a very common problem!

Let's pick a simple example to work with, that involves whole numbers. We know that 2*3=6 (I'm using "*" for the multiplication symbol.) So if we divide the product, 6, BY either of the factors (2 or 3) we get the other one:

  6 / 2 = 3
  6 / 3 = 2

(I'm using "/" as the division symbol; that's what it actually means in a fraction.) You can see that division is basically "undoing" a multiplication: starting with the result of the multiplication and one of the two numbers you started with, you can find the other.

When we enter the problem into a calculator, we put the numbers in the same order we write on paper: 6 / 2, "this" "divided by" "that". "This" is called the dividend (the number being divided), and "that" is called the divisor (the number it is divided BY).

When we do a division by hand, we put the divisor on the left, and the dividend under the bar:

  2 ) 6

This is what confuses a lot of students, who read division problems the wrong way, as if this were 2 divided by 6. You might find it helpful to think of the bar as an operating table, and the patient lying on it is being "divided" by the surgeon, who stands next to it. (I suppose the organs being taken out are put on top.) The 6 is divided BY the 2.

A division can be interpreted in several different ways.
The most general way to talk about it is what I just did: division means finding what number you would have to multiply the divisor by (the 2) in order to get the dividend (the 6).

In practical problems, we might divide 6 by 2 to find out how many piles I can make from 6 objects if I put 2 objects in each pile; or how many objects will be in each pile if I "divide" 6 objects into 2 piles. The basic idea here (and the origin of the word) is that we are "dividing" a group into parts. Incidentally, the word "quotient", which is used for the answer to a division, is essentially the Latin word for "how many?".

Of course, once you stop talking only about whole numbers, some of these ideas disappear; we can't really talk about "dividing 2 objects into 3 piles", and it gets even worse if we want to divide a fraction by a fraction. But these images are the original ideas that motivate division; for deeper understanding of division, we move on from there to the idea of undoing a multiplication, which I started with.

I hope that helps. If you have any further questions, feel free to write back.

- Doctor Peterson, The Math Forum
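Doctor Peterson's "this divided by that" ordering is also how programming languages write it; a small illustrative snippet (not part of the original exchange):

```python
# "6 divided by 2": dividend first, then divisor, same as on the calculator.
dividend, divisor = 6, 2
print(dividend / divisor)   # 3.0

# Division "undoes" a multiplication: divisor * quotient == dividend.
quotient = dividend // divisor
assert divisor * quotient == dividend   # 2 * 3 == 6

# divmod gives the quotient and what's left over in one call.
print(divmod(7, 2))         # (3, 1) -- 7 divided by 2 is 3, remainder 1
```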
Influence of number type and analysis of errors in computational estimation tasks

De Castro Hernández, Carlos and Castro Martínez, Enrique and Segovia Álex, Isidoro (2002) Influence of number type and analysis of errors in computational estimation tasks. In Proceedings of the 26th Conference of the International Group for the Psychology of Mathematics Education. University of East Anglia, Norwich, UK, pp. 201-208. ISBN 0-9539983-6-3

In this study we analyze the difficulty of computational estimation tasks (with operations presented without context) as a function of the operation type (multiplication or division) and the number type (whole numbers, decimals greater than one, and decimals less than one) that appear in them. Errors made in estimating with decimal numbers less than one are also analyzed. The study involves 53 preservice elementary teachers. An estimation test is administered to the teachers, and some of them are selected for interviews. The conclusion is that estimating with decimals less than one is more difficult than estimating with whole numbers or decimals greater than one, and that most (but not all) of the errors produced in estimation processes are due to teachers' misconceptions about the operations of multiplication and division.

Item Type: Book Section
Keywords: Mathematics Education, Computational Estimation, Decimals, Errors, Teacher Education, Educación Matemática, Estimación en cálculo, Decimales, Errores, Formación de Maestros
Subjects: Humanities > Education > Mathematics study and teaching
ID Code: 12633
TuringMachine

A TuringMachine is an abstract computing device (i.e. they don't really exist), traditionally a (finite state) machine reading and writing marks on an infinite paper tape.

• Not so fast: Turing machines have been implemented as toys, and KarlScherer? at http://www.mathematik.uni-heidelberg.de/index_en.html built a TuringMachine from wood and metal, using ballBearings to record states on its "tape."
• If it can be built, it probably doesn't have an infinite tape. If it doesn't have an infinite tape, it's not really a Turing machine.

Turing went on to show that you could create a TuringMachine which could take input allowing it to simulate any other TuringMachine (i.e. a program, run on a 'Universal' TuringMachine). The ChurchTuringThesis is essentially that anything we could reasonably call computable can be expressed as input to a universal Turing Machine, and indeed Church's LambdaCalculus and Turing's Machines are equivalent in this way.

Didn't somebody else discover the thesis independently of Church and Turing?

An American-Polish logician called EmilPost made an important independent contribution, and Stephen Kleene had a key role in helping Church. I remember studying from a book with a chapter on PostMachines, others on Gödel's or Church's work. Can someone name this book?

Real Software Engineers admire Turing machines for the clarity and orthogonality of their instruction set. It's just too bad they're so poor at I/O. Hmm, of what other paradigm does this remind us?

Whaddya mean? They do *LOTS* of I/O! ;-> Actually a kind of Programmable Logic Array plugged into I/O

There are some key things about TuringMachines that make them interesting.
• There exists a way of representing any TuringMachine (TM) as data for a special purpose UniversalTuringMachine. This UTM then acts as an interpreter and simulates the TM perfectly.
• There exist simple-to-state requirements that no TuringMachine can meet without error or endlessly looping.
• Computable numbers are defined by there existing a TM that, when it is given a description of the desired accuracy, will calculate the number to that accuracy. This allows us to claim that pi is computable (but has infinite digits) for example. On the other hand it indicates that there is an uncountable set of numbers that can not be computed by any algorithm/program/TM (as Turing machines are countable).
• Turing Machines are surprisingly simple, despite their power, and many other systems are equivalent to them. Conclusions about Turing Machines can be applied to those systems. For example...
• TuringMachines and their properties can be encoded as formulae and equations. Hence there are equations that can not be solved by any TuringMachine.
• Turing Machines can be encoded in GameOfLife positions. Hence there are questions about Life which cannot be solved by any Turing Machine (eg whether an arbitrary position will ever "settle").

Here's a simple Turing Machine in Python:

 def go(t, s, p):
     if s == 0 and t[p] == 0:
         t[p] = 1
         go(t, s, p)   # recurses forever (until Python's recursion limit)
     if s == 0 and t[p] == 1:
         t[p] = 0
         go(t, s, p)

 t, s, p = [0], 0, 0
 go(t, s, p)

Where "t" is the tape, s is the status, and p is the position. All it does is alternate a number between "0" and "1" continually.

-- Can anyone write a UTM in Python? How about a TM in RDF?

No, but here's a TM in XSLT http://www.unidex.com/turing/utm.htm also Prolog http://www.donotenter.com/resume/pub/tm2.htm and one in JavaScript that can be run online http://www.turing.org.uk/turing/

You mean "No-one has written one" (and that's probably wrong as well).

What Turing originally invented was a machine consisting of an infinitely long tape divided into cells. On each cell one of a finite number of symbols can be written. The head that reads and writes on the tape moves one cell to the left or to the right in each time step. The machine itself is in one of a finite number of states.
The state the machine is in determines what the machine should do in each time step via a state transition table. see: http://mathworld.wolfram.com/TuringMachine.html

(as Turing machines are countable). That doesn't sound right to me. My understanding of a Turing machine is that it can be encoded as tape data, and therefore the set of Turing machines maps to the set of all possible tape data, which is uncountable (more than one symbol ^ unbounded length). -- KarlKnechtel

As you say, any Turing machine may be encoded by a string of symbols taken from a finite alphabet. The set of all these strings is countable since you can enumerate all strings: There are only finitely many strings of a given length. Now you first write down all strings of length zero, then those of length one, then those of length two, etc. So the strings are in one-to-one correspondence to the natural numbers and therefore countable. See CountablyInfinite.

Formally a TuringMachine is a quintuple M = (Q, Sigma, Tau, Epsilon, q0) where:
• Q is a finite set of states, q0 being the starting state
• Tau is a finite set (called the tape alphabet) containing a special symbol called B (blank)
• Sigma is a subset of Tau-{B} called the input alphabet
• Epsilon is a partial function from (Q x Tau) to (Q x Tau x {L, R}). In other words this is the transition table which, given an internal state in Q and the symbol under the reading head, selects a new state, a new symbol to write at the head, and a 'move left' or 'move right' command.

A transition is written as Epsilon(qi,x) = [qj,y,d] where d belongs to {L,R}. It can be drawn as a state diagram with edges looking like:

 qi---x/y d--->qj

(The edges can loop back to the same node ie when qi=qj). A transition can also be written as a list

 qi x y d qj

Some books write this as two instructions

 qi x y qi, qi x d qj

only allowing one operation per step (either change symbol or move L or R per step).
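This quintuple maps directly onto a few lines of code. In the sketch below (the function and variable names are ours, not from the wiki page), Epsilon is stored as a dict from (state, symbol) to (new symbol, move, new state), and we run the four-instruction succ machine that appears a little further down in the text:

```python
# Epsilon as a dict: (state, read) -> (write, move, next_state).
# The succ machine from the text: q0 B B R q1 / q1 1 1 R q1 / q1 B 1 L qf / qf 1 1 L qf
succ = {
    ('q0', 'B'): ('B', 'R', 'q1'),
    ('q1', '1'): ('1', 'R', 'q1'),
    ('q1', 'B'): ('1', 'L', 'qf'),
    ('qf', '1'): ('1', 'L', 'qf'),
}

def run(delta, tape, state, pos=0, max_steps=10_000):
    cells = dict(enumerate(tape))        # sparse tape: unlisted cells read as B
    for _ in range(max_steps):
        symbol = cells.get(pos, 'B')
        if (state, symbol) not in delta: # no matching instruction: halt
            break
        write, move, state = delta[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == 'R' else -1
    lo, hi = min(cells), max(cells)
    return state, ''.join(cells.get(i, 'B') for i in range(lo, hi + 1))

state, tape = run(succ, 'B111B', 'q0')   # s(2): 2 is written as three 1s
print(state, tape)                       # qf B1111 -- four 1s, i.e. 3 in base 1
```

The dict-based tape means cells the program never mentions read as blank, which matches the convention debated later on this page that unspecified cells have the value "blank".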
Each time through the loop, the machine reads a symbol on the tape and compares the current (state, symbol) pair with the instruction set. If a match is found the machine transitions into a new state specified by the right hand side of the function. If no match is found the machine simply halts.

An example would be a TM that accepts the language (a union b)*aa(a union b)*. The TM is not specified here, but it would be about 5 instructions long.

Numeric functions can be specified using a base 1 (unary) representation, in which a number n is written as n+1 ones; ie to compute f(2,1) start with Bq0111B11B. B separates arguments instead of ",", which is not in the machine's alphabet.

An example TM for the succ function s(n)=n+1 is

 q0 B B R q1
 q1 1 1 R q1
 q1 B 1 L qf
 qf 1 1 L qf

To run it on s(2) we would start with tape Bq0111B. It would terminate with Bqf1111B = 3 base 1.

Non numeric functions can also be specified. A TM T2 can be simulated on a TM T1 also by encodings similar to above.

Let's see if I understand...
There are machines which could halt even given infinite non-blank input, but the won't halt trying to simulate it. Since the is supposed to be able to handle all cases, then we cannot use infinite non-blank input. A lot of skepticism here. I guess s don't believe in s, or at least consider them to be a theoretical . A if you will. It's just self-selection bias: a page like this will be of lesser interest to many who consider it an old known topic, but will attract skeptics, so naturally you see skeptical posts. I was kind of staggered a few years ago, working with some junior programmers with CS degrees, who had never heard of Turing nor Turing machines. Turns out their bachelor's programs, in the country they came from, were primarily general engineering, and included only 3 actual CS courses. The rest of the CS courses were to be taken by Master's candidates. Different system. But it made me realize why such a large percentage of people from that country that I'd previously worked with usually have a Master's degree, whether from back home or from the mid-western U.S. may be an example of how TechnologyEnablesTheory : could Turing have envisaged it if the tickertape had not been invented? Come to that, could NewtonianMechanics have been envisaged if clockwork had not been invented? - Actually AlanTuring s impetus for envisioning TMs was to investigate GoedelsIncompletenessTheorem. No real machine at the time was adequate but he imagined a hypothetical one with infinite tape. Obviously influenced by what was around him at the time (TickerTape?) but also workings of biological cells and other phenomena, he was motivated by something completely abstract. Later on when he worked on decrypting the Enigma codes he was able to create real machines to assist with the calculations, so inventions were the result of his envisionings as much as the other way around. 
Thereby saving lives, although in some cases Churchill had to let bombings take place so as not to divulge the results of Turing's real machines to the enemy.
{"url":"http://c2.com/cgi/wiki?TuringMachine","timestamp":"2014-04-20T22:01:27Z","content_type":null,"content_length":"14665","record_id":"<urn:uuid:f145fa64-586f-4c6d-a27b-9c8fc7802525>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
The Official Kobe Bryantichrist Thread

I know that a couple of posts have been made about this already, but I decided to make one with all the facts compiled, including some that were left out in the others.

- Kobe Bryantichrist stands 6'6
- Kobe Bryantichrist scores 81 points in his 666th game
- 46 FG's + 20 FT's = 66 shots at the basket
- 6 rebounds, 2 assists and 3 turnovers for a 0.66666666667 ast/to ratio, 6 missed 3's
- 66% of his teams points
- 6 minutes on the bench
- After the 122-104 victory over the Raptors, the Lakers are 3 over .500 after 41 games... double that rate and they are 6 games over .500 at the end of the season
- Chris Bosh got his 6th foul on Kobe Bryantichrist at the 2:36 mark (2 x 3 = 6)

numbers are funny...you could pretty much do that for any game and any player and come up with a bunch of such numerological findings...

numbers are funny...you could pretty much do that for any game and any player and come up with a bunch of such numerological findings...
you forgot that 81 points is 3 to the 3rd power and 3 + 3 = 6... haha

calidoc75 wrote: numbers are funny...you could pretty much do that for any game and any player and come up with a bunch of such numerological findings...
you forgot that 81 points is 3 to the 3rd power and 3 + 3 = 6... haha

Only it's 3 to the 4th power. So 3 times 4 is 12. Kobe has 2 legs, so divide by 2 and you get 6.

you guys forgot that 62 pts vs Dallas 81 pts vs Tor 17 A Bynum our savior!

what's the most resilient parasite?

according to the season splits section in kobes bio on nba.com hes scored 666 points at away games up to this point in the season

YouTube | Twitter | Facebook | Highlights | Instagram

Future PS3 owners club: Lakerfan 36, lakaboy 42, Hitemup, scheven, KrazyIvan909, flipogb, LUUUKE, Kobe The Dagger, kobe4ever
{"url":"http://www.clublakers.com/kobe/the-official-kobe-bryantichrist-thread-t48075.html","timestamp":"2014-04-18T03:00:50Z","content_type":null,"content_length":"34304","record_id":"<urn:uuid:07431b9a-8b3a-45a8-83fc-67e2d024e8b9>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
Circuit and architecture tradeoffs for high-speed multiplication

Results 1 - 10 of 28

- in International Symposium on Microarchitecture (MICRO35), Nov. 2002. Selected as one of the four Best IBM Research Papers in Computer Science, Electrical Engineering and Math published in , 2002. Cited by 41 (3 self)
During the concept phase and definition of next generation high-end processors, power and performance will need to be weighted appropriately to deliver competitive cost/performance. It is not enough to adopt a CPI-centric view alone in early-stage definition studies. One of the fundamental issues confronting the architect at this stage is the choice of pipeline depth and target frequency. In this paper we present an optimization methodology that starts with an analytical power-performance model to derive optimal pipeline depth for a superscalar processor. The results are validated and further refined using detailed simulation based analysis. As part of the power-modeling methodology, we have developed equations that model the variation of energy as a function of pipeline depth. Our results using a set of SPEC2000 applications show that when both power and performance are considered for optimization, the optimal clock period is around 18 FO4. We also provide a detailed sensitivity analysis of the optimal pipeline depth against key assumptions of these energy models.

- In Proceedings of the 24th Euromicro Conference. Cited by 23 (15 self)
In this paper we investigate the Sum Absolute Difference (SAD) operation, an operation frequently used by a number of algorithms for digital motion estimation. For such operation, we propose a single vector instruction that can be performed (in hardware) on an entire block of data in parallel. We investigate possible implementations for such an instruction. Assuming a machine cycle comparable to the cycle of a two cycle multiply, we show that for a block of 16x1 or 16x16, the SAD operation can be performed in 3 or 4 machine cycles respectively. The proposed implementation operates as follows: first we determine in parallel which of the operands is the smallest in a pair of operands. Second we compute the absolute value of the difference of each pair by subtracting the smallest value from the largest, and finally we compute the accumulation. The operations associated with the second and the third step are performed in parallel, resulting in a multiply (accumulate) type of operation. Our approach also covers the Mean Absolute Difference (MAD) operation, at the exclusion of a shifting (division) operation.

- IEEE Transactions on Computers, 2004. Cited by 20 (8 self)
been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties). Copies may be requested from IBM T. J. Watson Research Center, P.

- in IEEE Alessandro Volta Memorial Workshop on Low Power Design, 1999. Cited by 19 (5 self)
Reducing the power dissipation of parallel multipliers is important in the design of digital signal processing systems. In many of these systems, the products of parallel multipliers are rounded to avoid growth in word size. The power dissipation and area of rounded parallel multipliers can be significantly reduced by a technique known as truncated multiplication. With this technique, the least significant columns of the multiplication matrix are not used. Instead, the carries generated by these columns are estimated. This estimate is added with the most significant columns to produce the rounded product. This paper presents the design and implementation of parallel truncated multipliers. Simulations indicate that truncated parallel multipliers dissipate between 29 and 40 percent less power than standard parallel multipliers for operand sizes of 16 and 32 bits. 1: Introduction. High-speed parallel multipliers are fundamental building blocks in digital signal processing systems [1].

- 1991. Cited by 18 (1 self)
The Stanford Nanosecond Arithmetic Project is targeted at realizing an arithmetic processor with performance approximately an order of magnitude faster than currently available technology. The realization of SNAP is predicated on an interdisciplinary approach and effort spanning research in algorithms, data representation, CAD, circuits and devices, and packaging. SNAP is visualized as an arithmetic coprocessor implemented on an active substrate containing several chips, each of which realize a particular arithmetic function. This year's report highlights recent results in the area of wave pipelining. We have fabricated a number of prototype die, implementing a multiplier slice. Cycle times below 5 ns were realized.

- IEEE Journal of Solid-State Circuits, 1996. Cited by 8 (0 self)
A novel high-speed circuit implementation of the (31,5)-parallel counter (i.e., population counter) based on capacitive threshold logic (CTL) is presented. The circuit consists of 20 threshold logic gates arranged in two stages, i.e., the parallel counter described here has an effective logic depth of two. The charge-based CTL gates are essentially dynamic circuits which require a periodic refresh or precharge cycle, but unlike conventional dynamic CMOS gates, the circuit can be operated in synchronous as well as in asynchronous mode. The counter circuit is implemented using conventional 1.2 µm double-poly CMOS technology, and it occupies a silicon area of about 0.08 mm^2. Extensive post-layout simulations indicate that the circuit has a typical input-to-output propagation delay of less than 3 ns, and the test circuit is shown to operate reliably when consecutive 31-b input vectors are applied at a rate of up to 16 Mvectors/s. With its demonstrated data processing capability of abou...

- Design Automation and Test in Europe (DATE '07). Cited by 8 (7 self)
Despite the progress of the last decades in electronic design automation, arithmetic circuits have always received way less attention than other classes of digital circuits. Logic synthesisers, which play a fundamental role in design today, play a minor role on most arithmetic circuits, performing some local optimisations but hardly improving the overall structure of arithmetic components. Architectural optimisations have often been studied manually, and only in the case of very common building blocks such as fast adders and multi-input adders have ad-hoc techniques been developed. A notable case is multi-input addition, which is the core of many circuits such as multipliers, etc. The most common technique to implement multi-input addition is using compressor trees, which are often composed of carry-save adders (based on (3:2) counters, i.e., full adders). A large body of literature exists to implement compressor trees using large counters. However, all the large counters were built by using full and half adders recursively. In this paper we give some definite answers to issues related to the use of large counters. We present a general technique to implement large counters whose performance is much better than the ones composed of full and half adders. Also we show that it is not always useful to use larger optimised counters, and sometimes a combination of various size counters gives the best performance. Our results show 15% improvement in the critical path delay. In some cases even hardware area is reduced by using our counters.

- IEEE 30th Asilomar Conference on Signals, Systems and Computers, 1996. Cited by 7 (6 self)
In this paper we propose new, threshold logic based, 7|2 counters. In particular we show that 7|2 counters can be implemented with threshold logic gates in three levels of gates with explicit computation of the outputs. Consequently, we improve the delay by showing that 7|2 counters can be designed with two levels of gates and implicit computation of the sum. Further, we investigate multiplication schemes using such counters, in combination with Kautz's networks for symmetric Boolean functions. Using a 32x32 direct multiplication scheme based on 7|2 implicit output computation counters and the Kautz's networks, we show that our scheme outperforms known proposals for multiplication using threshold logic in terms of area requirements.

- IEEE Transactions on Computers, 2000. Cited by 7 (2 self)
Abstract: High-speed multiplication is frequently used in general-purpose and application-specific computer systems. These systems often support integer multiplication, where two n-bit integers are multiplied to produce a 2n-bit product. To prevent growth in word length, processors typically return the n least significant bits of the product and a flag that indicates whether or not overflow has occurred. Alternatively, some processors saturate results that overflow to the most positive or most negative representable number. This paper presents efficient methods for performing unsigned or two's complement integer multiplication with overflow detection or saturation. These methods have significantly less area and delay than conventional methods for integer multiplication with overflow detection or saturation.

- in Proc. 2000 IEEE Int. Symp. Circuits and Systems, 2000. Cited by 7 (0 self)
This paper describes the design and implementation of a high-speed low-power 16 by 16 two's complement parallel multiplier. The multiplier uses optimized radix-4 Booth encoders to generate the partial products, and an array of strategically placed (3,2), (5,3), and (7,4) counters to reduce the partial products to sum and carry vectors. The more significant bits of the product are computed from left to right using a modified Ercegovac-Lang converter. An implementation of the multiplier in 0.25-µm static CMOS technology has an area of 0.126 mm^2, a measured delay of 4.39 ns, and an average power dissipation of 0.110 mW/MHz at 2.5 Volts and 100 °C.
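Several of these abstracts revolve around (3,2) counters, i.e. full adders, used to reduce many operands to a redundant sum/carry pair before one final carry-propagating addition. A toy bitwise illustration of a single carry-save step (an illustration only, not any of the cited circuit designs):

```python
# One carry-save step: a (3,2) counter (a full adder) applied bitwise maps
# three operands to a sum word and a carry word; a tree of such steps
# reduces many partial products to just two before one final add.
def csa(a, b, c):
    s = a ^ b ^ c                               # per-bit sum
    carry = ((a & b) | (a & c) | (b & c)) << 1  # per-bit majority, shifted
    return s, carry

x, y, z = 13, 27, 9
s, c = csa(x, y, z)
print(s, c, s + c == x + y + z)  # 31 18 True
```

The key property is that the reduction is carry-free: each output bit depends on only three input bits, so the step takes constant depth regardless of word length.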
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=109484","timestamp":"2014-04-19T19:46:34Z","content_type":null,"content_length":"42129","record_id":"<urn:uuid:b42f0f62-8c9e-4ced-b305-aa3a27facc76>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2011/629

Near-Linear Unconditionally-Secure Multiparty Computation with a Dishonest Minority

Eli Ben-Sasson and Serge Fehr and Rafail Ostrovsky

Abstract: Secure multiparty computation (MPC) allows a set of n players to compute any public function, given as an arithmetic circuit, on private inputs, so that privacy of the inputs as well as correctness of the output are guaranteed. Of special importance both in cryptography and in complexity theory is the setting of information-theoretic MPC, where (dishonest) players are unbounded, and no cryptographic assumptions are used. In this setting, it was known since the 1980's that an honest majority of players is both necessary and sufficient to achieve privacy and correctness. The main open question that was left in this area is to establish the exact communication complexity of MPC protocols that can tolerate malicious behavior of a minority of dishonest players. In all works, there was a large gap between the communication complexity of the best known protocols in the malicious setting and the "honest-but-curious" setting, where players do not deviate from the protocol. In this paper, we show, for the first time, an MPC protocol that can tolerate a dishonest minority of malicious players that matches the communication complexity of the best known MPC protocol in the honest-but-curious setting. More specifically, we present a new n-player MPC protocol that is secure against a computationally-unbounded active and malicious adversary that can adaptively corrupt up to a minority t < n/2 of the players. For polynomially-large binary circuits that are not too unshaped, our protocol has an amortized communication complexity of O(n log n + k/n^c) bits per multiplication (i.e. AND) gate, where k denotes the security parameter and c is an arbitrary non-negative constant.
This improves on the previously most efficient protocol with the same security guarantee, which offers an amortized communication complexity of O(n^2 k) bits per multiplication gate. For any k polynomial in n, the amortized communication complexity of our protocol matches the best known O(n log n) communication complexity of passively secure MPC protocols. Thus, our result gives the first near-linear complexity of MPC (instead of quadratic) in the dishonest-minority setting and settles the question of the difference in communication complexity between the honest-but-curious and fully malicious settings. For sufficiently large circuits, our protocol can be improved only if the honest-but-curious protocol can be improved. We introduce several novel techniques for reducing communication complexity of MPC in the malicious setting that are of independent interest and we believe will have wider applicability. One is a novel idea of computing authentication tags by means of a mini MPC, which allows us to avoid expensive double-sharings when dealing with malicious players; the other is a batch-wise multiplication verification that allows us to speed up Beaver's "multiplication triples". The techniques draw from the PCP world, and this infusion of new techniques from other domains of computational complexity may find further uses in the context of MPC.

Category / Keywords: cryptographic protocols / MPC
Publication Info: Full version of CRYPTO 2012 article
Date: received 21 Nov 2011, last revised 17 Aug 2012
Contact author: serge fehr at cwi nl
Available format(s): PDF | BibTeX Citation
Version: 20120817:191745 (All versions of this report)
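Protocols in this honest-minority setting build on threshold secret sharing among the n players. As a toy illustration only (not the paper's construction), here is Shamir (t, n) sharing over a small prime field, where any t + 1 shares reconstruct the secret and any t reveal nothing:

```python
# Toy Shamir (t, n) secret sharing over a prime field -- an illustration of
# the threshold sharing that honest-minority MPC builds on, NOT the paper's
# protocol. Any t + 1 of the n shares reconstruct the secret.
import random

P = 2**31 - 1  # a prime modulus (any prime larger than n and the secret works)

def share(secret, t, n):
    # random degree-t polynomial with constant term equal to the secret
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = share(1234, t=2, n=5)
print(reconstruct(shares[:3]))  # 1234 -- any 3 of the 5 shares suffice
```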
{"url":"http://eprint.iacr.org/2011/629/20120817:191745","timestamp":"2014-04-20T05:44:51Z","content_type":null,"content_length":"4903","record_id":"<urn:uuid:c0ce14fa-f03d-489e-a791-f1849e01c5f6>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
Puyallup Calculus Tutor

Find a Puyallup Calculus Tutor

...Core content includes the acquisition of skills and the application of concepts in linear inequalities, polynomial functions, composite and inverse functions. Learning outcomes include exponential and logarithmic functions, domain, range, asymptotes, probability, and trigonometric functions. I have helped many students acquire and master these skills.
11 Subjects: including calculus, geometry, probability, algebra 1

I have tutored math for over 3 years. I am happy to help with many different math classes, from Elementary math to Calculus. I have helped my former classmates and my younger brother many times with Physics.
16 Subjects: including calculus, chemistry, French, geometry

...Previously, I tutored physics, mathematics, chemistry, and English to three students who later graduated as high school valedictorians. I hold a PhD in Aeronautical and Astronautical Engineering from the University of Washington, and the coursework for my Ph.D. included an extensive amount of mathematics. Also, I have used mathematics throughout my career in science and
21 Subjects: including calculus, chemistry, English, physics

...Thank you for your interest!
4.0 in Differential, Integral, Vector, and Multi-variable Calculus. At the college level, I have taken a differential equations course and earned a 4.0. I also have taken single variable, multi-variable, and vector calculus. I have been tutoring on campus for nearly two years as well, in both mathematics and logic.
10 Subjects: including calculus, physics, algebra 1, algebra 2

...As a result, I ask for 4 hours notice of a need to cancel or reschedule. Your satisfaction is what matters to me. I look forward to hearing from you so that we can work together to achieve the success you desire.
8 Subjects: including calculus, geometry, algebra 1, algebra 2
{"url":"http://www.purplemath.com/Puyallup_Calculus_tutors.php","timestamp":"2014-04-17T13:38:53Z","content_type":null,"content_length":"23792","record_id":"<urn:uuid:8428f0b2-0dc8-444b-9833-15df9640fcdf>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] A newbie question: How to get the "rank" of an 1-d array CL anewgene at gmail.com Mon Mar 27 08:37:04 CST 2006 Hi, group, I need to get the "rank" of an 1-D array (ie. a vector). Note that "rank" here is not the value returned from "rank(a_array)". It is the order of the item in its sorted arrray. For example, I have a python function called "listrank" to return the "rank" as below: In [19]: x Out[19]: array([1, 2, 5, 3, 3, 2]) In [20]: listrank(x) Out[20]: [6, 4, 1, 2, 2, 4] Somebody suggested me to use "argsort(argsort(x))". But the problem is it does not handle ties. See the output: In [21]: argsort(argsort(x)) Out[21]: array([0, 1, 5, 3, 4, 2]) I am wondering if there is a solution in numpy/numarray/numeric to get this done nicely. More information about the Numpy-discussion mailing list
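The tie-handling ranking the poster describes (largest value gets rank 1, ties share the best rank) can be computed directly: an element's rank is one plus the number of strictly larger elements. A simple vectorized sketch, quadratic in memory, so only suitable for modest arrays:

```python
# the poster's listrank with ties handled: an element's rank is one plus
# the number of strictly larger elements, so the largest value gets rank 1
# and ties share a rank. O(n^2) memory -- only for modest arrays.
import numpy as np

def listrank(x):
    x = np.asarray(x)
    return (x[:, None] < x[None, :]).sum(axis=1) + 1

print(listrank([1, 2, 5, 3, 3, 2]))  # [6 4 1 2 2 4]
```

In modern SciPy the same result is `scipy.stats.rankdata(-x, method='min')`, though that keyword argument postdates this thread; for large arrays an argsort-based O(n log n) approach is preferable to the quadratic comparison matrix.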
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-March/007141.html","timestamp":"2014-04-18T00:23:40Z","content_type":null,"content_length":"3492","record_id":"<urn:uuid:086d121a-d3b9-4e5d-bad3-55614938f05a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00093-ip-10-147-4-33.ec2.internal.warc.gz"}
Galois Group, generators March 15th 2009, 07:30 PM Galois Group, generators Suppose $G$ is the Galois Group over $\mathbb{Q}$ of one of the polynomials: i) $x^{11}-1$ ii) $x^4-2$ For your choice of polynomial, find the generators of the splitting field $E$ over $\mathbb{Q}$ and describe the elements of G by their effect on these generators. Finally, describe the group by generators and relations. I just need to do one of the polynomials above and I am just beginning to learn Galois Theory, so I don't understand how to do this. I guess I need a good head start on how to solve this problem. March 15th 2009, 08:10 PM Suppose $G$ is the Galois Group over $\mathbb{Q}$ of one of the polynomials: i) $x^{11}-1$ ii) $x^4-2$ For your choice of polynomial, find the generators of the splitting field $E$ over $\mathbb{Q}$ and describe the elements of G by their effect on these generators. Finally, describe the group by generators and relations. I just need to do one of the polynomials above and I am just beginning to learn Galois Theory, so I don't understand how to do this. I guess I need a good head start on how to solve this problem. Let me do it for $x^{11}-1$ and you think about $x^4-2$. The polynomial $x^{11} - 1$ can be factored as $\prod_{j=0}^{10}(x - \zeta^j)$ where $\zeta = e^{2\pi i/11}$. Therefore, the splitting field is $\mathbb{Q}(1,\zeta,\zeta^2,...,\zeta^{10}) = \mathbb{Q}(\zeta) = K$. Now $[K:\mathbb{Q}]$ is equal to the degree of the minimal polynomial of $\zeta$ over $\mathbb{Q}$. Notice $f(x) = x ^{10}+x^9+...+x+1$ is irreducible over $\mathbb{Q}$ (I am assuming you know this result) so $[K:\mathbb{Q}] = 10$. Let $G = \text{Gal}(K/\mathbb{Q})$ and thus $|G| = [K:\mathbb{Q}] = 10$. This tells us that we are looking at a Galois group that has $10$ elements. If $\theta \in G$ is an automorphism of $K$ then it is completely determined by its value of $\theta (\zeta)$. 
Remember that automorphisms of $K$ permute the zeros of polynomials; this means that $\theta(\zeta)$ must be one of the zeros of $f(x)$, thus $\theta (\zeta) = \zeta,\zeta^2,...,\zeta^{10}$. We see that there are at most $10$ automorphisms of $K$; however, we know that $|G|=10$, which forces us to conclude that there is an automorphism $\sigma_k$ such that $\sigma_k (\zeta) = \zeta^k$ for each $k= 1,2,...,10$. Therefore, $G = \{ \sigma_1,\sigma_2,...,\sigma_{10}\}$. It is easy to see that $G$ behaves like the group $\mathbb{Z}_{11}^{\times}$, i.e. $\sigma_a \sigma_b = \sigma_{ab(\bmod 11)} $. Thus, $G$ has a generator, for example $\sigma_2$, i.e. $G = \left< \sigma_2\right>$.
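One can verify that $\sigma_2$ really generates $G$ by checking that 2 is a primitive root mod 11, i.e. that its powers exhaust all of $\{1, \dots, 10\}$:

```python
# sigma_2 generates G exactly when 2 is a primitive root mod 11, i.e. the
# powers 2^1, ..., 2^10 run through every nonzero residue mod 11
powers = [pow(2, k, 11) for k in range(1, 11)]
print(powers)          # [2, 4, 8, 5, 10, 9, 7, 3, 6, 1]
print(sorted(powers))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```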
{"url":"http://mathhelpforum.com/advanced-algebra/78915-galois-group-generators-print.html","timestamp":"2014-04-20T01:18:53Z","content_type":null,"content_length":"14424","record_id":"<urn:uuid:50d12810-5530-48fc-81ce-85d5c73e9f0c>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
Introducing RouteMaster

RouteMaster is the engine that powers point to point path finding for Strava routes in our newly released Route Builder. The killer feature of RouteMaster is data drawn from real Strava athletes. Instead of merely knowing which paths are designated for cycling/running, we analyze tens of billions of GPS points from millions of Strava activities. This data gives aggregate statistics about how athletes use every road and path section in the world. This post is an in depth engineering overview of routing and the design behind RouteMaster. Building a routing engine from the ground up was a fantastic experience, and I wanted to share some of the more interesting components and problems that I solved along the way. A word of warning: this post gets very technical and assumes a good deal of background with data-structures and algorithms.

Routing Goals

A simple approach to road routing is to minimize total distance along the path between the origin and destination. This can be formulated as the single pair shortest path problem over a graph with intersections as nodes, road sections as edges, and edge weights or costs as geographic distance. More sophisticated routing engines for driving often use a more sophisticated notion of cost. The goal of a driver is almost always to minimize travel time. Knowledge about speed limits, current traffic, and even turning costs can be used to construct a cost function that is an estimate of the expected time of the journey.

RouteMaster's dynamic cost function takes into account factors that aim to optimize routes according to the preferences of athletes. For example, when the route popularity toggle is turned on, edge popularity attributes are considered in the cost function (as well as length). RouteMaster increases the cost of roads that are seldom traveled. This leads to routes that are more likely to use popular cycling routes, and that might be longer (but hopefully more enjoyable!) than the shortest route.
The elevation toggle penalizes vertical elevation gain and allows routes with minimal climbing. We hope to add more routing controls in the future.

Graph Datastore

A graph store is required to encode the topological structure of road networks. Graph edges correspond to road sections, and graph nodes correspond to road intersections. For example, a four way intersection would be represented as a node with four connecting edges. The graph store supports a query that returns the edges adjacent to any node.

We experimented with existing solutions such as neo4j and Graphhopper before deciding to use a custom graph store implementation. Graphhopper is a great project, but the codebase was at the time too specialized to automobile routing. Neo4j is ultimately designed to support mutable graphs, and thus can't hope to do things as efficiently as a custom immutable implementation. We don't need to support live editing of route data– updates are batched and the database is regenerated every few weeks.

Because our graph is immutable, implementing a custom data store wasn't too difficult. We use an adjacency list representation of the graph, modified so that everything fits in a single array indexed by edge_id. Graphhopper gives a good overview of a similar data structure. The graph store only uses 16 bytes per edge, and 4 bytes per node.

Both edges and nodes also have attributes. Node attributes include lat/lon coordinates and elevation. Edge attributes include information about elevation gain, gradient, length, popularity, and type of road. Constant sized node and edge attributes are stored in a binary array flatfile. We also use a blob storage data-structure that keeps variably sized edge attributes. These attributes include road names and compressed elevation and path geometry. These variable sized attributes are not accessed in the graph traversal stage– they are only used to annotate the final route with metadata.
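A compact immutable adjacency layout along these lines can be sketched as follows; the field names and layout here are illustrative, not RouteMaster's actual binary format:

```python
# Sketch of an immutable adjacency-array graph store: edge targets packed
# into one array, with an offset array delimiting each node's edges.
# (Illustrative layout, not RouteMaster's actual format.)
import numpy as np

class StaticGraph:
    def __init__(self, num_nodes, edges):  # edges: list of (src, dst) pairs
        by_src = sorted(edges)
        self.edge_target = np.array([dst for _, dst in by_src], dtype=np.int32)
        # first_edge[u] .. first_edge[u+1] delimit node u's outgoing edge ids
        counts = np.zeros(num_nodes + 1, dtype=np.int64)
        for src, _ in edges:
            counts[src + 1] += 1
        self.first_edge = np.cumsum(counts)

    def neighbors(self, u):
        return self.edge_target[self.first_edge[u]:self.first_edge[u + 1]]

g = StaticGraph(4, [(0, 1), (0, 2), (2, 3), (1, 3)])
print(g.neighbors(0))  # [1 2]
```

Because the structure never mutates, there are no pointers and no per-node allocation overhead, which is what keeps the per-edge and per-node byte counts so small.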
Geospatial Index

A geospatial index determines the geographically closest nodes to the user's origin and destination points. The index supports a nearest neighbor query. Tree based structures (kd-tree, r-tree) are most commonly used to solve this problem. However, tree based solutions typically have large memory requirements, especially if the tree is built using 64 bit pointers. Our index solution uses the clever geohash algorithm and stresses storage efficiency over high performance.

Geohash is a hashing algorithm for lat/lng coordinates. Geohashes can also be represented as an N bit integer. An N bit geohash represents a rectangle in lat/lng coordinate space. A subsequent bit of precision divides the rectangle in two. For example, all points in the western hemisphere have '0' as the first bit, and points in the eastern hemisphere begin with '1'. A prefix of a geohash (fewer bits of precision) represents a larger box that contains the smaller box corresponding to the longer geohash. If you want to learn more about geohash, you can read a longer introduction or play with an interactive demo.

Our index consists of a sorted array of tuples containing (geohash(node.latlng), node.id), where the geohashes are stored to a full 64 bits of precision. We can use this array to efficiently look up all records of nodes that fall into the rectangle for a geohash of any precision. Since the array is sorted by geohash, we just need to find the indices of the first and last records that begin with the geohash prefix of interest. These indices can be efficiently determined using a binary-search-like procedure. The subarray between these two indices contains all of the records of interest.

Nearest neighbor queries can be executed as follows: a procedure converts the input coordinate into a set of geohash prefixes that cover a circle of some small radius (40 meters) about the query point.
(There is a lot of complexity in this procedure; check out this presentation for a good overview of a similar implementation.) The index records for each prefix are fetched. Geohash also happens to be reversible– you can reconstruct the original lat/lng point to some accuracy. We can thus order the set of returned nodes based on exact distance to our query point.

If no points are returned on the initial query, the search radius is exponentially increased, up to a limit of 5 km, until a point is found. The radius limit prevents too many points from being found at once. Since the points inserted into the index come from real road geometry, there is a nice limit on the density of points– this means there isn't a worst case where millions of points need to be read from the index when doing a circle query.

In practice, query times are typically well under 1 millisecond. This isn't fast compared to a kd-tree, but it is small compared to the time spent in graph traversal and route construction. Each index record is only 8 bytes for the geohash and 4 bytes for the node reference.

Edge awareness

It is convenient to think of a route as a list of edges that yield a path from one node to another node. However, this simple model does not always hold. Some of the longer edges are over 100 miles. With such long edges, it isn't ideal to force the user to start and end routes only at nodes. Routes might even begin and end on the same edge.

The geospatial index for RouteMaster thus contains additional points corresponding to points spaced out along long edges. These index entries refer to an edge_id rather than a node_id. They are added at points spaced approximately every 50 meters for all edges longer than 50 meters.

However, it isn't enough just to know that the user wants to start their route in the middle of an edge. The ideal route might proceed in either direction along the starting edge. Since an edge is not a node, the search algorithm can't handle this case.
To solve this problem, we add a 'virtual' node to the graph. The virtual node has two virtual edges that connect to the corresponding real nodes of the underlying edge where the route begins. These nodes and edges are virtual in the sense that they are only part of the graph within the scope of a single routing request, and thus don't require any persistent memory. The search algorithm is free to consider routes that proceed in either direction by routing over the virtual edges. In the tricky corner case where the route begins and ends on the same edge, the routing problem is normally trivial and can be handled separately.

Partition Awareness

It isn't always possible to get from point A to point B. If no path exists, a naive search algorithm must traverse every reachable edge before giving up. RouteMaster is aware of the connected components of the graph and can efficiently reason about connectedness.

All routable edges are inserted into a disjoint-set data structure. This data structure generates a labeling of nodes according to the partition each belongs to. A path exists between two nodes if and only if they are in the same partition. Using this property and the partition labeling, RouteMaster can decide in constant time whether a route exists.

Furthermore, RouteMaster attempts to pick roads that are connected to each other. If the nearest nodes to the route start/end points belong to different partitions, more neighbor nodes can be examined until a pair of nodes with matching partitions is found. There are hundreds of thousands of disconnected single edges which are avoided by this method. Route waypoints will often magically snap to nodes which are connected, avoiding isolated networks.

A* Search and Bounded Relaxation

RouteMaster uses the A* algorithm to find routes. To greatly summarize, A* is a heuristic-based algorithm that efficiently finds the shortest path between a single pair of nodes.
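The partition labeling from the Partition Awareness section above can be sketched with a standard union-find (disjoint-set) structure; this is a generic textbook illustration, not RouteMaster's code:

```python
# Generic union-find sketch (not RouteMaster's code): union the endpoints of
# every routable edge, then "does a route exist?" is a root comparison.

class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

    def connected(self, a, b):
        """True iff some path exists between nodes a and b."""
        return self.find(a) == self.find(b)
```

With path halving (and optionally union by rank), each operation is effectively constant time, which matches the constant-time route-existence check described above.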
Nodes are explored according to a priority (a known current cost plus a heuristic estimate of the remaining cost) that ranks 'open' nodes on the frontier of the search. The known cost is the (optimal) path cost from the origin to the current node, and the heuristic is a function that estimates the remaining cost from the current node to the goal node. It can be shown that A* is optimal as long as the heuristic function never overestimates the remaining cost.

For road routing, the standard heuristic function is distance as the crow flies between the current node and the goal node. This heuristic is admissible because no route between two points is shorter than a straight line. In most cases the heuristic will lead to fewer explored nodes (compared to Dijkstra's algorithm).

A useful speed/correctness tradeoff technique (known as Bounded Relaxation) is to multiply the heuristic function by a relaxation factor greater than one. This makes the search more greedy, as the heuristic carries more weight; the algorithm becomes biased towards exploring nodes closer to the straight line between the origin and goal. Correctness of the algorithm is sacrificed– the first path found might not be the shortest. However, it can be shown that the path found is never worse than the relaxation factor times the cost of the correct path.

Through experimentation I found extremely promising results from this tradeoff. In some ideal cases, the number of nodes explored during a search was reduced by a factor of 20 with a negligible increase in path length (using ~1.2 times the correct heuristic). This tradeoff doesn't work well when the shortest path is much longer than the distance as the crow flies. A common example of this is routing around a convex body of water.

RouteMaster is an efficient solution to a specialized problem. All data structures, including the geospatial index, graph representation, and node and edge properties, use less than 16 gigabytes for the entire world (140 million edges).
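As an illustrative sketch (not RouteMaster's Scala implementation), bounded relaxation is just A* with the heuristic term scaled by a factor epsilon greater than one:

```python
import heapq
import math

# Illustrative weighted-A* sketch (not RouteMaster's implementation): with
# epsilon > 1 the heuristic dominates and the search is greedier; the cost
# of the returned path is at most epsilon times the optimal cost.
# `graph` maps node -> list of (neighbor, edge_cost); `h` is the heuristic.

def weighted_astar(graph, start, goal, h, epsilon=1.2):
    frontier = [(epsilon * h(start), 0.0, start, [start])]
    best = {start: 0.0}  # cheapest known cost to reach each node
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best.get(nbr, math.inf):
                best[nbr] = g2
                heapq.heappush(frontier,
                               (g2 + epsilon * h(nbr), g2, nbr, [*path, nbr]))
    return None  # goal unreachable
```

With epsilon = 1 this is plain A*; raising epsilon trades path optimality (bounded by the factor) for fewer explored nodes, as described above.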
Searches can traverse around 1 million edges per second. Routes under 50 miles can typically be found in a few hundred milliseconds. All data structures are immutable and are stored off the JVM heap in memory-mapped files. Server startup time is nearly instant, and we don't experience the ordeal of being bound by garbage collection. The server is implemented in Scala using a Finagle/Thrift interface. Zookeeper is used for service discovery and load balancing.

I hope you enjoy building routes and exploring the world with RouteMaster! Please leave any questions or feedback in the comments below.
Can anyone explain how the kinetic energy of this magnetic marble gauss rifle works?

Originally posted by Cyberice: Any one?

Okay. First off, I'd think it would be easier to think of it in terms of momentum than KE, but to each his own.

When you let go of the first marble (A), it is pulled towards the first magnet (1), accelerating as it does so. On the other side of magnet 1 are two marbles (B and C). The momentum of marble A is transferred through magnet 1 and marble B to marble C. This is enough to break the magnetic grip magnet 1 has on C and still have some momentum left over. (Marble C will not have quite the same KE as A at this point because some of it was used to break away from magnet 1. One purpose of marble B is to increase the distance between marble C and magnet 1 and reduce the amount of energy needed for this.)

Marble C is attracted to magnet 2, so it accelerates towards it. (This is the other purpose of marble B: it ensures marble C starts out far enough away from magnet 1 that it gains more momentum falling towards magnet 2 than it loses pulling away from magnet 1.) Marble C will strike magnet 2 with the combined momentum that it got from marble A plus the momentum gained by acceleration due to its attraction to magnet 2. It transfers this momentum to marble E through magnet 2 and marble D, which in turn gains additional momentum by accelerating towards magnet 3, etc.

In such a way, each marble in the sequence has more momentum than the previous one, until you get to the last marble.
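A rough energy-bookkeeping sketch of that argument (all numbers are made-up illustrative values, not measurements of a real gauss rifle):

```python
# Rough bookkeeping sketch of the explanation above. Per stage, the exiting
# marble gains the attraction energy falling toward the next magnet and pays
# the (smaller) escape energy pulling away from the previous one, so kinetic
# energy grows stage by stage. All values below are hypothetical.

def final_ke(ke_in, stages, e_attract, e_escape):
    """Kinetic energy (joules) after passing through `stages` magnet stages."""
    ke = ke_in
    for _ in range(stages):
        ke += e_attract - e_escape  # net gain per stage; e_attract > e_escape
    return ke
```

With, say, 0.05 J gained per attraction and 0.01 J lost per escape, three stages turn a 0.001 J starting push into about 0.121 J, which is why the last marble exits so much faster than the first one was released.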
Patent application title: SUPERCHARGED CODES

A system and method are provided for encoding k input symbols into a longer stream of n output symbols for transmission over an erasure channel, such that the original k input symbols can be recovered from a subset of the n output symbols without the need for any retransmission. A symbol is a generic data unit, consisting of one or more bits, that can be, for example, a packet. The system and method utilize a network of erasure codes, including block codes and parallel filter codes, to achieve performance very close to the ideal MDS code with low encoding and decoding computational complexity for both small and large encoding block sizes. This network of erasure codes is referred to as a supercharged code. The supercharged code can be used to provide packet-level protection at, for example, the network, application, or transport layers of the Internet protocol suite.

CLAIMS

1. A method for erasure coding of input symbols that form messages, comprising: implementing at least three block coding operations that respectively provide a first, second, and third set of code words based on the messages; implementing at least two filter coding operations that respectively provide a fourth and fifth set of code words based on the first set of code words; modifying an order in which bits of the first set of code words are taken into account for at least one of the two filter coding operations; and parallel concatenating the second, third, fourth, and fifth sets of code words to form encoded symbols for transmission over an erasure channel.

2. The method of claim 1, further comprising: implementing a repetition coding operation that respectively repeats the second and third sets of code words some number of times before the second and third sets of code words are parallel concatenated with the fourth and fifth sets of code words.
3. The method of claim 1, wherein the second, third, fourth, and fifth sets of code words are parallel concatenated using an exclusive or operation.

4. The method of claim 1, further comprising: multiplexing the fourth and fifth sets of code words together in an irregular manner before parallel concatenating the second, third, fourth, and fifth sets of code words.

5. The method of claim 1, wherein the one of the block coding operations that provides the first set of code words implements a binary block code.

6. The method of claim 1, wherein the one of the block coding operations that provides the second set of code words implements a non-binary block code over a finite field.

7. The method of claim 6, wherein the non-binary block code is a Reed-Solomon block code.

8. The method of claim 1, wherein the one of the block coding operations that provides the third set of code words implements a binary block code.

9. The method of claim 1, wherein at least one of the two filter coding operations uses a tailbiting filter.

10. An encoder for erasure coding of input symbols that form messages, comprising: three block coding modules configured to respectively provide a first, second, and third set of code words based on the messages; two filter coding modules configured to respectively provide a fourth and fifth set of code words based on the first set of code words; an interleaver configured to modify an order in which bits of the first set of code words are taken into account for at least one of the two filter coding modules; and a concatenation module configured to parallel concatenate the second, third, fourth, and fifth sets of code words to form encoded symbols for transmission over an erasure channel.
11. The encoder of claim 10, further comprising: a repetition coding module configured to repeat the second and third sets of code words some number of times before the second and third sets of code words are parallel concatenated with the fourth and fifth sets of code words by the concatenation module.

12. The encoder of claim 10, further comprising: a multiplexer configured to multiplex the fourth and fifth sets of code words together in an irregular manner before the second, third, fourth, and fifth sets of code words are parallel concatenated by the concatenation module.

13. The encoder of claim 10, wherein the one of the three block coding modules configured to provide the second set of code words implements a non-binary block code over a finite field.

14. The encoder of claim 13, wherein the non-binary block code is a Reed-Solomon block code.

15. The encoder of claim 10, wherein at least one of the two filter coding modules comprises a tailbiting filter.

16. The encoder of claim 10, wherein at least one of the two filter coding modules comprises a finite impulse response (FIR) filter.

17. The encoder of claim 10, wherein the concatenation module implements an exclusive or operation.

18. The encoder of claim 10, wherein the encoder is implemented in a desktop computer, a laptop computer, a tablet computer, a mobile phone, a set-top box, or a router.

19. An encoder for erasure coding of input symbols that form messages, comprising: a block coding module configured to provide a first set of code words based on the messages; two filter coding modules separated by an interleaver and configured to respectively provide a second and third set of code words based on the messages; and a concatenation module configured to parallel concatenate the first, second, and third sets of code words to form encoded symbols for transmission over an erasure channel.
20. A decoder comprising: a processor; and a memory, wherein the processor is configured to decode symbols encoded by: implementing a block coding operation to provide a first set of code words based on messages formed by the symbols; implementing at least two filter coding operations, separated by an interleaver, to provide a second and third set of code words based on the messages; and concatenating the first, second, and third sets of code words.

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Patent Application No. 61/592,202, filed Jan. 30, 2012, U.S. Provisional Patent Application No. 61/622,223, filed Apr. 10, 2012, U.S. Provisional Patent Application No. 61/646,037, filed May 11, 2012, and U.S. Provisional Patent Application No. 61/706,045, filed Sep. 26, 2012, all of which are incorporated herein by reference.

TECHNICAL FIELD

[0002] This application relates generally to coding of symbols for transmission over an erasure channel and, more particularly, to coding of packets for transmission over a packet erasure channel.

BACKGROUND

[0003] The packet erasure channel is a communication channel model where transmitted packets are either received or lost, and the location of any lost packet is known. The Internet usually can be modeled as a packet erasure channel. This is because packets transmitted over the Internet can be lost due to corruption or congestion, and the location of any lost packet can be inferred from a sequence number included in a header or payload of each received packet. Depending on the type of data carried by a stream of packets, a lost packet can reduce the quality of the data or even render the data unusable at a receiver. Therefore, recovery schemes are typically used to provide some level of reliability that packets transmitted over an erasure channel will be received.
For example, retransmission schemes are used to recover lost packets in many packet-based networks, but retransmissions can result in long delays when, for example, there is a large distance between the transmitter and receiver or when the channel is heavily impaired. For this reason and others, forward error correction (FEC) using an erasure code is often implemented in place of, or in conjunction with, conventional retransmission schemes. An erasure code encodes a stream of k packets into a longer stream of n packets such that the original stream of k packets can be recovered at a receiver from a subset of the n packets without the need for any retransmission.

The performance of an erasure code can be characterized based on its reception efficiency and the computational complexity associated with its encoding and decoding algorithms. The reception efficiency of an erasure code is given by the fraction k'/k, where k' is the minimum number of the n packets that need to be received in order to recover the original stream of k packets. Certain erasure codes have optimal reception efficiency (i.e., the highest obtainable reception efficiency) and can recover the original stream of k packets using any (and only) k packets out of the n packets transmitted. Such codes are said to be maximum distance separable (MDS) codes.

The Reed-Solomon code is an MDS code with optimal reception efficiency, but the typical encoding and decoding algorithms used to implement the Reed-Solomon code have high associated computational complexities. Specifically, their computational complexities grow with the number of packets n and are of the order O(n log(n)). This makes a pure Reed-Solomon solution impractical for many packet-based networks, including the Internet, that support the transmission of large files/streams segmented into many, potentially large, packets.
BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate the embodiments of the present disclosure and, together with the description, further serve to explain the principles of the embodiments and to enable a person skilled in the pertinent art to make and use the embodiments.

FIG. 1 illustrates a block diagram of an encoder implementing the supercharged code in accordance with embodiments of the present disclosure.

FIG. 2 illustrates an example parallel filter coding module that can be used by an encoder implementing the supercharged code in accordance with embodiments of the present disclosure.

FIG. 3 illustrates an example finite impulse response (FIR) filter that can be used by a parallel filter code in accordance with embodiments of the present disclosure.

FIG. 4 illustrates an encoder with the same implementation of the encoder in FIG. 1, with the exception of an additional systematic pre-processing module, in accordance with embodiments of the present disclosure.

FIG. 5 illustrates a block diagram of an example computer system that can be used to implement aspects of the present disclosure.

The embodiments of the present disclosure will be described with reference to the accompanying drawings. The drawing in which an element first appears is typically indicated by the leftmost digit(s) in the corresponding reference number.

DETAILED DESCRIPTION

[0014] In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. However, it will be apparent to those skilled in the art that the embodiments, including structures, systems, and methods, may be practiced without these specific details. The description and representation herein are the common means used by those experienced or skilled in the art to most effectively convey the substance of their work to others skilled in the art.
In other instances, well-known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the invention.

References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

1. Overview

The present disclosure is directed to a system and method for encoding k input symbols into a longer stream of n output symbols for transmission over an erasure channel such that the original k input symbols can be recovered from a subset of the n output symbols without the need for any retransmission. A symbol is a generic data unit, consisting of one or more bits, that can be, for example, a packet. The system and method of the present disclosure utilize a network of erasure codes, including block codes and parallel filter codes, to achieve performance very close to the ideal MDS code with low encoding and decoding computational complexity for both small and large values of n. This network of erasure codes is referred to as a supercharged code.

2. Supercharged Code

2.1. Encoder

FIG. 1 illustrates a block diagram of an encoder 100 implementing the supercharged code in accordance with embodiments of the present disclosure.
Encoder 100 can be implemented in hardware, software, or any combination thereof to encode a matrix X of k input symbols into a longer length matrix Y of n output symbols for transmission over an erasure channel such that the original k input symbols can be recovered from a subset of the n output symbols without the need for any retransmission. Each row of bits in matrix X forms a different one of the k input symbols, and each row in Y forms a different one of the n output symbols. For example, the first row of bits 116 in matrix X forms a first one of the k input symbols in matrix X, and the first row of bits 118 in matrix Y forms a first of the n output symbols in matrix Y.

In addition, each column of bits in matrix X forms what is referred to as a message, and each corresponding column of bits in matrix Y forms what is referred to as a code word of the message. For example, the first column of bits 120 in matrix X forms one message, and the first column of bits 122 in matrix Y forms a code word of the message. Subsequent, corresponding columns of bits in matrices X and Y form additional pairs of messages and code words.

It should be noted that each coding module in encoder 100 (to be described below) receives a matrix of input symbols/messages and generates a matrix of output symbols/code words of the same general form described above in regard to matrices X and Y. In some instances, the coding modules in encoder 100 are placed in series, such that the matrix of output symbols/code words generated by one coding module represents the matrix of input symbols/messages received by another coding module in encoder 100. The terms input symbols, output symbols, messages, and code words are used in a consistent manner throughout the disclosure below to describe these matrices.

As shown in FIG.
1, the encoder 100 is constructed from a network of coding modules, including block coding modules 102, 104, and 106, repetition coding modules 108 and 110, and parallel filter coding module 112. In general, the output code words generated by block coding modules 102, 104, and 106 are informative and provide high reception efficiencies but are complex to decode, whereas the output code words generated by parallel filter coding module 112 are comparatively easier to decode but not as informative. Thus, encoder 100 uses repetition coding modules 108 and 110 to respectively repeat shorter-length output code words generated by block coding modules 102 and 106 and then parallel concatenates them, using exclusive or (XOR) operation 114 (or some other concatenation module such as a multiplexer or an XOR operating over a non-binary finite field), with longer-length output code words generated by parallel filter coding module 112 to produce a series of n supercharged encoded output symbols. This network of coding modules can achieve performance very close to the ideal MDS code with low encoding and decoding computational complexity for both small and large encoding block sizes (i.e., for both small and large values of k).

It should be noted that in other embodiments of encoder 100, one of the two block coding modules 102 and 106, and/or block coding module 104, and/or the direct input of matrix X into parallel filter coding module 112 can be omitted.

In one embodiment of encoder 100, block coding module 102 implements a binary linear block code that accepts as input the k input symbols in matrix X and generates n_b1 output symbols through the linear mapping:

C_B1 = G_B1*X (1)

where C_B1 is a matrix of the n_b1 output symbols and G_B1 is an n_b1×k generator matrix. Each column of bits in matrix X forms a message, and each corresponding column of bits in matrix C_B1 forms a code word of the message.
The code words can take on 2^(n_b1) possible values corresponding to all possible combinations of the n_b1 binary bits. However, the binary linear block code implemented by block coding module 102 uses 2^k code words from the 2^(n_b1) possibilities to form the code, where each k bit message is uniquely mapped to one of these 2^k code words using generator matrix G_B1. In general, any unique subset of 2^k code words, selected from the 2^(n_b1) possibilities, that provides sufficiently easy to decode outputs with sufficient error correction capabilities for a given application can be used to implement block coding module 102.

In another embodiment of encoder 100, block coding module 104 similarly implements a binary linear block code that accepts as input the k input symbols in matrix X and generates n_b2 output symbols through the linear mapping:

C_B2 = G_B2*X (2)

where C_B2 is a matrix of the n_b2 output symbols and G_B2 is an n_b2×k generator matrix. Each column of bits in matrix X forms a message, and each corresponding column of bits in matrix C_B2 forms a code word of the message. The code words can take on 2^(n_b2) possible values corresponding to all possible combinations of the n_b2 binary bits. However, the binary linear block code implemented by block coding module 104 uses 2^k code words from the 2^(n_b2) possibilities to form the code, where each k bit message is uniquely mapped to one of these 2^k code words using generator matrix G_B2. In general, any unique subset of 2^k code words, selected from the 2^(n_b2) possibilities, that provides sufficiently easy to decode outputs with sufficient error correction capabilities for a given application can be used to implement block coding module 104.
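As a hedged illustration of linear mappings of this form over GF(2) (the matrix dimensions below are made up for illustration and are not from the patent):

```python
# Illustrative sketch of C = G*X over GF(2): each output bit is the XOR of
# the message bits selected by a row of the generator matrix. Matrices are
# lists of 0/1 rows; the example dimensions are assumptions for illustration.

def gf2_encode(G, X):
    """Multiply an (n x k) generator matrix by a (k x m) message matrix mod 2."""
    n, k, m = len(G), len(X), len(X[0])
    return [[sum(G[i][j] & X[j][c] for j in range(k)) % 2 for c in range(m)]
            for i in range(n)]
```

For example, with the (3, 2) generator [[1,0],[0,1],[1,1]], each two-bit message column is mapped to itself plus a single parity bit, giving 2^2 = 4 code words out of the 2^3 = 8 possible three-bit values.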
In yet another embodiment of encoder 100, block coding module 106 implements a non-systematic Reed-Solomon code that accepts as input the k input symbols in matrix X and generates n_b3 output symbols through the linear mapping:

C_B3 = G_B3*X (3)

where C_B3 is a matrix of the n_b3 output symbols and G_B3 is an n_b3×k Vandermonde generator matrix. The non-systematic Reed-Solomon code can be implemented by block coding module 106 over the finite field GF(256). It should be noted that block coding module 106 can implement other non-binary block codes, including those not constructed over finite fields, in other embodiments. For example, in other embodiments, block coding module 106 can implement a systematic (as opposed to a non-systematic) Reed-Solomon code or another type of cyclic block code.

In yet another embodiment of encoder 100, parallel filter coding module 112 accepts as input the n_b2 symbols in matrix C_B2 and generates a longer length n_p matrix of output symbols C_P using a linear block code formed by the parallel concatenation of at least two constituent filter or convolution codes separated by an interleaver. The at least two constituent filter or convolution codes can be the same or different.

A block diagram of an example parallel filter coding module 200 is illustrated in FIG. 2 in accordance with embodiments of the present disclosure. As shown, parallel filter coding module 200 includes interleavers 202 and 204, finite impulse response (FIR) filters 206 and 208, and multiplexer 210. Interleavers 202 and 204 each receive and process the messages in matrix C_B2. Interleaver 202 rearranges the order of the bits in each message in matrix C_B2 in an irregular but prescribed manner, and interleaver 204 rearranges the order of the bits in each message in matrix C_B2 in an irregular but prescribed manner that is different from the irregular manner implemented by interleaver 202.
Because FIR filters 206 and 208 receive the bits of the messages in matrix C_B2 in different, respective orders, the code words in matrix C_F1 generated by FIR filter 206 will almost always be different than the code words in matrix C_F2 generated by FIR filter 208, even when the two filters are identically implemented.

It should be noted that in other embodiments of parallel filter coding module 200, it may be possible to feed the messages of matrix C_B2 into one of FIR filters 206 and 208 without first interleaving. It should be further noted that more than two interleavers and FIR filters can be implemented by parallel filter coding module 200. Specifically, one or more additional pairs of interleavers and FIR filters can be added to parallel filter coding module 200. In addition, it should be further noted that FIR filters 206 and 208 can be implemented as tailbiting FIR filters, where the states of FIR filters 206 and 208 are initialized with their respective final states to make them tailbiting.

In general, a good linear code is one that uses mostly high-weight code words (where the weight of a code word, also known as its Hamming weight, is simply the number of ones that it contains), because they can be distinguished more easily by the decoder. While all linear codes have some low-weight code words, the occurrence of these low-weight code words should be minimized. Interleavers 202 and 204 help to reduce the number of low-weight code words generated by parallel filter coding module 200, where the weight of a code word generated by parallel filter coding module 200 is generally the sum of the weights of corresponding code words generated by FIR filters 206 and 208. More specifically, because the bits of the respective message inputs to FIR filters 206 and 208 have been reordered in different, irregular manners by interleavers 202 and 204, the probability that both FIR filters 206 and 208 simultaneously produce corresponding code words of low weight is reduced.
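A hedged sketch of one such constituent GF(2) FIR filter (the tap pattern and lengths here are illustrative assumptions, not taken from the patent's figures):

```python
# Illustrative GF(2) FIR filter sketch: a shift register holds recent message
# bits, and each output bit is the XOR of the bits selected by `taps`
# (taps[0] selects the current input bit). Tap values are assumptions only.

def fir_gf2(message_bits, taps):
    state = [0] * (len(taps) - 1)  # shift register contents, newest first
    out = []
    for b in message_bits:
        window = [b, *state]
        out.append(sum(t & w for t, w in zip(taps, window)) % 2)  # XOR of taps
        state = [b, *state[:-1]]   # shift in the new bit, drop the oldest
    return out
```

An interleaver is then just a fixed permutation of the message bits applied before one of the filters, so even two identically implemented filters see different inputs and rarely both emit low-weight outputs at once.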
Thus, interleavers 202 and 204 help to reduce the number of low-weight code words generated by parallel filter coding module 200.

As further shown in FIG. 2, the code words in matrices C_F1 and C_F2 are parallel concatenated using multiplexer 210 to generate the code words in matrix C_P. In one embodiment, multiplexer 210 parallel concatenates the code words in matrices C_F1 and C_F2 in an irregular but prescribed manner.

FIG. 3 illustrates an example FIR filter 300 that can be used to implement one or both of FIR filters 206 and 208 in FIG. 2 in accordance with embodiments of the present disclosure. As shown in FIG. 3, bits from a message of matrix C_B2 enter FIR filter 300 from the left and are stored in a linear shift register comprising registers 302, 304, and 306 (T denotes a register). Each time a new message bit arrives, the message bits in registers 302, 304, and 306 are shifted to the right. FIR filter 300 computes each bit of the code word corresponding to the input message by exclusive or-ing a particular subset of the message bits stored in the shift register and, possibly, the current message bit at the input of the shift register. In the embodiment of FIR filter 300 shown in FIG. 3, the code word bits are specifically computed by exclusive or-ing each message bit stored in the shift register using XOR operation 308.

The constraint length of FIR filter 300 is defined as the maximum number of message bits that a code word bit can depend on. In the embodiment of FIR filter 300 shown in FIG. 3, the constraint length is four because each code word bit can depend on up to four message bits (the three message bits in the shift register and the current message bit at the input of the shift register). It should be noted that in other embodiments of FIR filter 300, a different constraint length can be used, and the code word bits can be computed by exclusive or-ing a different subset of the message bits stored in the shift register.

Referring back to FIG.
1, in yet another embodiment of encoder 100, repetition coding module 108 implements a binary linear block code that accepts as input the n_b1 symbols in matrix C_B1 and generates a longer length n matrix of output symbols C_R1 through the linear mapping:

C_R1 = G_R1 * C_B1 (4)

where G_R1 is an n×n_b1 generator matrix. In at least one embodiment, the repetition code described by the generator matrix G_R1 is designed to simply repeat the code words in C_B1 some number of times (either some integer or integer plus fractional number of times) such that the length n_b1 code words in C_B1 are transformed into longer length n code words in C_R1. Specifically, the generator matrix G_R1 can be implemented as an n×n_b1 stack of identity matrices, with floor(n/n_b1) copies of the identity matrix stacked vertically and a fractional identity matrix below that includes n mod n_b1 rows.

In yet another embodiment of encoder 100, repetition coding module 110 implements a binary linear block code that accepts as input the n_b3 symbols in matrix C_B3 and generates a longer length n matrix of output symbols C_R2 through the linear mapping:

C_R2 = G_R2 * C_B3 (5)

where G_R2 is an n×n_b3 generator matrix. In at least one embodiment, the repetition code described by the generator matrix G_R2 is designed to simply repeat the code words in C_B3 some number of times (either some integer or integer plus fractional number of times) such that the length n_b3 code words in C_B3 are transformed into longer length n code words in C_R2. Specifically, the generator matrix G_R2 can be implemented as an n×n_b3 stack of identity matrices, with floor(n/n_b3) copies of the identity matrix stacked vertically and a fractional identity matrix below that includes n mod n_b3 rows.

As described above, encoder 100 can be used to provide packet-level protection at various layers of a network architecture.
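The stacked-identity construction of G_R1 and G_R2 described above is easy to reproduce. The sizes below (n=10, n_b=4) are arbitrary example values, not taken from the patent:

```python
import numpy as np

def repetition_generator(n, n_b):
    """Build the n x n_b generator described above: floor(n/n_b) full identity
    matrices stacked vertically, plus a fractional identity of n mod n_b rows."""
    blocks = [np.eye(n_b, dtype=np.uint8)] * (n // n_b)
    if n % n_b:
        blocks.append(np.eye(n_b, dtype=np.uint8)[: n % n_b])
    return np.vstack(blocks)

G_R1 = repetition_generator(10, 4)      # example sizes: n=10, n_b1=4
c_b1 = np.array([1, 0, 1, 1], dtype=np.uint8)
c_r1 = (G_R1 @ c_b1) % 2                # the input code word, repeated 2.5 times
print(c_r1.tolist())                    # -> [1, 0, 1, 1, 1, 0, 1, 1, 1, 0]
```

The output shows the length-4 code word repeated two full times followed by its first two symbols, exactly the "integer plus fractional" repetition the text describes.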
For example, encoder 100 can be used to provide packet-level protection at the network, application, or transport layers of the Internet protocol suite, commonly known as TCP/IP. In one embodiment, encoder 100 is used at a server or client computer (e.g., a desktop computer, laptop computer, tablet computer, smart phone, router, set-top-box, or other portable communication devices) to encode k packets, segments, or datagrams of data formatted in accordance with some protocol, such as the File Delivery over Unidirectional Transport (FLUTE) protocol, for transmission to another computer over a packet based network, such as the Internet.

2.2. Matrix Representation

Because all of the constituent block coding modules in encoder 100 are, in at least one embodiment, linear modules, the output matrix Y can be expressed through the linear mapping:

Y = G_S * X (6)

where the generator matrix G_S describes the generic supercharged code implemented by encoder 100. The generator matrix G_S is specifically given by:

G_S = G_P * [I_k; G_B2] + G_R1 * G_B1 + G_R2 * G_B3 (7)

where G_P is the n×(k+n_b2) generator matrix of parallel filter coding module 112, I_k is a k×k identity matrix, G_B2 is the n_b2×k generator matrix of block coding module 104, G_R1 is the n×n_b1 generator matrix of repetition coding module 108, G_B1 is the n_b1×k generator matrix of block coding module 102, G_R2 is the n×n_b3 generator matrix of repetition coding module 110, and G_B3 is the n_b3×k generator matrix of block coding module 106. The notation [A; B] used above in equation (7) denotes the vertical stack of matrix A on B, and the operator `+` used above in equation (7) denotes the bitwise XOR operation.

2.3. Systematic Encoding

The supercharged code is not an inherently systematic code. Nonsystematic codes are commonly transformed into an effective systematic code by pre-processing input data D before using it as the input to the encoder, Y=G_S*X.
The encoder input X is calculated by decoding the desired input data D; that is, the decoder is run on D to determine the encoder input vector X. Let matrix G_S_ENC be the k×k generator matrix corresponding to the first k elements of each code word in Y. The encoder input X can then be computed using the following:

X = G_S_ENC^(-1) * D (8)

where the operation G_S_ENC^(-1) raises G_S_ENC to the power (-1). Now, X can be used to encode using equation (6) to generate Y, and the first k elements of each code word in Y will be equal to D. FIG. 4 illustrates an encoder with the same implementation as encoder 100 in FIG. 1, with the exception of an additional systematic pre-processing module 402, in accordance with embodiments of the present disclosure. Systematic pre-processing module 402 can be used to perform the function defined by equation (8) and can be implemented in hardware, software, or any combination thereof.

2.4. Segmentation of Files for Encoding

Before encoder 100 can be used to encode, for example, a source file for transmission over an erasure channel, the source file needs to be segmented into encoder input symbols and those encoder input symbols need to be grouped into source blocks that can be represented by the input matrix X to encoder 100 as shown in FIG. 1. Specifically, given a source file of f bytes and an encoder input symbol size of t bytes, the file can be divided into k_total=ceil(f/t) encoder input symbols. A source block is a collection of kl or ks of these encoder input symbols. kl and ks may be different if the total number of source blocks does not evenly divide the number of encoder input symbols required to represent the source file. The number of source blocks with kl encoder input symbols and the number of source blocks with ks encoder input symbols can be communicated to the decoder.
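The segmentation arithmetic above is simple to make concrete. The text does not spell out exactly how kl and ks are chosen, so the even-split rule below (ks = floor(k_total/z), kl = ks+1 whenever there is a remainder) is an assumption; only the k_total = ceil(f/t) step comes directly from the text.

```python
import math

def segment(f_bytes, t_bytes, z_blocks):
    """Split a file of f_bytes into encoder input symbols of t_bytes each and
    divide them into z_blocks source blocks of (nearly) equal size."""
    k_total = math.ceil(f_bytes / t_bytes)        # total encoder input symbols
    ks = k_total // z_blocks                      # smaller source-block size
    kl = ks + (1 if k_total % z_blocks else 0)    # larger source-block size
    zl = k_total % z_blocks                       # number of blocks of size kl
    zs = z_blocks - zl                            # number of blocks of size ks
    assert zl * kl + zs * ks == k_total           # every symbol lands somewhere
    return k_total, kl, ks, zl, zs

print(segment(1_000_000, 1024, 8))   # -> (977, 123, 122, 1, 7)
```

When the symbol count divides evenly, kl equals ks and zl is zero, matching the text's remark that kl and ks differ only when the division leaves a remainder.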
In one embodiment, the source blocks are ordered such that the first zl source blocks are encoded from source blocks of size kl encoder input symbols, and the remaining zs source blocks are encoded from source blocks of size ks encoder input symbols. In one embodiment, kl is chosen under the constraint that the selected value of kl is less than or equal to at least one of a finite number of possible values for the number of input symbols k in the matrix X that encoder 100 in FIG. 1 accepts as input. Assuming that kl is chosen to meet this constraint, then encoder 100 can be implemented, in at least one embodiment, to accept an input matrix X with the smallest number of input symbols k that still satisfies the (non-strict) inequality kl≦k.

2.5 Erasure Channel

After encoding, the n output symbols of matrix Y are transmitted on the channel. Some of these output symbols are erased by the channel. Suppose that the r×n matrix E represents the erasure pattern of the channel in that it selects out the r received output symbols Y_R from the transmitted output symbols Y. If the i-th received symbol is the j-th transmitted symbol, then E(i,j)=1. This results in

Y_R = E * Y (9)

At the decoder, the effective generator matrix at the receiver is then G_S_R = E * G_S.

2.6 Decoding

Decoding is the process of determining X given Y_R and G_S_R. Decoding can be implemented in several different ways, but each is equivalent to solving the least squares problem X = (G_S_R^T * G_S_R)^(-1) * G_S_R^T * Y_R, where ^T denotes the transpose. Modern sparse matrix factorization techniques can be used to take advantage of the sparse structure imposed by the structure of parallel filter coding module 112 in FIG. 1, with (6) rewritten in appropriate form:

Z = G_A * W (10)

with augmented generator matrix G_A defined as:

G_A = [[G_B2; G_B1; G_B3 | I_L]; [G_P | G_R1 | G_R2]] (11)

and where the augmented output vector Z = [zeros(L,1); Y], the augmented input vector W = [X; G_B2*X; G_B1*X; G_B3*X], and where L = n_b1 + n_b2 + n_b3.
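Over GF(2) the least-squares formulation above reduces to solving a linear system, which Gaussian elimination handles directly when the erasure pattern leaves the received generator with full column rank. The sketch below is a generic bit-matrix solver run on a toy generator matrix of my own invention; the 5×3 G_toy, the erasure pattern, and the dense representation are all illustrative assumptions, whereas a real implementation would exploit the sparse structure the text mentions.

```python
import numpy as np

def solve_gf2(G, y):
    """Solve G x = y over GF(2) by Gaussian elimination (G is r x k, r >= k).
    Returns x, or None if the erasures left the system rank-deficient."""
    A = np.concatenate([G % 2, (y % 2).reshape(-1, 1)], axis=1).astype(np.uint8)
    r, k = G.shape
    for col in range(k):
        pivot = next((i for i in range(col, r) if A[i, col]), None)
        if pivot is None:
            return None                     # too many erasures: unrecoverable
        A[[col, pivot]] = A[[pivot, col]]   # move the pivot row into place
        for i in range(r):
            if i != col and A[i, col]:
                A[i] ^= A[col]              # XOR-eliminate the column
    return A[:k, -1]

# Toy setup: 3 input symbols, 5 output symbols, one output symbol erased.
G_toy = np.array([[1,0,0],[0,1,0],[0,0,1],[1,1,0],[0,1,1]], dtype=np.uint8)
x = np.array([1, 0, 1], dtype=np.uint8)
y = (G_toy @ x) % 2
E = np.delete(np.eye(5, dtype=np.uint8), 1, axis=0)   # erase output symbol 1
x_hat = solve_gf2(E @ G_toy, E @ y)
assert (x_hat == x).all()                 # the erased data is recovered
```

If too many rows are erased the column sweep fails to find a pivot and the function reports the block as unrecoverable, which mirrors the decoder's behavior when r falls below the rank needed to determine X.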
The bottom L elements of matrix W contain the outputs, before repetition, of the block codes. These L values are appended to matrix X to form the augmented input matrix W. The first L rows of G_A implement the block code and XOR the block code output with itself to generate the L zeros at the top of the matrix Z. The subsequent n rows of G_A implement the FIR structure and XOR the output with the output of the block codes. The notation [A; B] used above in equation (11) denotes the vertical stack of matrices A on B, and the notation A|B denotes the horizontal concatenation of matrices A and B. Once the encoder state matrix X, or equivalently the augmented encoder state matrix W, has been determined, the task remains to determine the data matrix D. Any symbols of D that are missing can be recovered by using the appropriate rows of (6) or (10).

3. Example Computer System Implementation

It will be apparent to persons skilled in the relevant art(s) that various elements and features of the present invention, as described herein, can be implemented in hardware using analog and/or digital circuits, in software, through the execution of instructions by one or more general purpose or special-purpose processors, or as a combination of hardware and software. The following description of a general purpose computer system is provided for the sake of completeness. Embodiments of the present invention can be implemented in hardware, or as a combination of software and hardware. Consequently, embodiments of the invention may be implemented in the environment of a computer system or other processing system. An example of such a computer system 500 is shown in FIG. 5. All of the modules depicted in FIGS. 1 and 4, for example, can execute on one or more distinct computer systems 500. Computer system 500 includes one or more processors, such as processor 504. Processor 504 can be a special purpose or a general purpose digital signal processor.
Processor 504 can be connected to a communication infrastructure 502 (for example, a bus or network). Various software implementations are described in terms of this exemplary computer system. After reading this description, it will become apparent to a person skilled in the relevant art(s) how to implement the invention using other computer systems and/or computer architectures. Computer system 500 also includes a main memory 506, preferably random access memory (RAM), and may also include a secondary memory 508. Secondary memory 508 may include, for example, a hard disk drive 510 and/or a removable storage drive 512, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, or the like. Removable storage drive 512 reads from and/or writes to a removable storage unit 516 in a well-known manner. Removable storage unit 516 represents a floppy disk, magnetic tape, optical disk, or the like, which is read by and written to by removable storage drive 512. As will be appreciated by persons skilled in the relevant art(s), removable storage unit 516 includes a computer usable storage medium having stored therein computer software and/or data. In alternative implementations, secondary memory 508 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 500. Such means may include, for example, a removable storage unit 518 and an interface 514. Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, a thumb drive and USB port, and other removable storage units 518 and interfaces 514 which allow software and data to be transferred from removable storage unit 518 to computer system 500. Computer system 500 may also include a communications interface 520. 
Communications interface 520 allows software and data to be transferred between computer system 500 and external devices. Examples of communications interface 520 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, etc. Software and data transferred via communications interface 520 are in the form of signals which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface 520. These signals are provided to communications interface 520 via a communications path 522. Communications path 522 carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link and other communications channels. As used herein, the terms "computer program medium" and "computer readable medium" are used to generally refer to tangible storage media such as removable storage units 516 and 518 or a hard disk installed in hard disk drive 510. These computer program products are means for providing software to computer system 500. Computer programs (also called computer control logic) are stored in main memory 506 and/or secondary memory 508. Computer programs may also be received via communications interface 520. Such computer programs, when executed, enable the computer system 500 to implement the present invention as discussed herein. In particular, the computer programs, when executed, enable processor 504 to implement the processes of the present invention, such as any of the methods described herein. Accordingly, such computer programs represent controllers of the computer system 500. Where the invention is implemented using software, the software may be stored in a computer program product and loaded into computer system 500 using removable storage drive 512, interface 514, or communications interface 520. 
In another embodiment, features of the invention are implemented primarily in hardware using, for example, hardware components such as application-specific integrated circuits (ASICs) and gate arrays. Implementation of a hardware state machine so as to perform the functions described herein will also be apparent to persons skilled in the relevant art(s).

CONCLUSION

The present disclosure has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.

Patent applications by BROADCOM CORPORATION, in class Double encoding codes (e.g., product, concatenated)
l'hospitals rule

April 12th 2007, 08:35 PM #1
Junior Member, Mar 2007

l'hospitals rule
I have a problem xe^(1/x) and i'm supposed to use l'hospital's rule to explain its behavior as x -> 0. But L'hospital's rule is for division only I thought? Any help would be great thanks.

April 12th 2007, 08:56 PM #2
we spoke about a problem similar to this just tonight. I'll find the post for you. but the short answer is: note that we can write x as 1/(1/x), so
xe^(1/x) = 1/(1/x) * e^(1/x) = [e^(1/x)]/(1/x) ........this is a quotient
however, this goes to 1/infinity, which is not a condition to use l'hopital's, i think you left out an x somewhere
or see http://www.mathhelpforum.com/math-he...e-inf-inf.html

April 12th 2007, 09:00 PM #3
Junior Member, Mar 2007
xe^(1/x) is the exact problem, thats why I was having trouble, and the problem says to specifically use l'hospital's rule

April 12th 2007, 09:17 PM #4
ah yes, you can use l'hopital's on it. the lim is x-->0, i thought it was x-->infinity since that's what the problem we were doing was like. you can follow my original guidelines then. l'hopital's will work
Last edited by Jhevon; April 12th 2007 at 09:37 PM.

April 12th 2007, 09:21 PM #5
lim{x-->0+} xe^(1/x) = lim{x-->0+} (1/(1/x))e^(1/x) = lim{x-->0+} [e^(1/x)]/(1/x) .......this goes to inf/inf as x-->0+, so we can use l'hopital's
Apply L'hopital's => lim{x-->0+} [e^(1/x)]/(1/x) = lim{x-->0+} [(-x^-2)e^(1/x)]/(-x^-2) = lim{x-->0+} e^(1/x) = infinity

April 12th 2007, 09:31 PM #6
UMStudent, here is a good website I discovered earlier today. It goes through all of the possible scenarios. Just keep working with it and you will pick it up rather quickly.

April 12th 2007, 09:52 PM #7
Junior Member, Mar 2007
Lol why didn't you say you were a genius. But thanks for all the help from both of you, appreciate it.
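A quick numeric check (mine, not from the thread) agrees with the l'Hopital result above: the function blows up from the right of 0 and vanishes from the left, so the two one-sided limits differ and the two-sided limit does not exist.

```python
import math

# Sample x*e^(1/x) approaching 0 from each side (the sample points are arbitrary):
right = [x * math.exp(1 / x) for x in (0.5, 0.2, 0.1, 0.05)]
left = [x * math.exp(1 / x) for x in (-0.5, -0.2, -0.1, -0.05)]
assert right == sorted(right)        # blowing up toward +infinity as x -> 0+
assert all(abs(a) > abs(b) for a, b in zip(left, left[1:]))   # shrinking to 0-
print(round(right[-1]))              # -> a value in the tens of millions
```

At x = 0.05 the value is already about 2.4e7, matching the derivation's conclusion that the limit from the right is infinity.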
Semisimplicity of étale cohomology representations

Let $K$ be a number field and $G=Gal(\overline{K}/K)$ the absolute Galois group of $K$. Let $\ell$ be a prime number. Let $A/K$ be an abelian variety. Then the representation of $G$ on $V_\ell(A)$ is semisimple. This is the famous theorem of Faltings (Invent. Math. 73). Now let $X/K$ be a smooth projective variety and $0\le q\le 2\dim(X)$, and define $\overline{X}=X_{\overline{K}}$.

Question. Is it known that the representation of $G$ on $H^q(\overline{X}, \mathbb{Q}_\ell)$ is semisimple?

Remark. The answer is yes for $q=1$, because $H^1(\overline{X}, \mathbb{Q}_\ell)$ is dual to $V_\ell(A)$ where $A$ is the Albanese variety of $X$.

I would also be interested in the case where the number field $K$ is replaced by a global function field (say), and $\ell$ is assumed to be coprime to the characteristic.

ag.algebraic-geometry nt.number-theory rt.representation-theory reference-request

I strongly suspect that the answer is "no" in the number field case, and it is surely "no" over finite fields (already). – Mikhail Bondarko Feb 23 '11 at 10:54

Thx for your comment! I somehow expected a "not known" in the number field case as well. Why is the answer a definite "no" over finite fields? I do not know how to prove this. Can you give me a few details on that? – Sebastian Petersen Feb 23 '11 at 12:09

Sorry, just to be sure: Do you mean "not known" or "false" in the case of a finite ground field? – Sebastian Petersen Feb 23 '11 at 12:21

One more comment: My question is exactly conjecture $SS^i(X)$ in Tate's article "Conjectures on algebraic cycles on l-adic cohomology", Proceedings of Symposia in Pure Mathematics 55 (the motives volume I). So my question is, whether there has been progress on this conjecture since this article was written.
– Sebastian Petersen Feb 23 '11 at 12:32

Joel Bellaiche's Hawaii notes people.brandeis.edu/~jbellaic/BKHawaii4.pdf (page 5) say: "This is sometimes called "conjecture of Grothendieck-Serre". This is known for abelian varieties, by a theorem that Faltings proved at the same time he proved Mordell's conjecture, and in a few other cases (some Shimura varieties, for example)." (But that's all he says.) – fherzig Feb 23 '11 at 15:56

1 Answer

This semi-simplicity is a part of what is called the Tate conjecture. It is generally believed to be true, but little is known about it outside the case of $H^1$, in either the finite field or global field case. Searching on mathscinet for "Tate conjecture" (or googling) should turn up the relevant literature.

This conjecture is discussed in the recent interview with John Tate (ams.org/notices/201103/rtx110300444p.pdf) in the Notices of the AMS. – Chandan Singh Dalawat Feb 24 '11 at
st: two-stage mvprobit and ghk vs. sem algorithm questions

Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.

From	Andrew <abrudevo@gmu.edu>
To	statalist@hsphsun2.harvard.edu
Subject	st: two-stage mvprobit and ghk vs. sem algorithm questions
Date	Mon, 16 Jul 2012 14:35:28 -0400

Hi Statalist,

I have two questions:

Question 1: I have been trying to confirm if the following two-stage mvprobit analysis is valid and would appreciate any thoughts/comments. I have 3 equations of interest that I believe have correlated errors:

W1 = aA + dW2 + e1
X1 = bB + eX2 + e2
Y1 = cC + fY2 + e3

W, X, Y are dichotomous variables
A, B, C are exogenous variables
a, b, c are exogenous variable coefficients
W2, X2, Y2 are endogenous dichotomous variables
d, e, f are endogenous variable coefficients
e1, e2, e3 are errors that are jointly normally distributed

Each of these equations is itself part of a two equation system of the type described by Mallar (1977) and Maddala (1983, pg 246) such that:

W1 = aA + dW2 + e1
W2 = a'A' + d'W1 + u1

with analogous equations defined for X1 and X2, and Y1 and Y2. Mallar and Maddala solve this smaller system by estimating the reduced form equations for each of these two, obtaining fitted values, and then running further ml probits to obtain estimates of d/sigma1 and
mvprobit (W1 = A W2*) (X1 = B X2*) (Y1 = C Y2*) This is based on the idea that the you could estimate the coefficients in each of the smaller systems by performing the 2 stage least squares to obtain consistent results but that then performing the mvprobit we are obtaining more efficient estimates that take into account the error correlations. This is analogous to estimating OLS equations one by one or by SUR. Question 2: The mvprobit command uses the GHK simulator. My understanding is that the GHK simulator is computationally efficient for systems of 4 or 5 equations but that for larger systems a stochastic EM algorithm is likely to be a better option. Is this correct? Thank you all in advance. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
This is an archive of Talk:Attributes.
• Please do not edit this page!
• If needed, direct any comments to the current talk page.

Spell +Crit (effect of Int)

I've modified the page again to include this information: http://forums.worldofwarcraft.com/thread.aspx?FN=wow-general&T=8532087&P=1 I've inferred from other posts by Tseric that every class should have 5% crit at that expected value, and then worked backwards to get the base crit %. I believe this is correct - but feel free to debate it here if you disagree. Not sure what to do with shaman/paladins for which Tseric had no info available. --Mania 12:24, 31 May 2006 (GMT +8:00)

I've just modified the page to include the latest information from Tseric, which can be seen on http://forums.worldofwarcraft.com/thread.aspx?FN=wow-mage&T=1009382&P=1. I did not include a link to this forum post, as in a matter of weeks it'll just return a blank page (it won't make the blizzard archive). If anyone has any problems with the mods, let me know :)

The spellcrit formula on this page is wrong. I also assumed 100 Int to be 1% Spellcrit up to now but this Thread including Blue Post #6 states that 59.5 Int = 1% Crit. --Ymihere 07:10, 7 Dec 2005 (EST)

I know for a fact that shaman crit chance is not 20 int per crit. I have no idea where that number came from, but it is WAY off. The number is in fact 39.5 or 40 or 40.5 (hard to tell exactly which it is). But I did two tests of 1000 casts of healing wave (rank 1), each with different gear sets on as follows:

Test 1: 282 INT, 3% crit gear, 5% crit from talents --- 149/1000 crits = 14.9% crit rate
Test 2: 147 INT, 0% crit gear, 5% crit from talents --- 86/1000 crits = 8.6% crit rate

DIFFERENCE: 14.9 - 8.6 - 3 = 3.3; (282 - 147)/3.3 = 40.9 int per crit
Test 1: 14.9 - 8 = 6.9; 282/6.9 = 40.86
Test 2: 8.6 - 5 = 3.6; 147/3.6 = 40.83

Indeed, here is the quote from Tseric. "The basic mechanic of INT to Crit% is an increase of 1% every 59.5 points for mages.
A mage is generally expected to have around 286 points of INT at 60. This works out to about 5% crit on average for mages. It is possible to go higher, as Crit% does go up incrementally.

EDIT- The increase of 1% crit for 59.5 is for everyone, not just mages. However, mages tend to have more INT, thus my phrasing.

When asked if that meant there was no Base Crit %: "Basically, yeah." --Finnias 05:31, 20 Dec 2005 (EST)

Just a note, Tseric corrected himself later: "First off, there is an expected number of INT for each level for players of different classes. For continuing examples I will refer to the mage. Again, each class has different values on them and therefore scale differently, as we shall see." Details on spell crit chance. --Tbannister 11:12, 23 Jan 2006 (EST)

Another fine reason why spell crit %, and ranged crit %, should be displayed, somewhere, anywhere. CJ 05:40, 20 Dec 2005 (EST)

Tseric has previously said unequivocally that "there is no base crit chance". So if he did indeed post the numbers on the page somewhere, I suspect he simply can't do math, and the corrected numbers "Warlock 200 - 60.6 Druid 192 - 60 Shaman 160 - 59.2 Priest 250 - 59.5" are respectively 40, 39, 32, and 50 int/% crit. Also this page is terribly messy and needs a clean up; there is too much "discussion" on the formal page instead of the talk page. Citations should be collected at the bottom of the page, not inline, and the horrible black-on-white formula bits need to be made less painful to read. Those numbers would be more in line with observed behaviour. --Tbannister 13:59, 11 October 2006 (EDT)

This information was inferred from a post by Tseric, in which he revealed the expected amount of Intellect that several caster classes should have at 60, and their Intellect per Crit ratios.
If classes other then mages are not intended to have a 5% crit rate, this information will be incorrect. For Paladins crit chance is still unknown, thus the question marks. The estimates shown though assume 0 base crit, which in light of the new information is probably wrong. Tseric writes, "Here are some other numbers to that end: At level 60, these are expected numbers of INT and points per Crit% Warlock 200 - 60.6 Druid 192 - 60 Shaman 160 - 59.2 Priest 250 - 59.5" LumberLamer tested as a lvl 60 dwarf paladin. With 215 intellect and no crit items or talents, 566 flash of light rank 1 resulted in 35 critical heals (6.18%). Unequipped, with 76 intelligence, 20 of 576 spells were critical (3.47%). Extrapolating that would mean 0 intelligence would have 1.98% to crit, and each 51.92 INT would add 1% chance to crit. Bubblebee has also tested as a 60 dwarf paladin without crit gear and talent crit. With gear (235 intellect) on, 1000 flash of lights resulted in 77 crits. Without gear (75 intellect), 1000 flash of lights resulted in 48 crits. Additional tests w/ 153, 172, 210, and 270 intellect resulted in 6.2, 7, 6.3, and 7.5 crit respectively. Each crit test at a certain intellect was done with 1000 flash of lights. The best fit linear equation of the data suggests that 0 intellect equates to a 4% spell crit and approximately 72 intellect adds 1% spell crit. Semaj used 1200 flash of lights with no gear and 1200 with full gear. His results suggest that 0 INT would = 0% crit rate and every 29.5 INT = 1% crit for paladins. Aryxymaraki tested the ratio using 400 casts of HW with naked int, 400 casts with half-gear, and 400 casts with full gear. He arrived at 39.5 INT = 1% crit. Quoted from this General Forum post by Tseric on May 31st, 2006 Not exactly, but the numbers tend to hover around that mark for many casters, at least. Obviously, for melee the numbers are somewhat irrelevant. Sorry that I don't have the exact numbers for Paladins, but the trend is illustrated. 
Here are some other numbers to that end: At level 60, these are expected numbers of INT and points per Crit% Warlock 200 - 60.6 Druid 192 - 60 Shaman 160 - 59.2 Priest 250 - 59.5 These is still a disconnect between the previously discussed 5% crit base and these numbers, but Blue information is always worth capturing. Can Crits Miss ? Seems to some misconceptions here about crit/miss interactions. They are separate tests - crit rate and hit rate do not affect each other. Further critical rate is the percentage of *hits* that crit, not the percentage of *attacks* that crit. So if you have a 60% chance to crit and a 50% chance to miss, that doesn't give you a -10% chance to score a normal hit. It gives you a 50% chance to miss, a 30% chance to crit and a 20% chance to hit. --Danya 13:08, 4 Jan 2006 (GMT) Danya please explain where you got your information from? The explanation from the Tank Points Mod says it's actually a single roll not a double roll as you suggest. And again, here's the blue post from the forums that says the same thing: Thundgot Post --Tbannister 14:00, 4 Jan 2006 (EST) Thungdot's post seems very contradictory - he states that crit chance includes misses, then in part 2 has crit and miss chances shown separately. I can't decide if he's saying there are two rolls (hence the two parts and crits including misses) or one (as given in his example). My numbers are based on in-game observations FWIW. --Danya 21:12, 26 Jan 2006 (GMT) Part 1 of the blue post on +crit/+hit is perfectly clear. CRITS CAN MISS: "The way WoW calculates crit rate is over ALL attacks. Crit rate is not based on hits only." How can that possibly be misunderstood? Why would he use the phrasing "ALL attacks" and then go on to specify "not based on hits only" if he didn't mean "also misses"? Look at the sentence this way: "Crit rate is calculated over ALL attacks - not hits only". I don't believe this is a mangling of the original wording, and it certainly is much clearer. 
BUT!: in part 2, crit rate is deducted from hit rate! This seems to indicate misses can't be crits... If you ask me, though, part 1 is a lot clearer than part 2... To be honest, someone needs to drag a new answer out of this "motive" guy - OR - sit down and test your crit rate against something you have a noticeable miss chance against. Only way I can think of this being possible is by getting a friend on the other faction to join in (or use 2 accounts). One could also get some semi-decent results out of MC bosses, for example Magmadar and Golemagg (as a mage you are safe while fighting these, and they have enough HP to get off some frostbolts). --Asherett 06:39, 29 March 2006 (EST) What's the problem? Yes part 1 means that Crits can Miss. Now, why does part2's deducting of crit rate from hit rate mean that "misses can't be crits"? The fact that the statement "New miss chance - (Original miss%) - (toHit modifiers)" (just noticed he had a typo there, that first "-" should be a "=") doesn't mention crit at all doesn't in any way affect the fact that a Crit can Miss. In fact it is exactly stating that a Crit can Miss, because although Crits eat Hits, and Hits eat Misses, Crits eating Hits does not carry over into Crits eating Misses if the Crit chance is higher than the Hit chance. Thus indeed a Crit can Miss because it can hit a ceiling due to toCrit modifiers never affecting New miss chance. Athan 14:46, 13 July 2006 (EDT) read this if you think critical rate is the percentage of *hits* that crit. The miss chance is capped at 60%? Can you explain where you got your information from? Yes, please do, Tbannister. The last part in my original edit was actually a post on the official forums (from Eyonix, I think), that's why I ask. Don't take it personally, we only want to find out the truth ;) Stilpu 03:49, 5 Jan 2006 (EST) I can't find my original reference for the avoidance cap at 60%, there's a mention of it in the notes from the Titan Panel Combat Bench addon. 
--Tbannister 14:07, 10 Jan 2006 (EST)

It doesn't have any credible source, so it is just an assumption. --Firefox

As I understand, it was determined through experimentation. If you can show that it doesn't happen, please do so. --Tbannister 11:12, 23 Jan 2006 (EST)

That means you cannot provide any credible source.

Firefox, that link doesn't prove or disprove critical rates being the percentage of hits. If anything, though, it supports it, since he was seeing below 40% of attacks critting, which suggests that his miss rate was affecting it. He doesn't state his miss rate, which makes it more difficult to analyse, but if he has, say, an 80% hit rate (I'm assuming the dual wield penalty is applying), then the expected number of crits would be 425. That's fairly close to his actual numbers... --Danya 21:27, 26 Jan 2006 (GMT)

Did you read motive's post? He's a Blizzard Poster. Firefox 22:20, 26 Jan 2006 (EST)

What lots of people are forgetting is that there are abilities which ASSURE a critical hit. They make your critical hit chance 100% for your next swing. Guess what, these swings can still miss. This seems to prove that an attack must hit before it can crit. I firmly believe that crit rate is based on your hit rate. If you hit 50% of the time, your crit rate is 50% lower than it is reported. But if you wanna test it, then go attack a mob that is 10 levels over you and get your crit rate to 10-20%, having a priest heal you. Swing at it a few hundred swings and see if you crit 10-20% of your swings, or if 10-20% of your hits are crits (this will be the case). Also see if you notice that 60% miss rate cap, which you will notice is not actually real. -Shadar

This ability (I think you mean "Cold Blood") does not assure a critical hit. It increases the critical hit chance by 100% (read the tooltip). Your chance to miss and the dodge/parry chance of the opponent are unaffected by this. The normal hit chance is 100%-dodge-parry-miss-crit (the rest of all your attacks so far).
Crits and normal hits are different events! So, if you increase your critical hit chance by 100%, you consume all your normal hit chance. That means that "Cold Blood" normally assures that all your normal hits turn into crits. The crit chance is STILL based on ALL SWINGS but can NEVER ignore the miss chance and the dodge/parry chance of the opponent. If you test this on a 10-level-higher mob, you have to take into account the better dodge/parry chance of the mob (2% better each) and your increased miss chance due to the level difference (10 levels usually means 94% miss with single or two-hand weapons, but will be capped at 60% here). Of course, your crit chance is also decreased because of the difference between the mob's defense and your weapon skill (2% here). The 60% miss rate cap only means the base miss chance because of level difference. The overall miss chance can be higher due to the better defence skill of the mob (again 2%). "MISS the mob" and "The mob DODGES" are different events and don't depend on each other. Finally you miss at (60+2)%, the opponent dodges at (basedodge+2)%, he parries at (baseparry+2)%. You crit with (basecrit-2)%. All of your other swings (if left) are normal hits. I tested this and it fits so far. But you have to deal a million swings on the same type of mob to be quite sure of chances and possibilities. --Morrh 14:47, 13 Mar 2006 (CET)

But in the end, if you add 100% crit rate you are only adding a base of 95% dps (assuming a 5% base miss rate). If this is the case, 1% crit adds .95% dps, not 1% dps. On the other hand, 1% to hit increases your dps by 1% up to the point where you remove all chance to miss. If this is true, +hit gear provides a greater default bonus... before talents/procs are factored in. -Shadar

You can't compare the "Cold Blood" ability with +hit items. If you add 1% crit to your gear, you increase the dps by 1%, since 1% of all your attacks deal double damage. But your hit chance is decreased by 1% then.
If you add 1% hit, you also deal 1% more damage, since 1% of your attacks that would normally miss will hit now. "Cold Blood" will only assure that all your hits become crits. It is not possible to calculate the +dps% of this fact. It depends on miss, dodge and parry as you stated in your example above. The basic thing to understand is that +crit will do -hit and +hit will do -miss. If one doesn't understand this rule, it's quite convenient to think that crit% is the chance of hits that crit rather than attacks that crit. --Morrh 14:09, 12 Apr 2006 (CET)

motive[blue] @rogue forum, Topic: BACKSTAB NERFED:

□ Wait, motive can you verify some theoretical numbers for me? (assuming ideal distributions)
□ 1) 1000 swings with a 50% chance to hit and a 50% chance to crit (against an equal level unarmored/no defense enemy) would result in
□ a) 500 crits, 500 misses (chance of regular hit = to hit - crit)
□ or
□ b) 500 crits, 250 hits, 250 misses (chance of regular hit calculated independently after crit roll)
□ or
□ c) 250 crits, 250 hits, 500 misses (chance of crit calculated independently after to-hit roll)
□ Sounds like you use option A as your calc, which is cool although not particularly intuitive.

• Option a) is correct. But keep in mind that 50% hit AND 50% crit is not a possibility since in your example the % chance to miss stays the same. The way this works is that when your chance to crit increases, your chance to hit is consumed by that increase. So if I initially have a 25% chance to miss and a 50% chance to crit, then I have a 25% chance to hit. If I increase my crit chance to 55%, then my chance to hit (not crit) becomes 20%.

□ As for attack skill vs defense skill, am I correct to assume that the formula is crit adjustment = (ATTACK SKILL - DEFENSE SKILL) * 0.04?
□ So if a rogue with 325 dagger skill and 50% crit attacks an enemy with 300 defense, the rogue's effective crit is 51%, and attacking a warrior with 425 defense the effective crit is 46%?
• Yes, this is correct, applying the formula to a rogue with 325 dagger skill vs. a target with 300 defense increases the rogue's crit chance to 51%.
• (325 – 300)*0.04 = +1%
• Your second assumption is also correct, applying the formula to attacking a warrior that has 425 defense yields a -4% to crit rate, dropping this rogue's crit chance vs. the warrior to 46%.
• (325 – 425)*0.04 = -4%

□ As for the original poster, Blizz, you might want to reverify the (ATTACK SKILL - DEFENSE SKILL)*0.04 formula, as if that suddenly became (0 - DEFENSE SKILL) * 0.04 while tweaking defense, it would explain the ~12% drop in crit percentage.

• I can't make any assumptions about the original poster's data because the weapon skill and target defense ratings were not posted. As many people have stated above also, though, probability distribution is not always perfect and there is still a chance that one can be "unlucky". It's not impossible to toss tails 50 times in a row, and it's not impossible to only crit 40% of the time when the chance to do so is 50%.

Avoidance cap

Tseric[blue] @warrior forum, Topic: 60% Avoidance cap?

□ I've heard this mentioned a couple of times, is this true in regard to tanking? What specifically is it referring to regarding our abilities?

• I have not seen any information which would lead me to believe there is a cap on these abilities. The only limitation I can see would be in the amount of gear you could stack with bonuses for each ability.

□ [Obould] I tested this by gathering a group of 10 Kul Tiras mobs (levels 5 and 6) together and let them attack me for 20-25 minutes in my tank gear. I didn't use any abilities (I was AFK watching TV for most of the time). Upon returning, my life total was exactly the same as when I began. It was obvious from Scrolling Combat Text that every single attack was being dodged, blocked, parried, or missing.
□ [Lomr]
□ Block 25.70%
□ Parry 25.27%
□ Dodge 21.76%
□ Miss 20.10%
□ I think the discrepancy in block is because I couldn't always keep the 7-8 mobs hitting me directly in front, and occasionally I'd have to reposition a new mob; however, this testing clearly blows away a 60% or even 90% cap in damage mitigation. Basically the sky is the limit, and Blizz will have to itemize properly to make sure 100% mitigation does not occur.
□ [Greysen]
□ Level 5 Wendigo
□ 2429 Attacks
□ 535 Missed (22.03%)
□ 576 Dodged (23.71%)
□ 716 Parried (29.48%)
□ 602 Blocked (24.78%)

Firefox 19:56, 29 March 2006 (EST)

The [Obould], [Lomr] & [Greysen] tests are very interesting. But the level difference between the player and the mob may skew the results. So I decided to do a test against a higher-level opponent to confirm it. I chose a Winterspring bear because it is level 56 and has no special abilities and no casts.

Context:
• Winterspring, 19 May 06
• Mob "Marteleur Crocs acérés" level 56 vs "Caiden" level 60 / Def 397
• My abilities for this test: dodge 25.38%, parry 15.88%, block 24.88%. No buffs, only a healing potion +12hp/5sec

I aggroed the mob in defensive stance, but I let him hit me, doing nothing. When I reached 2300 health points I cast my "fear" ability and healed myself with a bandage. The "fight" ran over 1000 attacks and lasted approximately 40 min.

Results:
□ [Caiden]
□ Level 56 "Marteleur Crocs acérés"
□ 1000 attacks
□ 259 mob hit 25.90%
□ 88 missed 8.80%
□ 283 dodged 28.30%
□ 172 parried 17.20%
□ 198 blocked 19.80%

So the 60% avoidance cap is effectively broken with 65.3%, and 74.1% if the miss rate is included. (screen here) There are some differences between my abilities and these stats. First, I believe the level difference of the mob has increased my dodge/parry capacity. For the block, after a bandage, my shield was on my back every time, so I spent a few seconds each time getting it ready in my hand.
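A quick way to double-check Caiden's arithmetic above is to recompute the percentages from the raw counts. Here is a small Python sketch (mine, purely a sanity check; the counts are taken straight from the report):

```python
# Recompute Caiden's percentages from the raw counts of the 1000-attack test,
# then total up the avoidance with and without the miss rate.
counts = {"hit": 259, "miss": 88, "dodge": 283, "parry": 172, "block": 198}
total = sum(counts.values())  # should be 1000 attacks
rates = {k: 100.0 * v / total for k, v in counts.items()}

# Avoidance from the mob's point of view: dodge + parry + block ...
avoid_no_miss = rates["dodge"] + rates["parry"] + rates["block"]   # 65.3%
# ... and additionally counting the mob's misses:
avoid_with_miss = avoid_no_miss + rates["miss"]                    # 74.1%

print(rates)
print(avoid_no_miss, avoid_with_miss)
```

Both figures agree with the report, and both exceed the supposed 60% cap.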
Rogue Crits

I've experimented with Rogue Crits until I was about to cry blood, and as best I can reckon, the formula on this page is wrong. The formula isn't [5+ AGL/29], it's simply [AGL/29]. The +5% comes from a Talent called Malice. Here's my evidence: I've got a 60 Rogue with 257 AGL, 5/5 Malice, and +11% crit from items. According to the formula above, this should equate to a 29.86% crit rate: [5+(257/29)]+5+11 = [5+8.86]+16 = 13.86 + 16 = 29.86. BUT... my tooltip crit rate is ACTUALLY 24.84. Now, I'm certain that the 0.02% difference is just calculator slop, but the whopping 5% difference HAS to be because of the added 5 in the formula. I respecced recently, and when I did, I lost exactly 5% from Malice, and regained exactly 5% when I re-purchased it. I've rotated out all of my +crit gear, and lost EXACTLY how much was listed on the item (none of my +crit stuff has AGL on it). So it HAS to be the formula that is messed up. Where did this figure come from? It should be Malice + calculation then. CJ 11:20, 17 Feb 2006 (EST)

Druid Spellcrits

I just finished an Undead Stratholme run. I was using the same gear the whole time, and together with buffs I had 276 int in total. My Healing Touches, various ranks, critted at 5.8% according to my damage meter, Recap. My Insightful Hood gained me a spellcrit chance bonus of 1%, so subtracting that I get 276 / 4.8 = 57.5 points of int per percent of spellcrit. Using the formula given in this article, I should have got critical hits on 276/30+1 = 10.2% of all Healing Touches, including the Insightful Hood bonus. Which sucks. Loriel 23:59, 3 Mar 2006 (EST)

About +hit

I've modified some information about chance to hit. If you don't agree, please whine here =) regards carve

Example of the Three Outcomes interpretation

Consider an orc with a knife, and that orc swings it at a dwarf who is sleeping (and cannot react defensively). There are exactly three possible things that can happen. 1.
He can stab the dwarf in the arm, and normally damage him. 2. He can poke the dwarf in the eye, and critically damage him. 3. He can miss entirely, causing no damage.

A crit cannot convert to a miss or hit: If his swing pokes the dwarf in the eye, there is no chance that the orc will miss entirely on that swing. There is also no chance that that swing will instead stab the dwarf in the arm.

A hit cannot convert to a miss or a crit: If his swing stabs the dwarf in the arm, there is no chance that the orc will miss entirely on that swing. There is also no chance that that swing will instead poke the dwarf in the eye.

A miss cannot convert to a hit or crit: If his swing misses entirely, there is no chance that the swing will instead stab the dwarf in the arm. There is also no chance that that swing will instead poke the dwarf in the eye.

(This should be easy to verify with some testing on a Mage/Shaman with Combustion/Elemental Mastery as they will guarantee a crit. Unless ofc Blizzard have made some type of hack with the possibilities for miss on those abilities.)

I removed the above section from the article, as I think that example is irrelevant, pointless and mathematically wrong. --Batox 06:09, 7 June 2006 (EDT)

Can anyone source the cap on +hit? I've got one person telling me it's 5%, another 6% (or 5.6), and another saying there is no cap. --Morbid-o 10:19, 19 July 2006 (EDT)

There is no cap. What they're talking about is that all attacks except Dual-Wielding hits (base 76% hit chance) have a base 95% hit chance. Therefore against same-level targets you only benefit from at most 5% hit. Except people thinking +5% is enough are failing to take into account what happens when the target is a higher level than you. Firstly, against players and mobs up to +3 levels you benefit from up to 5.6% due to your weapon skill vs. their defense skill, at 0.04% per point of difference; 3 levels == 15 skill, 15 * 0.04 = +0.60% miss chance. The 6% is simply because +hit only comes in multiples of 1%.
And note that in the case of players you will likely encounter warriors and possibly druids with significant +defense, so even though you have 300 weapon skill (maybe ~310 with the right items) they could easily have 350 or higher, depending on how PvP-tuned their gear is. So that's the 5.6% and 6% figures you've quoted explained, but ...

...that's only the weapon skill vs defense adjustment. There's also an adjustment simply for level, which is +1 levels = -1% hit, +2 levels = -2% hit, and +3 levels = -13% (mobs) or -9% (players). So, "it depends what you're fighting, and how you fight it". At level 60 if you PvP primarily then you only need +5% hit, although you will still want more if you Dual-Wield. If you PvE in raids at all you can definitely benefit from more. --Athan 10:39, 19 July 2006 (EDT)

Ah, thanks. I should've been more clear, I understood most of that already. The 'simply by level' adjustment is what I didn't know about. Specifically, I was wondering at what point +hit was redundant for hunters (not a class I've played much with). From the above, I'm thinking that a PvE hunter would need 5% +hit for a same-level target, 13% for the +3 effective boss level, and then another .6% additionally to account for the difference in boss defense and hunter bow skill (assuming no modifiers)? Also, is the %hit modifier for level derived from experimental data? --Morbid-o 13:02, 19 July 2006 (EDT)

Unfortunately I've not seen a solid cite, either of Blizzard-provided info or concrete experimental data, about the +1/2/13% miss chance on mobs of levels above you. The rest is from Blizzard info. Certainly there IS some such additional miss chance, as I know that once a mob reaches +4 levels it gets much, much harder to hit at all (probably something nasty like +25% miss or more then). I might be able to go parse my extensive logs and find data, if I limit it to MC bosses I know they're +3 levels for sure.
--Athan 15:04, 19 July 2006 (EDT) Crit vs Blocking Ok, so we're pretty sure that "+Hit eats Miss" and "+Crit eats Hit" (but +Crit won't overflow to eat Miss). Also Dodge and Parry can be treated as eating first Hit and then Crit, simply because they're both 100% damage mitigation. Now, what about Block ? It's not necessarily 100% damage mitigation, so it applies *to* a melee swing that didn't miss, wasn't parried and wasn't dodged, rather than being instead of a hit/crit. If you look on Formulas:Weapon Skill you'll see some discussion of Crit Cap due to Glancing Blows (which are always normal hits, not crits). In some of the examples there is so little normal Hit left that it would, if there was no blocking, all be turned into Crit. That page implies that a Block cannot be against a Crit, it will instead stop some of the Hit turning into Crit. Is this correct? Does Block chance stop some of the Hit turning into Crit, instead reserving it for Block ? Or can a Critical hit indeed have some of its damage Blocked (heck, maybe all of it)? My own (extensive) combat logs never show a single crit hit with any amount blocked, *BUT* my own Crit is only 28% (a bit higher with raid buffs) so I'd never be running out of pure Hit Chance anyway. I don't use Cold Blood, so can't easily test via that either. Anyone got any combat logs with "You crit <mob> for XXX (YY blocked)." in them ? --Athan 15:05, 13 July 2006 (EDT) Critical Hit chance effect on overall damage The problem with this is what you're talking about when you say "+1% crit increases damage by 1%". Let's think in terms of how much damage we do versus the pure weapon damage if we hit 100% of the time and 100% of those hits were just normal hits. Now, we know that 100% hit is possible (when not dual wielding). We also know that any crit chance eats up hit chance, and those hits that become crits are then double damage (modulo talents increasing this). 
So the actual damage we will do per swing (and thus over time) is:

Damage = BaseDamage * HitChanceNotIncCrit + BaseDamage * 2 * CritChance = BaseDamage * (HitChanceNotIncCrit + 2 * CritChance)

Plugging in 100% hit (enough +hit% for the 95% base), and 5% dodge (which can't ever be negated), this gives us:

Damage = BaseDamage * ((1.00 - 0.05 - 0.25) + 2 * 0.25) = BaseDamage * 1.20

So due to the dodge chance our 25% crit chance is actually only worth 20% extra damage. Now, raise the crit chance by 1%:

Damage = BaseDamage * ((1.00 - 0.05 - 0.26) + 2 * 0.26) = BaseDamage * 1.21

Now our 26% crit chance is actually worth 21% extra damage. That is indeed an increase of 1%. But that is only an increase of 1% to the INCREASE in damage we get from crit. The last editor is correct that this is actually only:

1.21 / 1.20 = 1.00833

i.e. 0.833% extra damage overall from +1% crit. Now if we go to something even more realistic:

1. No base miss (+5% hit or more)
2. 5% dodge
3. 5% parry
4. 5% block
5. 25% crit

Damage = BaseDamage * ((1.00 - 0.05 - 0.05 - 0.05 - 0.25) + 2 * 0.25) = BaseDamage * 1.10
Damage = BaseDamage * ((1.00 - 0.05 - 0.05 - 0.05 - 0.26) + 2 * 0.26) = BaseDamage * 1.11

1.11 / 1.10 = 1.00909

So actually 0.909% extra damage from +1% crit, i.e. the more you're not going to hit, the more +1% crit is worth. I'll go edit that whole section to reflect this :). --Athan 09:01, 18 July 2006 (EDT)

I don't doubt that a simple calculation for the increase in damage from 1% crit is valid for white damage, but for rogues at least it is more complicated, first because they are always dual wielding and second because a significant portion of their total damage is special attacks, which follow different rules. Here are a couple of examples, using a rogue with a base crit rate of 25%, base hit rate of 9% (15% miss rate), attacking a level 60 target that has a 5% dodge rate (striking from behind).
The numbers for a level 63 opponent would be similar as long as the rogue had 310 weapon skill (which he should) - these crit and hit rates put him nowhere near the crit cap and the only difference would be a 0.4% difference in hit, crit, and dodge rates (not insignificant, but I'm lazy).

WhiteDmg = BaseDmg * (CritRate * 2 + (1 - CritRate - MissRate - DodgeRate))
SSDmg = BaseSSDmg * (CritRate * 2.3 + (1 - CritRate - DodgeRate))
BSDmg = BaseBSDmg * ((CritRate + .30)*2.3 + (1 - (CritRate + .3) - DodgeRate))

Sinister Strike and Backstab have a 0% MissRate since our rogue has over 6% to hit, or else that would be in their equations too. Backstab's crit rate is increased by 30% from Improved Backstab. Both Sinister Strike and Backstab have their crit damage bonuses increased to 2.3 from Lethality. Our rogue's damage would be:

WhiteDmg = BaseDmg * (.25*2 + (1 - .25 - .15 - .05)) = BaseDmg * 1.05
SSDmg = BaseSSDmg * (.25*2.3 + (1 - .25 - .05)) = BaseSSDmg * 1.275
BSDmg = BaseBSDmg * (.55*2.3 + (1 - .55 - .05)) = BaseBSDmg * 1.665

Increase his crit rate by 1 and what do you get?

WhiteDmg = BaseDmg * (.26*2 + (1 - .26 - .15 - .05)) = BaseDmg * 1.06
SSDmg = BaseSSDmg * (.26*2.3 + (1 - .26 - .05)) = BaseSSDmg * 1.288
BSDmg = BaseBSDmg * (.56*2.3 + (1 - .56 - .05)) = BaseBSDmg * 1.678

How much of an increase is this?

WhiteDmg increase = 0.95%
SSDmg increase = 1.02%
BSDmg increase = 0.78%

Sinister Strike gets a bigger increase in damage than white damage does, due to Lethality. Backstab gets a lot less of an increase even with Lethality, because increasing your crit rate from 55% to 56% is a much smaller relative increase than from 25% to 26%. How much of an increase in total damage does this translate into? This depends on how much of your damage comes from your special attack and how much comes from white damage, which will depend a lot on your talent build. Your Backstab damage can probably vary from 30% to 40% of your total damage while Sinister Strike is probably only 20-25%.
So dagger rogues get hit double here - not only is their main attack less affected by an increase in crit rate, their main attack accounts for more of their damage. This means dagger rogues get less of an increase in damage than you would think by calculating their increase in white damage, while sword rogues get a little bit more of an increase than you would think.

Total increase:
Dagger rogue, 40% BS, 50% white, 10% misc (poisons, procs, etc): (.4*.78 + .5*.95 + .1) = 0.887%
Dagger rogue, 30% BS, 60% white, 10% misc (poisons, procs, etc): (.3*.78 + .6*.95 + .1) = 0.904%
Sword rogue, 20% SS, 70% white, 10% misc (poisons, procs, etc): (.2*1.02 + .7*.95 + .1) = 0.969%

If we had started with a better-equipped rogue with a 30% crit rate and 14% to hit (which is close to what I have), these numbers get worse:

WhiteDmg = BaseDmg * (.3*2 + (1 - .3 - .10 - .05)) = BaseDmg * 1.15
SSDmg = BaseSSDmg * (.3*2.3 + (1 - .3 - .05)) = BaseSSDmg * 1.34
BSDmg = BaseBSDmg * (.6*2.3 + (1 - .6 - .05)) = BaseBSDmg * 1.73

Add 1% crit:

WhiteDmg = BaseDmg * (.31*2 + (1 - .31 - .10 - .05)) = BaseDmg * 1.16
SSDmg = BaseSSDmg * (.31*2.3 + (1 - .31 - .05)) = BaseSSDmg * 1.353
BSDmg = BaseBSDmg * (.61*2.3 + (1 - .61 - .05)) = BaseBSDmg * 1.743

WhiteDmg increase = 0.87%
SSDmg increase = 0.97%
BSDmg increase = 0.75%

Total increase:
Dagger rogue, 40% BS, 50% white, 10% misc (poisons, procs, etc): (.4*.75 + .5*.87 + .1) = 0.835%
Dagger rogue, 30% BS, 60% white, 10% misc (poisons, procs, etc): (.3*.75 + .6*.87 + .1) = 0.847%
Sword rogue, 20% SS, 70% white, 10% misc (poisons, procs, etc): (.2*.97 + .7*.87 + .1) = 0.903%

Crit hit formula error?

I'm not really willing to go "fixing things" that seem incorrect to me but may be questionable with respect to the author's intent. Thus, I'll post here and, hopefully, someone else can set me straight or fix the below outlined error.
In Base chance to crit in Melee the author uses a "template" style formula as the basis of all following example formulas, as follows:

Damage = BaseDamage * HitChanceNotIncCrit + BaseDamage * 2 * CritChance = BaseDamage * (HitChanceNotIncCrit + 2 * CritChance)

Example formula using the "template" from this article:

Damage = BaseDamage * ((1.00 - 0.05 - 0.25) + 2 * 0.25) = BaseDamage * 1.20

If you look at the template and the resulting "valued" formula, you can see that the template has an extra value and operation involved, as in:

Damage = BaseDamage * HitChanceNotIncCrit + BaseDamage * 2 * CritChance = BaseDamage * (HitChanceNotIncCrit + 2 * CritChance)

As I said, there may be some purpose or reason for having BaseDamage in there and not utilized in the resulting formulas, but I see no purpose behind having this extra operation in the definition -- it would invalidate results if it were used as published. In my opinion the formula should probably read as follows, based upon the resulting formulas:

HitChanceNotIncCrit = (HitChance - DodgeChance - CritChance)
Damage = BaseDamage * HitChanceNotIncCrit + 2 * CritChance = BaseDamage * (HitChanceNotIncCrit + 2 * CritChance)

The addition of a formula HitChanceNotIncCrit allows for more understandable viewing of the resulting formulas. Eleazaros 16:33, 2 September 2006 (EDT)

Well, your thinking is basically the same. However, your formula is a bit erroneous. HitChanceNotIncCrit should be:

HitChanceNotIncCrit = 100% - DodgeChance - ParryChance - BlockChance - CritChance

But again, this is very basic and should be assumed by any readers. HitChance is obviously already the chance that it'll hit, which takes into account any type of evasion chance. Secondly, there is most definitely a use for BaseDamage * 2 * CritChance. You screw up the formula otherwise.

Damage = BaseDamage * HitChanceNotIncCrit + 2 * CritChance = BaseDamage * (HitChanceNotIncCrit + 2 * CritChance)

This is horridly bad. Note the parentheses.
Do you remember your algebra? The two lines don't even equal each other. The second BaseDamage is melded into the first one since:

Damage = BaseDamage * (HitChance + 2 * CritChance) = BaseDamage*HitChance + BaseDamage*2*CritChance.

Also, if you think about it, 2 * CritChance by itself has no use at all in calculating damage. It's just the chance to do a crit, not how much damage you'll do with that crit. Pzychotix 19:22, 2 September 2006 (EDT)

Well, I didn't want to say it... but to correctly "merge" probability values, you multiply or divide them, and you also will need the actual chance for your hit, not the chance you're not hitting. So it would be...

HitChanceNotIncCrit = (1-DodgeChance) * (1-ParryChance) * (1-BlockChance) * (1-whateverotherchancesyoumightfind)

and I don't really think that the crit chance should be in this value -watchout 18:59, 4 September 2006 (EDT)

Oh dear, someone really doesn't have their head around this. From the top. The 'total' damage you will do is your normal hit damage plus your crit hit damage. For this we're still ignoring things like Glancing Blows. Now, your chance to get a normal hit is:

NormalHitChance = 100% - MissChance - TargetDodgeChance - TargetBlockChance - TargetParryChance - AttackerCritChance

i.e. it's what's left over when all the other possibilities (ignoring Glancing Blows for now) are taken into account. Note also that TargetBlockChance and TargetParryChance are often ignored on the assumption we're attacking from behind. So, our damage from normal hits is:

NormalHitsDamage = BaseDamageofWeapon * NormalHitChance = BaseDamageofWeapon * (1.00 - MissChance - TargetDodgeChance - TargetBlockChance - TargetParryChance - AttackerCritChance)

Yes, I did a switch from 100% to 1.00 notation there. Now work out the damage from a Crit 'hit'.
In WoW Blizzard simply made this double the normal hit damage (if there are no talents or other abilities specifically increasing crit damage):

CritHitsDamage = BaseDamageofWeapon * 2 * CritHitChance

That's where the "* 2" comes from. Now add it together:

TotalHitDamage = BaseDamageofWeapon * NormalHitChance + BaseDamageofWeapon * 2 * CritHitChance

Which, given BaseDamageofWeapon is a common factor either side of the + sign, can be re-written as:

TotalHitDamage = BaseDamageofWeapon * (NormalHitChance + 2 * CritHitChance)

You can of course then start factoring in any Crit damage modifiers:

TotalHitDamage = BaseDamageofWeapon * (NormalHitChance + 2 * CritHitChance * CritDamageModifiers)

Clearer now? Athan 10:11, 13 September 2006 (EDT)

Beaza's Changes

Reverted the numbers to 1.12 until BC is released since values are subject to change. --Beaza 18:15, 30 November 2006 (PST)
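The single-roll model in the discussion above is easy to machine-check. Here is a small Python sketch (my own helper function, not from the page; glancing blows are ignored, as in the derivation) that reproduces the 1.20/1.21 and 1.10/1.11 multipliers and the resulting value of +1% crit:

```python
# Expected damage per swing as a multiple of base weapon damage,
# under the single-roll model: one attack table where crit eats hit.
def damage_multiplier(miss, dodge, parry, block, crit, crit_mult=2.0):
    normal_hit = 1.0 - miss - dodge - parry - block - crit
    return normal_hit + crit_mult * crit

# No miss, 5% dodge only, 25% vs 26% crit (the 1.20 / 1.21 example):
m25 = damage_multiplier(0.0, 0.05, 0.0, 0.0, 0.25)
m26 = damage_multiplier(0.0, 0.05, 0.0, 0.0, 0.26)

# No miss, 5% each of dodge/parry/block (the 1.10 / 1.11 example):
n25 = damage_multiplier(0.0, 0.05, 0.05, 0.05, 0.25)
n26 = damage_multiplier(0.0, 0.05, 0.05, 0.05, 0.26)

print(m25, m26, m26 / m25)  # 1.20, 1.21, relative gain ~0.833%
print(n25, n26, n26 / n25)  # 1.10, 1.11, relative gain ~0.909%
```

The more avoidance there is, the lower the baseline multiplier, so the same flat +1% crit is worth relatively more.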
{"url":"http://www.wowwiki.com/Talk:Attributes/Archive01","timestamp":"2014-04-20T16:13:58Z","content_type":null,"content_length":"99478","record_id":"<urn:uuid:642170f9-e006-4d5d-9e14-8e86c3487d8d>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
Unexpected subspaces of tensor products

Villanueva, Ignacio and Pérez García, David and Cabello Sánchez, Félix (2006) Unexpected subspaces of tensor products. Journal of the London Mathematical Society. Second Series, 74 (2). pp. 512-526. ISSN 0024-6107

Official URL: http://jlms.oxfordjournals.org/content/74/2.toc

We describe complemented copies of ℓ_2 both in C(K_1) ⊗_π C(K_2) when at least one of the compact spaces K_i is not scattered, and in L_1(μ_1) ⊗_ε L_1(μ_2) when at least one of the measures is not atomic. The corresponding local construction gives uniformly complemented copies of the ℓ_2^n in c_0 ⊗_π c_0. We continue the study of c_0 ⊗_π c_0, showing that it contains a complemented copy of Stegall's space c_0(ℓ_2^n) and proving that (c_0 ⊗_π c_0)'' is isomorphic to ℓ_∞(ℓ_∞^n ⊗_π ℓ_∞^n), together with other results. In the last section we use Hardy spaces to find an isomorphic copy of L_p in the space of compact operators from L_q to L_r, where 1 < p, q, r < ∞ and 1/r = 1/p + 1/q.
{"url":"http://eprints.ucm.es/11823/","timestamp":"2014-04-19T17:30:16Z","content_type":null,"content_length":"25511","record_id":"<urn:uuid:ed960684-43bd-4924-9375-f3925ea065cd>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Some Notes in Hostility Toward Subtyping

What follows is a session of thinking out loud, so it's not to be taken too seriously. The more I think about it, the more I convince myself that the idea of subtyping and class hierarchies is a mistake. "Inheritance" is a good way to lock down a design so that it becomes rigid and brittle. If you remember your (now scorched, I trust) GoF book, its main cargo was "composition over inheritance". So why do we want this mechanism at all? Because we want type substitution. We say that if X is a subtype of Y, then X can be substituted wherever Y is expected. Informally, we want to say that an X is a Y. But what does this mean? It means that the set of all values in X is a subset of all the values in Y. So if you have a value x in X, then x is also in Y. In other words, X implies Y, or (X => Y). Hang on. This is just the type signature for a function. When we observe this, we realise that we can substitute functions for subtyping. Moreover, we can make those functions implicit (as in Scala) and then type substitution will work just as if we were using subtypes (hopefully without loss of type inference). As a bonus, our implicit type substitutions have scope, while subtyping is a global declaration. For example, take a look at the function type (A => B), recalling that this is conceptually the same as saying "A implies B", or even "A is a subtype of B". Using subtyping, we could have types Foo and Bar, and say "Bar extends Foo". But instead of subtyping, in Scala we could have an implicit function:

implicit def fooBar(b: Bar): Foo = b.foo

The easiest way to implement this is to have Bar wrap a value of type Foo by accepting it in its constructor. Then the Bar.foo method simply returns this value. But it can really be implemented any way we want. For example, if both Foo and Bar take a parameter of type Int, then we can extract it from one when constructing the other. There's a slight problem with implicit functions like this.
Somebody else may have defined a function with the same type already, with a totally different intent. To remedy that, we can create a type for a specific kind of conversion to Foo; one that implies the relation we want to express. Something like…

trait IsAFoo[A] {
  def apply(a: A): Foo
}

This is now unambiguous. This kind of construct is a typeclass. IsAFoo classifies all types that imply type Foo. We can supply an implicit instance of this typeclass for Bar:

implicit val bar_is_a_foo = new IsAFoo[Bar] {
  def apply(b: Bar) = b.foo
}

And wherever we want to accept something that "is a" Foo, we accept an implicit parameter as evidence that it is indeed a Foo.

def doSomethingWithFoo[A](a: A)(implicit foo: IsAFoo[A]) = foo(a).methodOnFoo

We can call this method with a value of type Bar, because of the existence of the implicit instance bar_is_a_foo. In fact, Scala has an even nicer syntax for this, using "view bounds". I leave it to you to check that out. What I want to impress on you is how flexible typeclasses are. We're not constrained to using this mechanism to substitute for subtyping. We can use it to do the converse, i.e. supertyping an existing type. Or we can have the conversion go both ways to express isomorphisms between types. The pattern here is that we want to be able to state any kind of relation between types. If Scala had the ability to state functional dependencies, the typeclass mechanism truly could obviate subtyping, with the added bonus that we could state any kind of type-level relation that we want, rather than just type order. I could talk about the downside of this, but… I could go on forever.

Variance of Functors

So, speaking of type order, there's a tie-in here with variance. This is one of the things that trips people up when thinking about class hierarchies. I know it trips me up. But variance is much easier to reason about if we think of the subtype/supertype relation as just a function.
To compare, here's what Wikipedia has to say about variance in class hierarchies:

If class B is a subtype of class A, then all member functions of B must return the same or narrower set of types as A; the return type is said to be covariant. On the other hand, the member functions of B must take the same or broader set of arguments compared with the member functions of A; the argument type is said to be contravariant. The problem for instances of B is how to be perfectly substitutable for instances of A. The only way to guarantee type safety and substitutability is to be equally or more liberal than A on inputs, and to be equally or more strict than A on outputs.

OK, but why is it the case that this is the only way to guarantee substitutability and type-safety? To understand that, it helps to throw away the notion of "subtype", and simply think only in terms of functions, instead of the mixed notions of "subtype" and "member function". Try translating that snippet from Wikipedia so that it's worded in terms of functions. It gets pretty convoluted.

Proceeding from the premise that functions and subtype relations are interchangeable, we can derive a definition of co- and contravariance simply from the definition of substitutability. We start with this:

If B is a subtype of A, then every subtype of B is also a subtype of A, and every supertype of A is also a supertype of B.

Now, if we take (A <: B) to mean "A is a subtype of B", and (A => B) to mean "function from A to B", or equivalently "A implies B", we can write this property as:

(B <: A) <==> ((C <: B) => (C <: A)) and ((A <: C) => (B <: C))

Remember that you can substitute functions for subtyping and vice versa, so any (=>) sign above can be replaced with the (<:) sign, or the other way around, and the meaning stays intact.
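To make that interchangeability concrete, here is a small sketch of my own (Animal and Dog are invented names, not from the post): instead of declaring a subtype relation, we hand-write the witness function that the coercion would otherwise provide.

```scala
trait Animal { def name: String }
case class Dog(dogName: String) // deliberately NOT declared a subtype of Animal

// The "Dog <: Animal" relation, expressed as an ordinary function:
val dogIsAnimal: Dog => Animal =
  d => new Animal { def name = d.dogName }

// Anything expecting an Animal can accept a Dog via the witness:
def greet(a: Animal) = "hello, " + a.name
greet(dogIsAnimal(Dog("Rex")))
```

Make the witness implicit and the call site looks exactly as it would under subtyping.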
So let's restate this property purely in terms of functions:

If there exists a function from B to A, then for every function from C to B there exists a function from C to A, and for every function from A to C there exists a function from B to C.

(B => A) <==> ((C => B) => (C => A)) and ((A => C) => (B => C))

Moving (B => A) to the left of the <==> sign, we can infer two properties of (B => A):

(B => A) => (C => B) => (C => A)
(B => A) => (A => C) => (B => C)

Recalling that function application is logical implication, both of these properties evaluate to true, for all A, B, and C. Let's use Wolfram Alpha to confirm this for us. See here and here.

Let's now say that C is a fixed type. Remembering what we did above with typeclasses, this gives rise to two such typeclasses, representing (C => A) and (A => C), respectively, for all A:

trait FromC[A] {
  def fromC(c: C): A
}

trait ToC[A] {
  def toC(a: A): C
}

To restate the properties above:

(B => A) => FromC[B] => FromC[A]
(B => A) => ToC[A] => ToC[B]

In other words, mapping a function (B => A) over FromC[B] results in FromC[A]. So the implication is preserved across the mapping. Mapping a function (B => A) over ToC[A] results in ToC[B]. So the implication is reversed across the mapping. This means that FromC is a covariant functor and ToC is a contravariant functor, by the definition of co- and contravariance. So now we have anchored the notion of substitutability to the variance of functors.

For reference, here's my off-the-cuff definition of co- and contravariance, along with some preliminaries:

Definition of Higher-Order Function
A higher-order function (HOF) is a function that takes another function as its argument.

Definition of covariance
A unary type constructor T[_] is a covariant functor if and only if there exists a HOF with the following type, for all A and B: (A => B) => T[A] => T[B].
Definition of contravariance
A type constructor T[_] is a contravariant cofunctor if and only if there exists a HOF with the following type, for all A and B: (B => A) => T[A] => T[B].

Note that these last two are exactly the properties above that we got for FromC and ToC, which we derived directly from the definition of subtype and supertype. The benefits of subtyping, namely type substitution, can be expressed more naturally with a mechanism for classifying types.

4 thoughts on "Some Notes in Hostility Toward Subtyping"

1. In order to abandon the subtype relation, I think that you have to also abandon mutable data structures. If you retain mutability in the absence of subtyping, you have no means to restrict the scope of that mutability, at least not without creating essentially the same mess that subtyping creates. This comes to the whole notion of what subtyping is for in the first place – while substitutability is an important issue, I think that controlling the scope of what operations can change values within a mutable data structure — encapsulation — is probably more important.

□ Nuttycom, I definitely agree that you have to abandon mutable data structures.

□ Of course, there's always the other reason why people use subclassing–code reuse–but I suppose that that could use a bit of decoupling from the subtyping thing. For example, any code used to implement a Transformer can be used to implement a Transformer, but Transformer super Transformer, ya know? Except, (dammit)… What about multimethods? How the hell do you reuse code without subtyping there?! Also, implicits only work at compile-time, whereas ISA relationships always work.
Would the classic

sealed interface List<T> {
  T getHead();
  List<T> getTail();
  <U> List<U> cons(U newHead);
}

class Cons<T> implements List<T> {
  private final T head;
  private final List<T> tail;
  Cons(T head, List<T> tail) {
    this.head = head;
    this.tail = tail;
  }
  T getHead() { return head; }
  List<T> getTail() { return tail; }
  <U> List<U> cons(U newHead) { return new Cons<U>(newHead, this); }
}

class Nil<T> implements List<T> {
  T getHead() { throw new UnsupportedOperationException("Nil has no head."); }
  List<T> getTail() { throw new UnsupportedOperationException("Nil has no tail."); }
  <U> List<U> cons(U newHead) { return new Cons<U>(newHead, this); }
}

How would that work with implicits, eh?

☆ Multimethods can just be done with multi-parameter type classes. No mystery there. As for implementing List without subclassing, well…

trait MyList[A] {
  import MyList._
  def foldr[B](z: => B, f: (=> A, => B) => B): B
  def ::(a: A) = new MyList[A] {
    def foldr[B](z: => B, f: (=> A, => B) => B) = f(a, MyList.this.foldr(z, f))
  }
  def head = foldr[A](error("head of empty list"), (a, b) => a)
  def tail = foldr[Option[(A, MyList[A])]](None, {
    case (a, None) => Some((a, MyNil[A]))
    case (a, Some((b, bs))) => Some((a, b :: bs))
  }) match {
    case Some((a, b)) => b
    case None => error("tail on empty list")
  }
}

object MyList {
  def MyNil[A]: MyList[A] = new MyList[A] {
    def foldr[B](z: => B, f: (=> A, => B) => B) = z
  }
}
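Stepping back from the comments to the FromC and ToC typeclasses in the main post, here is a sketch of my own (assuming the post's definitions, for some fixed type C) of the higher-order functions that witness FromC's covariance and ToC's contravariance:

```scala
// Assumes the post's definitions:
//   trait FromC[A] { def fromC(c: C): A }
//   trait ToC[A]   { def toC(a: A): C }

// (B => A) => FromC[B] => FromC[A] -- the implication is preserved
def mapFromC[B, A](f: B => A)(fb: FromC[B]): FromC[A] =
  new FromC[A] { def fromC(c: C) = f(fb.fromC(c)) }

// (B => A) => ToC[A] => ToC[B] -- the implication is reversed
def contramapToC[B, A](f: B => A)(ta: ToC[A]): ToC[B] =
  new ToC[B] { def toC(b: B) = ta.toC(f(b)) }
```

In each case the body is just function composition; the only freedom is on which side of the wrapped arrow f gets composed, and that is exactly what flips the variance.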
Question 420: In one region, the September energy consumption levels for single-family homes are found to be normally distributed with a mean of 1050 kWh and a standard deviation of 218 kWh. Find P45, which is the consumption level separating the bottom 45% from the top 55%.
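For what it's worth, this percentile falls straight out of the inverse normal CDF; a quick sketch in Python (my own addition, not part of the original page):

```python
from statistics import NormalDist

# September consumption: normal with mean 1050 kWh, sd 218 kWh
consumption = NormalDist(mu=1050, sigma=218)

# P45 is the 45th percentile: the x with P(X <= x) = 0.45.
# Equivalently x = mean + z * sd, where z = inverse-CDF(0.45), about -0.126.
p45 = consumption.inv_cdf(0.45)
print(round(p45, 1))
```

Since 45% lies slightly left of center, P45 comes out just below the mean, around 1022.6 kWh.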
Homework Help Post a New Question | Current Questions 6th grade Math WOW! This looks like algebra, not 6th grade. Tuesday, April 23, 2013 at 7:09pm See Related Questions:Fri,8-20-10,6:17pm Your 1st term should be 3x^2. Wednesday, April 17, 2013 at 9:11pm Here's a little perl program that handles the job: sub Ceil { my $x = shift; int($x+.9999); } print "Numeric grades for midterm and final: "; my ($m,$f) = split /[,\s]/,<STDIN>; $avg = Ceil(($m+2*$f) /3); $grade = qw(F F F F F F D C B A A)[int($avg/10)]; ... Wednesday, April 17, 2013 at 11:41am The grade of a road or a railway road bed is the ratio rise/run, usually expressed as a percent. For example, a railway with a grade of 5% rises 5 ft for every 100 ft of horizontal distance. 1. The Johnstown, Pennsylvania, inclined railway was built as a "lifesaver" ... Monday, April 15, 2013 at 5:23pm dialect is used to a-stablish setting and character b-describe the atmosphere c-support the main idea d-help readers visualizwe events mi 1st answer is d then a Sunday, April 14, 2013 at 9:27pm Please ignore my 1st response. My computer malfunctioned. I will have to repeat the entire process. Sunday, April 14, 2013 at 3:35pm Algebra(1st one was typo) so is this one, try doing it the way I did it in your original post Thursday, April 11, 2013 at 7:46pm 8th grade 10th grade Tuesday, April 9, 2013 at 1:57pm an apartment building has the following apartments: 2 bdrm 3 bdrm 4 bdrm 1st floor Sunday, April 7, 2013 at 11:08pm Discrete Math Assuming the trophies are identical, or must be awarded in order 1st-to-last, then we just need to count the ways of choosing 8 people from 30. C(30,8) = 5,852,925 Thursday, April 4, 2013 at 1:40pm 1st times 2 8u + 6v = -2 2nd by 3 9u + 6v= 3 subtract them u = 5 Tuesday, April 2, 2013 at 8:34pm Find the 1st four hexagonal numbers and the formula The difference of two squares a2 - b 2 can be found in 16sq2 13sq2 = 3(16 + 13) = 3 . 
29 = 87 Monday, April 1, 2013 at 10:56pm An elementary school collected a total of 240 cans during a food drive. Grade 3 students collected 1/3 of all the cans grade 4 students collected 52 cans, and the rest of the cans were collected by grade 5 students. How many cans did grade 5 collect? A.28 B.80 C.108 D.188 Friday, March 29, 2013 at 11:34pm 5th grade math how do i convert milligrams to grams- i am in fifth grade-easiest way please. Thursday, March 28, 2013 at 9:25am I agree with your 2nd and 3rd answers. I have no idea about the 1st one, though. Wednesday, March 27, 2013 at 8:18am Ya. i got 1st 1. Rb=82.0/55.79*1*10^-6= 1.47*10^6 Monday, March 25, 2013 at 12:41am Why should grade 8 students have to do volunteer hours to graduate grade 8? Please provide me with atleast 6 points and so I can expand on it. Your help is very much appreciated!:) Saturday, March 23, 2013 at 5:38pm What were your teacher's directions? To write a diamante of synonyms (1st and last nouns mean similar things)? Or to write a diamante of antonyms (1st and last nouns mean opposites)? When I assigned these, I had students do antonym diamantes, not synonym, but you must do ... Wednesday, March 20, 2013 at 6:58pm 1st one: looks like they are adding 5 , so arithmetic C is correct 2nd: terms would be 15 16.5 18 19.5 ... You are correct again with B Tuesday, March 19, 2013 at 11:43am Algebra 1 divide the 2nd by -2 to get 10x + 3y = 4 which is the same as the 1st equation, so there is an infinite number of solutions (You are basically given twice the same line) Sunday, March 10, 2013 at 12:16pm ALL subjects.. Hi!! Well,I'm in tenth grade And I was wondering if there are any practice test websites for the 10th grade Test.. I believe Pct? Please!! Help.. Friday, March 8, 2013 at 8:22am can someone help me with the pythagorean theorem? VERY HARD IM IN 7TH GRADE AND THIS IS SOME 7TH GRADE MATH HOMEWORK THAT I NEED HELP WITH!!!!!!!!!!! 
Thursday, March 7, 2013 at 5:26pm let the time driven by 1st bike be t hrs then the time driven by 2nd bike is t-1 hrs distance covered by 1st bike = 47t miles distance covered by 2nd bike = 54(t-1) 47t + 54(t-1) = 52 47t + 54t - 54 = 52 101t = 106 t = 106/101 hrs or appr 1 hr and 3 minutes Tuesday, March 5, 2013 at 3:09pm If you walk 2 km from your house to a store then back home, whats your displacement? (Displacement is the difference between 1st position and final position) Monday, March 4, 2013 at 12:20pm draw an angle x in the 1st quadrant with sinx=3/5, and an angle y in the 2nd quadrant with cosy=-12/13, then determine the exact value of tan(x+y). Friday, March 1, 2013 at 12:46pm 1st, find the radius of the orbit of the moon. 2nd, then divide 34,200 from the result. Tuesday, February 26, 2013 at 4:26am Algebra 1 Even numbers have a differnce of two. 2, 6, and 8 are consecutive even numbers. Let n = 1st Let n + 2 = 2nd Let n + 4 = 3rd 3(n) - 2(2n+2) = 1/3(n+4) Here is the equation. Can you solve it? Friday, February 22, 2013 at 5:14pm THE #1ST (1+sqrt(2))/sqrt(6) Friday, February 22, 2013 at 5:52am Assuming that we can't start with 0 , or else it would be a 2 - digit number, there are 9 choices for the 1st, then only 1 at the end the middle can be any of the 10 number of such numbers = 9 x 10 x 1 = 90 Wednesday, February 20, 2013 at 10:22am I've corrected my mistake in the 1st equation V(x) = - v cos α +u cosβ = - 190cos25 + 45cos15=-172.2+43.5= -128.7 km/h, V(y)= v sinα+u sinβ= 190sin25 + 45sin15= 80.3 + 11.6 = 91.9 km/h, V=sqrt{V(x) ²+V(y)²}= 158.1 km/h I ... Monday, February 18, 2013 at 5:01pm so, make the suggested substitution: p - 5q = 1/q 2p + q = s I think there's a typo in the 1st equation. Don't expect to see a quadratic. Tuesday, February 12, 2013 at 4:58pm A rather lengthy question, let's find the intersection points. 
1st and 2nd line: 3x+2y=1 3x+2(x-2) = 1 5x = 5 x=1 , then y = 1-2 = -1 ----> point A(1,-1) 2nd and 3rd: 4x - 9y = -22 4x - 9(x-2) = -22 -5x = -40 x = 8 , then y = 2-8 = -6 ----> point B(5,-6) 1st and 3rd... Tuesday, February 12, 2013 at 12:14pm thanks,so what is the formula for the sum of 1st even number ? Monday, February 11, 2013 at 5:33pm 1st Multiply 2/3 x 5/6 = 5/9. Then Multiply 5/9 x 14 = 7 7/9. SO THE ANSWER IS 7 7/9. Monday, February 11, 2013 at 4:00pm Clearly the 1st and 3rd letter must be the same, but the middle can be any of the 7 letters number of ways = 7 x 7 x 1 = 49 Monday, February 11, 2013 at 10:51am A rock is dropped from a treetop 19.6 m high, and then, 1.00 s later, a second rock is thrown down. With what initial velocity must the second rock be thrown if it is to reach the ground at the same time as the first? I'm not sure how to find the initial velocity of the ... Sunday, February 10, 2013 at 7:00pm 1st bounce=0.2(4/5)=0.16m 2nd bounce=0.16(4/5)=0.128m or 128cm Sunday, February 10, 2013 at 1:14am slope of 1st line is (7-1)/(5-a) = 6/(5-a) slope of 2nd line is (1-8)/(a+2-8) = -7/(a-6) If the lines are parallel, the slopes are equal, so 6/(5-a) = -7/(a-6) 6(a-6) = -7(5-a) 6a-36 = 7a-35 -a = 1 a = -1 Wednesday, February 6, 2013 at 12:41pm what steps? They gave you a relation. Take the set of the 1st element of each pair. If you don't know how to find the domain/range of a relation when they give you each element and its image, you are deep in it. Monday, February 4, 2013 at 2:32pm sorry wrong post: Given: mass= 4.10 kg k= 210 N/m x= 2.60x10^2 m just substitute to W=1/2kx^2 and you'll get the answer for the 1st question sorry again Sunday, February 3, 2013 at 4:25pm 0.0625J is the answer for the 1st question W=1/2kx^2 =1/2(200 N/m)(0.025)^2 =0.0625J Sunday, February 3, 2013 at 4:13pm The First Stone. d1 = 0.5g*t^2 = 4.9*(1.5)^2 = 11 m. = Distance traveled by 1st stone after 1,5 s. V^2 = Vo^2 + 2g*d. 
V^2 = 0 + 19.6*11 = 215.6 V = 14.68 m/s = Velocity of 1st stone after 1.5 s. d1 = Vo*t + 0.5g*t^2 = 200-11. 16.68t + 4.9t^2 = 189 4.9t^2 + 16.68t - 189 = 0. ... Friday, February 1, 2013 at 5:27pm I'm in 8th grade and this is a 9th grade class. I was just wondering because she gave it to us without going over it slowly... Thursday, January 31, 2013 at 9:56pm In Marissa's Calculus course, attendance counts for 5% of the grade, quizzes count for 15% of the grade, exams count for 60% of the grade, and the final exam counts for 20% of the grade. Marissa had 100% average for attendance, 93% for quizzes, 82% for exams, and 81% on ... Sunday, January 27, 2013 at 8:27pm a boat travels at 16 knots 1st leg: 6hr @ S63W 2nd leg: 4hr @ N10E find time and bearing back to port Wednesday, January 23, 2013 at 11:44am money conversion problem I have a currency conversion problem that I have found difficult to solve. Suppose that a currency speculator believes that as time goes on the exchange rate is going to fall. On November 1st the speculator converts $1,000 US into Canadian dollars (just call the converted ... Tuesday, January 22, 2013 at 3:01pm Analytic Geometry Give the equation of the circle tangent to both axes of radius 5 and in the 1st quadrant. Can you help me solve this? Monday, January 21, 2013 at 5:31pm The 6th term of an arithmetic sequence is x while the 11th is y find the 1st 2 terms Sunday, January 20, 2013 at 9:04am 6th grade s.s forgot textbook in locker!!!! You'll be better off taking a late grade than cheating. You could look each of these up on Google. Wednesday, January 16, 2013 at 6:53pm 2nd Grade Math I'm in 6th grade, the answer is, 10. :) Monday, January 14, 2013 at 7:40pm 8th grade math I dont get this but this isint 8th grade math its 7th grade Thursday, January 10, 2013 at 2:27pm 8th grade math im in 5th grade and i no round up so 7 Wednesday, January 9, 2013 at 5:50pm Govt school math subtract the first from the second ... 
2x+y = 27 x + y = 18 ----------- x = 9 sub into the 1st: 9+y = 18 y = 9 so x=9 and y=9 Wednesday, January 9, 2013 at 9:05am Govt school math Solve the question by substitution method? Equation 1st x+y=18 equation 2nd 2x+y=27 Wednesday, January 9, 2013 at 6:21am What do want done ? I can "solve" #2 : sub the first into the 2nd ... 3(y-2) - y = 6 3y - 6 - y = 6 2y = 12 y = 6 then x = y-2 = 6-2 = 4 for the 1st and 3rd, I have no clue what the xt1 and 2xt3 are supposed to be. Is the t a new variable? is it t^3 ? Tuesday, January 8, 2013 at 7:58pm but Ms. Sue im in grade 11 and in grade 10 in science I did frog disection Sunday, January 6, 2013 at 2:25pm you cannot have the "sum of the 20th term". If you mean the sum of the 1st 20 terms, then T10 = a+9d=34 S20 = 10(2a+19d) = 710 so, a=7, d=3 T25 = 7+24*3 = 79 Wednesday, January 2, 2013 at 5:07am one way: 3a+2p = 51 2a+3p = 46.50 add them up: 5a+5p = 97.50 multiply by 6/5: 6a+6p = 97.0 * 6/5 = 117.00 note that we didn't have to know how much each item cost. another way: multiply 1st by 2 and 2nd by 3 to get 6a+4p = 102 6a+9p = 139.50 subtract 1st from 2nd to get 5p... Friday, December 28, 2012 at 10:36am maths geometry 1st --- x 2nd --- x+2 3rd ---- x+4 solve for x: x + x+2 + x+4 = 153 Thursday, December 27, 2012 at 10:38am 5th grade im in fifth grade to Wednesday, December 19, 2012 at 10:13pm if 40 percent of my grade is at a 97 percent average and 40 percent of my grade is at a 92 percent average and my final test is worth 20 percent of the grade then what do i have to score on my final test to have a grade of 91.5 percent or better? Tuesday, December 18, 2012 at 10:21pm You offer to do the dishes for your family for the next month.This month has only 31 days. You suggest that they can pay you in one of three ways: A. $0.50 each day.(I already did this;=$15.50 a month. B.$0.10 the first day,$0.20 the second day,$0.30 the third day,and so on. C... 
Monday, December 17, 2012 at 6:32pm Math ave grade Let her grade on the final be x .5x + .1(92) + .4(76+82+83)/3 = 87.5 solve for x (I get 92.3) Monday, December 17, 2012 at 8:35am Math ave grade a student scores 76,82,83 on the first three test, she has a 92 homework averge. if the final exam is 50% of grade and homework is 10% and the test average (nit including the final) is 40% then what grade should she get on the final exam to get a 87.5 in the class? Sunday, December 16, 2012 at 11:59pm calculus ..&gt;steve y = 2/3 x^3 + 5/2 x^2 - 3x y' = 2x^2 + 5x - 3 y'' = 4x + 5 inflection where y''=0 -- at x=-5/4 intercepts at (0,0) and at x = 3/8 (-5±√57) Looks like you need to review the meanings of 1st and second Saturday, December 15, 2012 at 12:49pm 5th grade math (word problems) I didn't see an answer from yesterday, so resending. Also, is there a website to explain how to solve word problems - like key phrases to look for (ex: 'in all' - add; 'what's left' - subtraction)? Here's question from yesterday: sorry,but I don'... Wednesday, December 12, 2012 at 7:01am 5th grade math (word problems) sorry,but I don't understand word problems - like what formula/expression to use. Here's the 1st problem: Rob practiced 3/5 of his music in 25minutes. About how long does it take to practice all of his music? I know that Rob has 2/5 left of music to practice. So, do I ... Tuesday, December 11, 2012 at 4:22pm The chart shows the results of 40 students in 1st period classrooms who met the standard on the 6th grade math assessment Peacock 15 Salas 12 Tiegs 13 If these numbers were the same for the remaining periods and if 200 students tested how many students from Ms. Salas's ... Monday, December 10, 2012 at 7:42pm 7th grade math Ms. Sue please I'm from connections and in 7th grade ._. Sunday, December 9, 2012 at 5:18pm The alcohol in the first arm floats on top of the denser glycerin and pressures it down distance h from the initial level. 
As a result, the glycerin rises by the distance h in the second arm. Due to the equilibrium condition, the point of the two liquids contact in the 1st arm... Saturday, December 8, 2012 at 5:13am Business Math P1 = Po(1+r)^n. P1 = Principal after 1st 3 years. Po1 = $15,000 = Initial deposit @ beginning of the 1st 3 years. r = (9%/2)/100% = 0.045 = Semi-annual % rate expressed as a decimal. n = 2Comp/yr * 3yrs = 6 Compounding periods. Solve the given Eq and get: P1 = $19,533.90. P2... Friday, December 7, 2012 at 7:18pm 7th grade math Ms. Sue please Yeah I think it is. You are in connections? You are in 7th grade? I am. Thursday, December 6, 2012 at 1:11pm The height of the first stone is h₀=v₀²/2g=27²/249;9.8 = 37.2 m. v=v₀-gt, v=0 => the time of the upward motion is t= v₀/g =27/9.8=2.8 s. The 1st stone was falling down during t₁=5.5-2.8 = 2.7 s, when the 2nd stone started. ... Wednesday, December 5, 2012 at 3:28pm math 6th grade Equation of a line that divides the first and third quadrants in half Being 6 th grade it could be the beginning of function If x = 1 , then y = 1 If x= 2 , then y = 2 Tuesday, December 4, 2012 at 11:56pm adult ticket --- $x child ticke ---- $y 3x + 2y = 80.88 2x + 3y = 76.47 1st times 3 ---> 9x + 6y = 242.64 2nd times 2 --> 4x + 6y = 152.94 subtract them: 5x = 89.7 x = 17.94 in 1st 3(17.94) + 2y = 80.88 2y = 27.06 y = 13.53 adult ticket is $17.94 child's ticket is 13... Sunday, December 2, 2012 at 9:50pm possessive form of nouns # 2 is wrong. it's the 1st one. just letting you know :) Thursday, November 29, 2012 at 4:13pm 1st digit is obviously 1 so, the other digits add to 17. Only 8 and 9 can do that the number is even, so the last digit must be 8. 198 Wednesday, November 28, 2012 at 5:43pm math - Ms. Sue (9/2 + 1)/(-2 + 5/3) = 11/2 / -1/3 = -33/2 looks like you lost a - sign somewhere. The 2nd point is above and to the left of the 1st, so the line has a negative slope. 
Friday, November 23, 2012 at 6:45pm Give the value of the 1st ionization energy of beryllium (Be). Express your answer in Joules (J/atom). Sunday, November 18, 2012 at 6:52am Give the value of the 1st ionization energy of beryllium (Be). Express your answer in Joules (J/atom). Saturday, November 17, 2012 at 7:20am P(w) = 3/6 p(ww) = 3/6 * 2/5 = 1/5 so, the chance that 2 women in a row are chosen first is 1/5. That is the chance that no man will be interviewed 1st or 2nd. Friday, November 16, 2012 at 8:27pm Suppose that a float variable called score contains the overall points earned for this course and a char variable grade contains the final grade. The following set of cascaded if-then-else pseudocode statements determines your final grade Monday, November 12, 2012 at 6:58pm Social Studies Is the 1st one true not false? Monday, November 12, 2012 at 6:22pm Hannah's grade on her last math test was 4 points more than Mark's grade. Write an expression for Hannah's grade, using m as a variable. Evaluate the expression if m=92. Sunday, November 11, 2012 at 9:48pm A) can be explained by Newton's 1st law. B) can be explained by Newton's third? Saturday, November 10, 2012 at 6:21pm ok, go with y = 3x+5 into the 1st x(3x+5) = 2 3x^2 + 5x - 2 = 0 ((3x-1)(x+2) = 0 x = 1/3 or x = -2 --- > you are correct if x = 1/3, y= 3(1/3) + 5 = 6 if x = -2 , y = 3(-2) + 5 = -1 so x=1/3, y=6 OR x=-2, y =-1 I don't know how you got your values ??? Friday, November 9, 2012 at 3:27pm Math Statistics Kesha's mean grade for the first two terms was 74. What grade must she get in the third term to get an exact passing average of 75? Tuesday, November 6, 2012 at 4:15am Math Statistics Kesha's mean grade for the first two terms was 74. What grade must she get in the third term to get an exact passing average of 75? 
Tuesday, November 6, 2012 at 4:14am kindergarten-fifth grade= a total of 6 grade levels 6 grade levels times 3 classes for each grade level= a total of 18 classes for the whole school 18 classes times 27 students per classroom= 486 the final answer is: about 486 students Monday, November 5, 2012 at 7:52pm X = 1st no. 5x/8 =2nd no. X + 5x/8 = 195. Multiply both sides by 8 and solve for x. Then multiply 5/8 by The value of x. Sunday, November 4, 2012 at 7:18pm BobPursly& Writeteacher please view Yes I did but thats not the whole essay it's only the introduction and 1st paragraph, I have 6 paragraphs all together that includes my introduction and conclusion. Sunday, November 4, 2012 at 7:12pm In a paragraph, use the following words: Inertia, Acceleration, Kinetic Energy, and Newton's 1st Law to describe the physics at work during a car crash Sunday, November 4, 2012 at 5:45pm In a paragraph, use the following words: Inertia, Acceleration, Kinetic Energy, and Newton's 1st Law to describe the physics at work during a car crash Sunday, November 4, 2012 at 5:44pm Find three no. Such that,the 2nd is twice the 1st,the 3rd is three times the first and their sum is 180. Ans is 30,60,90 Friday, November 2, 2012 at 10:02pm the manager of a starbucks store plans to mix A grade coffee that cost 9.50 per pound with grade B coffee that cost 7.00 per pound to create a 20 pound blend that will sell for 8.50 a pound. how many pounds of each grade coffee are required? Tuesday, October 30, 2012 at 8:32am 7th grade math help asap plz i'm in the 6th grade and i know the answere to that!! Thursday, October 25, 2012 at 9:26pm there are 4 choices for each digit. So there are 4*4 = 16 pairs possible. Now, if there are no repeats, then there are 4 choices for the 1st, but only 3 for the second, making 4*3 = 12 possible Tuesday, October 23, 2012 at 8:57pm business statics a random sample of the grade of 78 students is taken. 
what is the probability that the average grade will be between 80.3% and 82.3%? Sunday, October 21, 2012 at 8:21pm the 1st term of an ap is twice the common difference find in terms of the 5th term of the ap Saturday, October 20, 2012 at 11:51am Algebra Grade 7 Grade 7 ???? Thursday, October 18, 2012 at 5:02am Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
RE: st: option predict te for cost frontier in stata 8.2

From: "Dev Vencappa" <lexdvv@nottingham.ac.uk>
To: <statalist@hsphsun2.harvard.edu>
Subject: RE: st: option predict te for cost frontier in stata 8.2
Date: Mon, 20 Jun 2005 14:06:24 +0100

Hi Scott,

Thanks for this. That's helpful.

>>> smerryman@kc.rr.com 06/20/05 1:25 pm >>>
Yes, I believe Stata calculates efficiency as

TE_i = E(Y_i|U_i, X_i) / E(Y_i|U_i=0, X_i)

where Y_i is the production or actual cost of the i-th firm. In a production frontier, efficiency will range from 0 to 1, while efficiency in a cost frontier ranges from 1 to infinity. This is how Coelli defined efficiency (for instance, in "A Guide to FRONTIER Version 4.1" (1996)).

However, Kumbhakar and Lovell (2000, "Stochastic Frontier Analysis") do define cost efficiency as

CE_i = E(Y_i|U_i=0, X_i) / E(Y_i|U_i, X_i)
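A toy numeric illustration of the reciprocal relationship between the two definitions (my own numbers, not Stata output):

```python
# Suppose a cost frontier's predicted efficiency is 1.25 for some firm,
# i.e. actual cost is 25% above the frontier (Coelli-style, >= 1).
te_coelli = 1.25

# Cost efficiency in the Kumbhakar-Lovell sense (minimum cost / actual
# cost) is just the reciprocal, and lies between 0 and 1:
ce = 1 / te_coelli
print(ce)
```

So a firm 25% above its cost frontier is 80% cost-efficient under the second definition.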
I suppose Stata is simply > computing these scores in a similar way as for the production frontier, > i.e. actual output/maximum output, so actual cost/minimum cost instead of > minimum cost/actual cost? > Thanks in advance for any guidance on this. > Dev * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ This message has been checked for viruses but the contents of an attachment may still contain software viruses, which could damage your computer system: you are advised to perform your own checks. Email communications with the University of Nottingham may be monitored as permitted by UK legislation. * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
Here's the question you clicked on:

Rewrite y = x^2 + 14x + 29 in general form

can you make whole square on the rhs?

It's already in general term right? so what do i do now it doesnt say solve it just says put it in general form
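If what was wanted is vertex form (which the completing-the-square hint above points toward), the identity is x^2 + 14x + 29 = (x + 7)^2 - 20; a quick numeric check of that identity (mine, not from the thread):

```python
# Verify x^2 + 14x + 29 == (x + 7)^2 - 20 on a range of integer points.
# Two quadratics that agree on 3 or more points are identical, so this suffices.
for x in range(-10, 11):
    assert x**2 + 14*x + 29 == (x + 7)**2 - 20
print("identity holds")
```

Expanding by hand gives the same thing: (x + 7)^2 - 20 = x^2 + 14x + 49 - 20 = x^2 + 14x + 29.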
{"url":"http://openstudy.com/updates/513ca313e4b01c4790d25f0f","timestamp":"2014-04-18T16:07:49Z","content_type":null,"content_length":"32325","record_id":"<urn:uuid:42f3f309-2e19-4362-bc26-a5e660437f3f>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Referring to elements of a varlist in a program

Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.

From   Dani Tilley <tilleydani@yahoo.com>
To     stata <statalist@hsphsun2.harvard.edu>
Subject   st: Referring to elements of a varlist in a program
Date   Thu, 8 Jul 2010 20:57:05 -0700 (PDT)

I'm trying to write a program that will automate some commands, but to increase its usability I need to include 2 variables in the syntax. I need to figure out a way to refer to user-specified variables and manipulate them in the body of the program. For instance, if the user were to specify two variables -example var1 var2-, my program would find the mean for each and divide the sum of the means by the standard deviation of the second variable. (This is just an exercise, it doesn't mean anything.)

I've declared the following:

program example, rclass
    version 10.0
    syntax varlist(min=2 max2)
    loc m=0
    foreach v of varlist `varlist' {
        qui su `v', mean
        loc m=`m'+`r(mean)'
    }
    qui su varlist[2]        ///I don't know how to refer to this
    local sd = `r(sd)'
    local final = `m' / `sd'
    drop m sd
    return local final = `final'
end

Any help is appreciated.

DF Tilley

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2010-07/msg00453.html","timestamp":"2014-04-18T23:39:19Z","content_type":null,"content_length":"8295","record_id":"<urn:uuid:af8d7b76-fad4-429d-a9d2-1ac01f2e031c>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
What Is Problem Solving?

Copyright © University of Cambridge. All rights reserved.

In this article I model the process of problem solving and thinking through a problem. The focus is on the problem solving process, using NRICH problems to highlight the processes. Needless to say, this is not how problems should be taught to a class!

What is problem solving? What is the difference between the solution to a problem and the problem solving process? How might we break down problem solving into a series of different steps? What questions and strategies might we use to solve a tricky mathematical problem?

In this activity we shall analyse the problem solving process by looking at three NRICH problems. We start with System Speak and Sums of Squares. As you strive to solve them, can you notice the sorts of problem solving steps that you take at each stage? In this activity, being 'stuck' is a good thing! When you are stuck, what sorts of things do you try in order to make progress in the problem? Note them down.

Once you have tried the problems, watch these two video clips in which I attempted to solve these problems from start to finish. This shows some of the thought processes I go through when solving problems. Try to focus on the way in which I approached the problem solving process. What questions do I ask? How do I approach the task?

System Speak video (5 mins)
Sums of squares video (7 mins)

[You can download a transcript of the problem solving steps in Word here: Sums of squares transcript; System Speak transcript]

How did my approach relate to yours? Can you see any sense of a 'problem solving structure' emerging? Were any of my approaches sensible? Any not so sensible?

Can you use the ideas you have learned or considered to structure your approach to the more difficult problem Always Two? As you work, make a note of your problem solving process. Even if you do not manage to find the 'answer', you will still be using many problem solving strategies. Can you write these down?

Watch a video of my attempt at solving this problem. Can you identify the key problem solving steps this time? And here's a conversation (audio file) we had about the Always Two problem and the reasons for making a video as part of the support resources.
{"url":"http://nrich.maths.org/6073/index?nomenu=1","timestamp":"2014-04-21T04:57:26Z","content_type":null,"content_length":"6126","record_id":"<urn:uuid:560bd4d0-d935-42f0-a5ff-c64566129085>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: OLS assumptions not met: transformation, gls, or glm as solutions?

From   David Hoaglin <dchoaglin@gmail.com>
To     statalist@hsphsun2.harvard.edu
Subject   Re: st: OLS assumptions not met: transformation, gls, or glm as solutions?
Date   Mon, 17 Dec 2012 07:33:13 -0500

When you plotted the dependent variable against the predictor variables, what patterns of curvature (if any) did you see? You didn't mention the number of observations. If it is large, you may want to use LOWESS to trace smooth curves through those plots. You can also look for curvature in the plots of the studentized residuals against the individual predictor variables, and a plot of those residuals against the predicted values will give you information on the pattern of heteroskedasticity.

Often, transforming the dependent variable helps to straighten the relations between the dependent variable and the predictors, AND it also stabilizes the variability in the dependent variable. It is likely that the variability in the number of minutes spent on the activity increases as the expected number of minutes increases. Two other transformations to consider are the square root and the reciprocal. (If your data were time to complete a task, the reciprocal would transform slowness into fastness.) If the logarithm is the most reasonable choice, it is not necessary to make interpretation more difficult by using the natural log. Use logs base 10 instead. With either base, interpretation is in terms of ratios, which is often not difficult.

After a suitable transformation you may have fewer outliers (or none).
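To make the "interpretation is in terms of ratios" point concrete (an illustrative identity, not part of the original reply): if the fitted model is

\[\log_{10} y = b_0 + b_1 x \quad\Longrightarrow\quad \hat{y} = 10^{b_0}\,(10^{b_1})^{x}\]

then each one-unit increase in x multiplies the predicted y by the factor 10^{b_1}; with natural logs the multiplicative factor is e^{b_1} instead.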
You should be cautious in excluding outliers and, especially, influential observations.

If you included the zeros and used a tobit model, you would still have to do something about curvature and heteroskedasticity.

David Hoaglin

On Mon, Dec 17, 2012 at 5:43 AM, Laura R. <laura.roh@googlemail.com> wrote:
> Dear Stata users,
> I estimated an OLS model with the number of minutes (1-1440) spent on
> an activity on a day as dependent variable. At first sight, the model
> works fine. I receive some interesting results which are robust across
> model specifications. I would like to keep it as it is, but:
> - The regression diagnostics shows that the error terms are not
> normally distributed, but right skewed.
> - In addition, there is heteroskedasticity.
> Excluding outliers and influential cases does not help. Now I can
> think about 4 solutions, but I am not sure when it is justified to
> decide on one of these:
> 1. Keep the model and the variables as they are (but maybe use robust
> standard errors) - is this possible under certain conditions, even if
> I have heteroskedasticity and non-normality of residuals, and when is
> this justified?
> 2. Transform the dependent variable. If I take the ln of the dependent
> variable, the residuals get closer to a normal distribution, and it
> gets closer to homoskedasticity. But then there is the problem of
> interpreting the results.
> 3. Generalised least square model (gls): Use this instead. This is a
> solution to heteroskedasticity, but do the residuals have to be
> normally distributed in gls as well? What other new assumptions of gls
> might cause new problems (pros/cons gls vs. OLS)? And how can I do
> this in Stata? (Somehow with calculating a weight, I think...)
> 4. Generalised linear model (glm): In some sources I read that this
> also accounts for heteroskedasticity, in other sources not. Again,
> what about the normal distribution of residuals here? I heard that glm
> is better than OLS for non-negative dependent variables, is that
> correct? What are other assumptions of gls that could make me still
> prefer OLS? If I used it, and if my dependent variable is
> non-negative, and residuals are right skewed, do I have to "tell" that
> Stata when estimating the model, or can I run it as it is?
> (I quickly ran -glm- already, without any special specifications, and
> the results are the same as from the OLS model.)
> In sum, I need some decision-making support. What is the best thing to
> do in this case?
> One thing that would help is a comparison of assumptions of OLS, gls,
> glm. I am aware of the assumptions of OLS models, but for gls and glm
> I did not find comprehensive lists and explanations.
> It would be great if you could give me hints on what would be a good
> solution. Maybe you know a source explaining when to use which
> solution if OLS assumptions of normality and homoskedasticity are not
> met.
> Laura
> PS: I am aware of the fact that many used Tobit for similar dependent
> variables, including the zeros. My case is different, and for some
> reason I do not want to do this, and I excluded the zeros.

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/faqs/resources/statalist-faq/
*   http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2012-12/msg00573.html","timestamp":"2014-04-19T09:51:21Z","content_type":null,"content_length":"13201","record_id":"<urn:uuid:ff7d1e6d-8d5c-4b9a-a3d0-830557debde6>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
Generating distribution from an ArrayList Author Generating distribution from an ArrayList Hi all, Joined: Oct 26, 2005 Posts: 16 I have a sorted ArrayList of values. I would like to get the distribution of the values. For example: 1 - Say I have 500 values, ranging from 1-100. - I want to break them up into groups, say 10 groups: values 1-10, 11-20, 21-30, etc... - I want the counts of each of the 500 values that fall into each category. For example, 5 of the 500 are valued at 1-10, 20 between 11-20, etc... - However, I do not know the ranges of values in my ArrayList, it could be ranging from 1-30 or 1-200, but I want to break it up into, for example, 10 groups. Does anyone know how to do this? As your ArrayList is sorted the range is from the value of the first entry to the value of the last entry. Joined: Aug 05, 2005 Posts: 3169 You can use some simple maths to work out what values should go into each of your ten groups. 10 To count the number of values in the first group you iterate thru the ArrayList until you reach a value that is outside the range. To count the number of values in the second group you iterate thru the ArrayList starting at the previous end point until you reach a value that is outside the range. Repeat for each of the groups. subject: Generating distribution from an ArrayList
{"url":"http://www.coderanch.com/t/529483/java/java/Generating-distribution-ArrayList","timestamp":"2014-04-19T15:25:02Z","content_type":null,"content_length":"20777","record_id":"<urn:uuid:6651f14e-6735-4e08-a4cf-eb3d735dbf79>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
Readings: Petrucci: Chapter 18

We've seen many ionic compounds. Many are soluble in water and many are not. There is no simple set of rules which we can use to predict which ones will be soluble or not, but some useful trends have been observed:
• Multiply-charged ions are less soluble than singly-charged ions.
• Smaller ions are more soluble than larger ions.
For example,
• cations like NH4^+ and all alkali metal ions are small, singly charged, and are all soluble.
• anions like Cl^-, Br^-, I^-, NO3^- and ClO4^- are all soluble, except the halides with Pb^2+ and Ag^+.
• Most hydroxides are insoluble (except with alkali metals, as in NaOH or KOH).
• Salts of multiply-charged anions CO3^2-, S^2-, PO4^3- are mostly insoluble.
• Salts of singly-charged versions of these are soluble (HCO3^-, H2PO4^-).

Reactions where soluble compounds react to form insoluble ones are called precipitation reactions. The reverse is a dissolution reaction, where solid compounds dissolve upon addition to water.

Since salts dissolve into electrically charged species (as do acids and bases), they fall in the general category called electrolytes. We saw in the case of acids and bases that some dissociate completely into ions and are called strong, while others dissociate only partially and are called weak. This terminology applies to salts as well. A strong electrolyte is any electrolyte (acid, base, salt) that dissociates 100% into ions. A weak electrolyte sets up an equilibrium in water, so that some exists as the original undissociated species and some exists as ions in solution or as undissolved solid.

In reactions involving ions which have the potential to form insoluble compounds, we must consider equilibrium conditions in order to decide whether or not a precipitate (ppt) will form.
CaSO4(s) ⇌ Ca^2+(aq) + SO4^2-(aq)

Recall that CaSO4 is a pure solid and therefore its activity is 1. Since this equilibrium constant is the product of the ion concentrations (solubility products), we use subscript sp for this new value.

Ksp = [Ca^2+][SO4^2-]

Ksp is the solubility product constant, and the right-hand side of this equation is often called the ion product. In general, for the reaction

AxBy(s) ⇌ X A^m+(aq) + Y B^n-(aq)

we have Ksp = [A^m+]^X [B^n-]^Y at equilibrium. (So if s is the molar solubility, then [A^m+] = Xs and [B^n-] = Ys, giving Ksp = (Xs)^X (Ys)^Y.)

We can perform equilibrium-type calculations as we did previously. CaSO4 at 25ºC in water has a solubility of 4.9×10^-3 M, i.e., enough CaSO4 dissolves in water to make a solution which has a nominal concentration of 4.9×10^-3 M. Nominal meaning we pretend that the CaSO4 remains as a molecular unit in water when we quote this number. It doesn't. What is its equilibrium constant Ksp?

        CaSO4(s)     ⇌   Ca^2+(aq)  +  SO4^2-(aq)
I   solid, ignore           0             0
C   x dissolves            +x            +x
E                           x             x

x is the solubility, x = 4.9×10^-3 M
Ksp = [Ca^2+][SO4^2-] = x^2 = (4.9×10^-3)^2 = 2.4×10^-5.

In this next example, we will use a known Ksp value to determine the solubility of a salt. The solubility product constant of PbCl2 is 1.7×10^-5 at 25ºC. What is the solubility of lead(II) chloride in water at 25ºC?

        PbCl2(s)     ⇌   Pb^2+(aq)  +  2 Cl^-(aq)
I   solid, ignore           0             0
C   x dissolves            +x           +2x
E                           x            2x

Ksp = [Pb^2+][Cl^-]^2 = x(2x)^2 = 4x^3 = 1.7×10^-5
x^3 = (1.7×10^-5)/4
x = (4.3×10^-6)^(1/3) = 1.6×10^-2.   Solubility = 1.6×10^-2 M

In this next example, we are going to use equilibrium ideas to determine if a precipitate will form. Let's mix equal volumes of 2.0×10^-4 M AgNO3 with 2.0×10^-4 M NaCl. Does a ppt form? Since equal volumes of the two solutions are added together, the volume doubles and hence each concentration is halved. The only possible reaction between the four ions present here (Ag^+, Na^+, Cl^-, NO3^-) is:

AgCl(s) ⇌ Ag^+(aq) + Cl^-(aq)     (NaNO3 is very soluble.)
Q = [Ag^+][Cl^-] = (1.0×10^-4)(1.0×10^-4) = 1.0×10^-8.
Ksp = 1.7×10^-10.

Thus Q > Ksp; the reaction will proceed to the left, so a ppt will form.

The solubility of an ionic compound in a solution which already contains one of the ions in that compound is reduced. This is the common ion effect. Consider the following:

PbCl2(s) ⇌ Pb^2+(aq) + 2 Cl^-(aq)

If we add some NaCl (or any other soluble chloride), we cause a stress on the equilibrium ([Cl^-] increases). LP states that the equilibrium will shift to the left to try to use up the extra chloride, i.e., more ppt will form.

What is the solubility of PbCl2 in 1.00 M HCl?

        PbCl2(s)     ⇌   Pb^2+(aq)  +  2 Cl^-(aq)
I   solid, ignore           0            1.00
C   x dissolves            +x            +2x
E                           x          1.00+2x

Ksp = [Pb^2+][Cl^-]^2 = 1.7×10^-5
x(1.00 + 2x)^2 = 1.7×10^-5

Now let's use the small-K approximation to simplify this and avoid having to solve a cubic equation. Since K is small, we assume x is small with respect to the initial amounts. Thus we assume that 1.00 + 2x can be approximated by just 1.00. Thus...

x(1.00 M)^2 = 1.7×10^-5
x = 1.7×10^-5 M.

If no HCl or any other chloride had been present, we would have determined the solubility to be 1.6×10^-2 M. The solubility is significantly reduced due to the common ion effect.

The common ion effect is a very useful way to deliberately reduce the solubilities of slightly soluble ions. Consider radium, Ra^2+(aq). This radioactive ion can be separated out of a mixture by adding an anion which will precipitate out with it, such as the sulphate ion.

RaSO4(s) ⇌ Ra^2+(aq) + SO4^2-(aq)
Ksp = [Ra^2+][SO4^2-] = 3.6×10^-11

The solubility of RaSO4 in pure water is (3.6×10^-11)^(1/2) or 6.0×10^-6 M. If we add enough H2SO4 to make a 1.00 M solution, we find

[Ra^2+] = Ksp/[SO4^2-] = (3.6×10^-11)/1.00 = 3.6×10^-11 M.

Many weakly soluble ionic compounds have solubilities which depend on the pH of the solution. A direct example is hydroxides, since the OH^- ion is directly involved in the equilibrium constant.
Other cases of pH dependence may not be quite so simple.

NOTE: pH is commonly defined as -log[H^+]. This is not accurate. It should more accurately be defined as -log(a_H+), i.e., the activity of H^+ should be used.

Zinc hydroxide, Zn(OH)2, has Ksp = 4.5×10^-17. In pure water, the solubility is computed as follows:

        Zn(OH)2(s)   ⇌   Zn^2+(aq)  +  2 OH^-(aq)
I   solid, ignore           0             0
C   x dissolves            +x           +2x
E                           x            2x

Ksp = 4.5×10^-17 = x(2x)^2
x = [(4.5×10^-17)/4]^(1/3) = 2.2×10^-6 M

The resulting pH: [OH^-] = 2x = 4.4×10^-6 M, therefore
pH = 14 - pOH = 14 - (-log(4.4×10^-6)) = 8.64

This pH is the equilibrium pH resulting from dissolving the zinc hydroxide in pure water. LP states that if we stress the system (by changing the pH), the equilibrium will shift to reduce the stress:

If pH < 8.64 (more acidic), then [OH^-] decreases (rxn shifts right to try to produce more). Solubility increases.
If pH > 8.64 (more basic), then [OH^-] increases (rxn shifts left to try to use it up). Solubility decreases.

Now let's start again, but this time we try to dissolve the zinc hydroxide into a solution buffered at pH = 6.0.

[OH^-] = antilog[-(14 - pH)] = antilog[-(14 - 6)] = 1.0×10^-8
Ksp = [Zn^2+][OH^-]^2
[Zn^2+] = Ksp/[OH^-]^2 = 4.5×10^-17/(1.0×10^-8)^2 = 0.45 M (= x).

A much larger solubility than in pure water.

Here's a point! Tooth enamel is largely Ca5(PO4)3OH (hydroxyapatite). What do you think will happen to your teeth if organic acids (say, from the partial digestion of foods rich in sugar by the saliva and biota in your mouth) are allowed to stay in your mouth? (Brush your teeth within 15 minutes of eating even one mouthful.) Similarly, drinking carbonated (carbonic acid) drinks is instantly dangerous to your teeth. The more of them you drink, the less tooth enamel you have, and there is no natural process for building up your teeth once you degrade them.
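The buffered-solution result above generalizes (a side note, not in the original notes): at any fixed (buffered) pH, the zinc concentration a saturated solution can support is

\[s = [\mathrm{Zn}^{2+}] = \frac{K_{sp}}{[\mathrm{OH}^-]^2} \quad\Longrightarrow\quad \log s = \log K_{sp} + 2\,(14 - \mathrm{pH})\]

so each one-unit drop in pH raises the solubility of a di-hydroxide a hundredfold — which is why the pH 6 answer (0.45 M) is so much larger than the pure-water value.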
H2CO3/HCO3^-     pKa = 6.4
HCO3^-/CO3^2-    pKa = 10.3

Any salt containing acidic or basic ions will have a solubility that depends on pH. Take, for example, CaCO3. If the pH is sufficiently low, the HCO3^- reaction will completely use up the carbonate. In the distribution diagram for the carbonic acid system, we see that at pH = 8 there is (nearly) no carbonate left.

(1) CaCO3(s) ⇌ Ca^2+(aq) + CO3^2-(aq)                     Ksp(CaCO3) = 3.36×10^-9
(2) CO3^2-(aq) + H3O^+(aq) ⇌ HCO3^-(aq) + H2O(l)          1/Ka2(H2CO3) = 1/(4.7×10^-11) = 2.2×10^10
Sum: CaCO3(s) + H3O^+(aq) ⇌ Ca^2+(aq) + HCO3^-(aq) + H2O(l)   Ksum

Ksum = Ksp(CaCO3)/Ka2(H2CO3) = 3.36×10^-9 / 4.7×10^-11 = 71.5.

The first thing to notice about this is that the second equilibrium is merely the Ka2(H2CO3) equation, written backwards. At around pH = 8 or lower, we can approximate that it proceeds to completion, using up almost all of the carbonate ion (see the distribution diagram above). Thus, we can assume that the concentration [CO3^2-] is negligibly small. Hence, adding the two equations together and cancelling out the CO3^2- is a good approximation. If the pH were lower than, say, 5, then we could say the same for the Ka1(H2CO3) equilibrium.

From the summed equation we see that, according to Le Châtelier, if we add acid to the solution at equilibrium, the overall equilibrium will shift right to use up some of the extra acid. This will cause the solubility of the CaCO3 to increase.

Example: Calculate the solubility of CaCO3 in a buffered solution of pH = 8.

Since the second equilibrium (above) goes to completion, we can use the summed equilibrium.

     CaCO3(s)  +  H3O^+(aq)  ⇌  Ca^2+(aq)  +  HCO3^-(aq)  +  H2O(l)
I                 1×10^-8          0             0
C      buffered: no change in [H3O^+]           +x            +x
E                 1×10^-8          x             x

Ksum = 71.5 = x·x/(1×10^-8), so x = 8.5×10^-4.

Another example: In this one, we cannot assume that the intermediate concentrations are negligibly small. We have to deal with both equilibria in full.
This complicates the mathematics, as we will see below.

What is the solubility of CaF2 in a solution buffered at:
A: pH = 5.0?   B: pH = 3.0?   C: pH = 1.0?

(1) CaF2(s) ⇌ Ca^2+ + 2 F^-       Ksp = 3.45×10^-11
(2) HF ⇌ H^+ + F^-                Ka = 6.3×10^-4   (pKa = 3.2)

A. At pH = 5, we can see from the diagram (or by comparing the pKa to the pH) that essentially all the F^- from the dissolution of the CaF2 remains in solution unchanged. Therefore, this particular case is just a simple Ksp question. No further calculations necessary.

     CaF2(s)  ⇌  Ca^2+  +  2 F^-
I                  0        0
C                 +x       +2x
E                  x        2x

Ksp = 3.45×10^-11 = x(2x)^2 = 4x^3, so x = 2.1×10^-4.
The solubility of CaF2 at pH = 5 (or higher) is 2.1×10^-4 M.

B. At pH = 3, we can see from the distribution diagram that some of the F^- that came from the CaF2 will be protonated into HF. The ratio of HF to F^- (read off the diagram, or calculated using the Henderson–Hasselbalch relation)

\[\frac{[\mathrm{HF}]}{[\mathrm{F}^-]} = 10^{\,\mathrm{p}K_a - \mathrm{pH}} = 10^{\,3.2-3.0} \approx 1.56\]

is approximately 1.56 : 1, i.e., [HF] = 1.56[F^-]. Thus, we must deal with the two equilibria together, with no assumptions possible.

Now, according to the first equilibrium, the total amount of fluorine produced is two times the amount of CaF2 dissolved, or twice the amount of Ca^2+ produced:

F_total = 2[Ca^2+]

The F^- that comes from the CaF2 does not all stay in that form; some of it becomes HF. The total amount of fluorine, however, remains the same and can be represented by

F_total = [HF] + [F^-]

[HF] + [F^-] = 2[Ca^2+]
(1.56 + 1)[F^-] = 2[Ca^2+]
[F^-] = 0.781[Ca^2+]

If s is the solubility, then [Ca^2+] = s.

Ksp = [Ca^2+][F^-]^2 = s(0.781s)^2 = 3.45×10^-11
s = 3.8×10^-4

Obviously, at this lower pH the solubility of the CaF2 is higher.

C. At pH = 1, we can assume that all the F^- is now converted into HF, and add equations 1 and 2 together:

(a) CaF2(s) ⇌ Ca^2+ + 2 F^-       Ksp = 3.45×10^-11
(b) HF ⇌ H^+ + F^-                Ka = 6.3×10^-4
(c) = a - 2b:  CaF2(s) + 2 H^+ ⇌ Ca^2+ + 2 HF
    K = 3.45×10^-11 × 1/(6.3×10^-4)^2 = 8.7×10^-5

So now, our ICE table is made using this new final equation.
     CaF2(s)  +  2 H^+  ⇌  Ca^2+  +  2 HF
I                 0.1         0       0
C    (buffered: no change)   +x     +2x
E                 0.1         x      2x

and we now have (buffered at pH = 1 means the H^+ concentration doesn't change):

K = [Ca^2+][HF]^2/[H^+]^2 = x(2x)^2/(0.1)^2 = 8.7×10^-5
x = 6.0×10^-3

We use these ideas of equilibrium and the application of LP to selectively alter the solubilities of ionic compounds in the qualitative analysis labs. There are many industrial applications where it is important to control the solubility of metal sulfides. We can easily adjust the solubility of these ions using pH, such that we can selectively precipitate metal sulfide solids from a mixture and effectively separate the metal ions as we filter off the precipitates in turn.

Metal(II) sulfides (metal has oxidation state +2; for example, M = Zn^2+, Fe^2+, ...):

1) MS(s) ⇌ M^2+(aq) + S^2-(aq)
2) S^2-(aq) + H2O(l) ⇌ HS^-(aq) + OH^-(aq)

However, the S^2- ion is a strong base (Kb ~ 10^5) and will react immediately to form HS^- and a hydroxide ion. So the concentration of S^2- in solution is negligible, and therefore the actual dissolution process can be approximated as the sum of reactions 1 and 2 above:

Sum: MS(s) + H2O(l) ⇌ M^2+(aq) + HS^-(aq) + OH^-(aq)

Thus, the true Ksp for the dissolution of a metal(II) sulfide is

Ksp = [M^2+][HS^-][OH^-]

Here we see that the addition of acid will use up OH^- and hence shift the summed equilibrium to the right, thus dissolving more of the salt (MS). Since the solubility is higher in acid solution and quite low in basic solution, it is often more convenient (and conventional) to rewrite the equation for the dissolution in an acidic solution. We can use Kw = [H3O^+][OH^-] to do this:

MS(s) + 2 H3O^+(aq) ⇌ M^2+(aq) + H2S(aq) + 2 H2O(l)

We call such an equilibrium constant Kspa, for solubility-product constant in acid. We can determine the solubility s = [M^2+] = [H2S]. We can see that as the pH is lowered (higher H3O^+ concentration), the solubility of the metal sulfide increases.
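With s = [M^2+] = [H2S], the Kspa expression rearranges to a convenient working formula (this is just algebra on the definition above, not an additional assumption):

\[K_{spa} = \frac{[\mathrm{M}^{2+}][\mathrm{H_2S}]}{[\mathrm{H_3O^+}]^2} = \frac{s^2}{[\mathrm{H_3O^+}]^2} \quad\Longrightarrow\quad s = [\mathrm{H_3O^+}]\,\sqrt{K_{spa}}\]

so the solubility scales linearly with [H3O^+]: dropping the pH by one unit raises s tenfold.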
For example:

                                       pH = 3          pH = 1
Kspa(FeS) = 6×10^2         ==>      s = 0.024        s = 2.4
Kspa(ZnS; wurtzite) = 3×10^-2 ==>   s = 1.7×10^-4    s = 1.7×10^-2

If we add acid slowly, the FeS will dissolve first (pH ≈ 3-4), since its solubility is larger for a given [H^+]. As we continue to add acid, the ZnS will eventually dissolve as well (pH → -1).

If we had a solution containing Zn^2+ and Fe^2+ (both at 0.10 M), we could selectively precipitate the Zn by buffering the pH to 2.38. At that pH, the solubility of FeS is exactly 0.1 while that of the ZnS is 7.0×10^-4. So, assuming [H2S] = 0.1 M, at that pH the ZnS would be mostly in the solid form but the FeS would be just soluble. If the pH were to rise above 2.38, or if the amount of H2S were increased by even the tiniest amount, then some FeS would start to form ppt as well.

There are many methods of adding reagents to a mixture of ions to selectively separate out the individual components. Various types of reactions are covered in the Qualitative Analysis lab.

EXAMPLE: You have a sample containing both iron and zinc ions, both at a concentration of 0.10 M. The initial pH of the solution is 0, i.e., [H^+] = 1.0 M. To what pH must you change the solution to get maximum separation of the iron and zinc ions, if the H2S nominal concentration is also 0.10 M?

First, the words "maximum separation" mean we want to precipitate one of the ions as a salt while leaving the other ion in solution. Since it is impossible to completely separate the ions, we look for the conditions that give us the best outcome possible. This will occur when one of the ions is just barely in solution (at Ksp but with no solid yet formed), while the other is already mostly precipitated.

We will likely be working in the low pH range, since at high pH the solubilities of both metal sulfides are so low that both would precipitate. Hence, we'll use the Kspa setup.
Since the concentrations of the ions are fixed by the experimental conditions, we need only specify the equilibrium conditions as having a single unknown, [H3O^+] = x.

     MS(s)  +  2 H3O^+(aq)  ⇌  M^2+(aq)  +  H2S(aq)  +  2 H2O(l)
E                  x               0.10        0.10

Using the defined equilibrium values, we can substitute into our Kspa equation:

\[K_{spa}=\frac{[\mathrm{M}^{2+}][\mathrm{H}_2 \mathrm{S}]}{x^2}\]

For FeS, we get

\[6\times 10^2=\frac{[0.10][0.10]}{x^2}\]
\[x=4.1\times 10^{-3}\]

For ZnS, we get

\[3\times 10^{-2}=\frac{[0.10][0.10]}{x^2}\]
\[x=0.58\]

To precipitate either or both of these ions, we need to lower the [H^+] by slowly adding a base, for example NaOH. Clearly, the Fe^2+ will precipitate at a lower concentration of H^+ than will the Zn^2+. So as we lower the concentration of the acid through 0.58, the Zn^2+ will begin to precipitate. When we reach a value of [H^+] = 4.1×10^-3, most of the Zn^2+ will have already precipitated and none of the Fe^2+. At this point, one more drop of base would make the FeS start to precipitate, so we don't go there. We have reached the point of maximum separation.

Complex Ion: An ion consisting of a metal ion surrounded by ligands.

Ligand: A molecule or ion having a lone pair of electrons which can be 'donated' to a metal ion to form a covalent bond. Some common ligands include H2O, NH3, Cl^-, CN^-.

Coordination Number: The number of ligands attached to a central cation.

Metal ions in solution form complexes by covalently bonding to some number of ligands. The bond is a special kind of bond where the ligand donates one or more of its lone pairs of electrons to one of the empty orbitals in the d-shell of the metal ion. These interactions can almost always be written as an equilibrium, with all the requisite properties of equilibrium being valid. The reactions are always written so that one mole of the complex is the only product. Thus, these particular equilibrium constants are called formation constants or stability constants.
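In general (a schematic form of the definitions just given, for a metal ion M^n+ binding x ligands L), the overall formation equilibrium and its constant are

\[\mathrm{M}^{n+} + x\,\mathrm{L} \;\rightleftharpoons\; \mathrm{ML}_x^{\,n+}, \qquad K_f = \frac{[\mathrm{ML}_x^{\,n+}]}{[\mathrm{M}^{n+}][\mathrm{L}]^x}\]

with the overall K_f equal to the product of the stepwise formation constants.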
For example, if we mix silver ions with ammonia, we can observe the following two complexation reactions:

1) Ag^+(aq) + NH3(aq) ⇌ Ag(NH3)^+(aq)              Kcx1 = 2.1×10^3
2) Ag(NH3)^+(aq) + NH3(aq) ⇌ Ag(NH3)2^+(aq)        Kcx2 = 8.2×10^3

where Kcx1 and Kcx2 are the formation constants for the two complexes Ag(NH3)^+(aq) and Ag(NH3)2^+(aq), respectively.

If a large enough excess of NH3 is present, we can consider that the two reactions essentially go to completion and hence the intermediate concentration of Ag(NH3)^+(aq) is very small (we check this later in the calculation). The overall reaction would then be

Ag^+(aq) + 2 NH3(aq) ⇌ Ag(NH3)2^+(aq)     Kf = Kcx12 = Kcx1 × Kcx2 = 1.7×10^7

NOTE: If we add two reactions together as above, their equilibrium constants can be multiplied to determine the overall equilibrium constant.

These formation constants are commonly given a special symbol Kn or βn (n = coordination number) to represent the fact that the equilibrium is the formation of an n-coordinate complex. Thus, Kcx12 could also be called β2, since it represents the complexation of two ligand ammonia molecules on a silver cation. Personally, I prefer not to use the βn notation, since it can be too easily confused with other symbols. The subscript cx is my own notation, just to remind myself that these are complexation reactions.

What would happen in a solution prepared by mixing 100.0 mL of 2.0 M NH3 with 100.0 mL of 1.0×10^-3 M AgNO3?

Let's for a minute consider the species which will be present in the solution: Ag^+, NO3^-, NH3, and of course H2O. Possible reactions are the two complexations mentioned above and the acid-base interaction of the ammonia with water:

NH3(aq) + H2O(l) ⇌ NH4^+(aq) + OH^-(aq)     Kb = 1.8×10^-5

The extent of reaction of this equilibrium is very insignificant when compared to the complexation reactions, so we will ignore it. Hence, the only chemical system of interest is the complexation equilibria 1) and 2) above.
Since there is an excess of ammonia and K is large, we can start with the assumption of 100% reaction and then work backwards through the individual reactions to determine the actual intermediate concentrations [Ag^+] and [Ag(NH3)^+].

Starting with the second equilibrium, we can find the value for [Ag(NH3)^+]. The assumption of 100% reaction allows us to set [Ag(NH3)2^+] = 1.0×10^-3 ÷ 2 = 5.0×10^-4 M.

     Ag(NH3)^+(aq)  +  NH3(aq)  ⇌  Ag(NH3)2^+(aq)
I          0             1.0          5.0×10^-4
C         +x             +x              -x
E          x           1.0+x        5.0×10^-4 - x

We now assume that x is small c.f. 5.0×10^-4 (which means it's also small compared to the ammonia concentration), so we replace 1.0+x by 1.0 and 5.0×10^-4 - x by 5.0×10^-4.

\[K_{cx2}=\frac{[\textrm{Ag}(\textrm{NH}_3)_2^+]}{[\textrm{Ag}(\textrm{NH}_3)^+][\textrm{NH}_3]} =8.2\times10^3=\frac{5.0\times10^{-4}}{x\times 1.0}\]

x = 6.1×10^-8, so [Ag(NH3)^+] = 6.1×10^-8 M
Check assumption: 6.1×10^-8 << 5.0×10^-4 (good)

Now we can use this value of [Ag(NH3)^+] to calculate [Ag^+] using the first equilibrium.

     Ag^+(aq)  +  NH3(aq)  ⇌  Ag(NH3)^+(aq)
I        0          1.0         6.1×10^-8
C       +x          +x             -x
E        x        1.0+x       6.1×10^-8 - x

We assume that x is small here to simplify the calculation, i.e., 1.0+x ≈ 1.0.

\[K_{cx1}=\frac{[\textrm{Ag}(\textrm{NH}_3)^+]}{[\textrm{Ag}^+][\textrm{NH}_3]} =2.1\times10^3=\frac{6.1\times10^{-8}}{x\times 1.0}\]

x = 2.9×10^-11 M = [Ag^+]
Check assumption: 2.9×10^-11 << 6.1×10^-8 (good).

Complex-ion equilibrium calculations can be relatively simple if the ligand is in large enough excess, even though the whole process looks a bit messy at first.

Complex Ions and Solubility

Because we can use complexation reactions to 'tie up' metal ions in water, we can use them to increase the solubility of metal-ion salts. For example, silver chloride is weakly soluble in water but quite readily dissolves in concentrated ammonia.
AgCl(s) ⇌ Ag^+(aq) + Cl^-(aq)   K[sp] = 1.6×10^-10
Ag^+(aq) + NH[3](aq) ⇌ Ag(NH[3])^+(aq)   K[cx1] = 2.1×10^3
Ag(NH[3])^+(aq) + NH[3](aq) ⇌ Ag(NH[3])[2]^+(aq)   K[cx2] = 8.2×10^3

In this case, we cannot increase the solubility by adding acid as we did in previous examples because Cl^- is a very weak base ('very weak base' means 'not a base' for our purposes). In fact, since the ammonia that is complexing with the Ag^+ is a base, adding acid will decrease the solubility: the acid will use up some of the ammonia, thereby releasing the silver ions tied up in the complex. If we consider there to be an excess of ammonia, then we can assume these three reactions go essentially to completion. Thus, we write an overall reaction which is the sum of the three:

AgCl(s) + 2 NH[3](aq) ⇌ Ag(NH[3])[2]^+(aq) + Cl^-(aq)

K for this reaction is the product of the three equilibrium constants:

K = K[sp]K[cx1]K[cx2] = 1.6×10^-10 × 2.1×10^3 × 8.2×10^3 = 2.8×10^-3

Let's use this K to calculate the solubility of AgCl in an ammonia solution, in this example a 10.0 M ammonia solution.

AgCl(s) + 2 NH[3](aq) ⇌ Ag(NH[3])[2]^+(aq) + Cl^-(aq)
I:  10.0      | 0   | 0
C:  -2x       | +x  | +x
E:  10.0-2x   | x   | x

\[K=\frac{[\textrm{Ag}(\textrm{NH}_3)_2^+][\textrm{Cl}^-]}{[\textrm{NH}_3]^2} =2.8\times10^{-3}=\frac{x\times x}{(10.0-2x)^2}\]

Take the square root of both sides and solve: x = 0.48 M (x is the solubility in 10 M ammonia).

In pure water we can quickly calculate the solubility using the original K[sp] = 1.6×10^-10.

AgCl(s) ⇌ Ag^+(aq) + Cl^-(aq)
I:  0   | 0
C:  +x  | +x
E:  x   | x

K[sp] = x^2  ==>  x = (1.6×10^-10)^1/2 = 1.3×10^-5 M

This is a much smaller solubility than in the NH[3] because, of course, the silver chloride is 'pulled' into solution by the complexing action of the ammonia on the silver ions.

Updated Wednesday, September 05, 2012 12:20:41 PM
Michael J. Mombourquette. Copyright © 1997. Revised: September 05, 2012.
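Both solubility calculations above can be verified numerically. This Python sketch uses the same square-root trick for the ammonia case (variable names are mine):

```python
import math

# Solubility of AgCl (mol/L): pure water vs. 10.0 M ammonia.
# Constants taken from the text.
K_sp  = 1.6e-10
K_cx1 = 2.1e3
K_cx2 = 8.2e3

# Pure water: Ksp = x^2
s_water = math.sqrt(K_sp)             # ~ 1.3e-5 M

# 10.0 M NH3: K = x^2 / (10.0 - 2x)^2, so take the square root
# of both sides and solve the resulting linear equation for x.
K = K_sp * K_cx1 * K_cx2              # ~ 2.8e-3
r = math.sqrt(K)                      # r = x / (10.0 - 2x)
s_nh3 = 10.0 * r / (1.0 + 2.0 * r)    # ~ 0.48 M

print(f"solubility in water:    {s_water:.1e} M")
print(f"solubility in 10 M NH3: {s_nh3:.2f} M")
```

The roughly 40,000-fold increase in solubility is the complexation effect described in the text.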
Results 1 - 10 of 12

, 2004
Cited by 8 (1 self)
Computer-aided verification of embedded systems hinges on the availability of good verification models of the systems at hand. Because of the combinatorial complexities that are inherent in any process of verification, such models generally are only abstractions of the full design model or system specification. As they must both be small enough to be effectively verifiable and preserve the properties under verification, the development of verification models usually requires the experience, intuition and creativity of an expert. We argue that there is a great need for systematic methods for the construction of verification models to move on, and leave the current stage that can be characterised as that of "model hacking". The ad-hoc construction of verification models obscures the relationship between models and the systems that they represent, and undermines the reliability and relevance of the verification results that are obtained. We propose some ingredients for a solution to this problem.

- Embedded System Design and Conformance Checking. International Journal of Parallel Programming
Cited by 5 (2 self)
We propose a framework based on a synchronous multi-clocked model of computation to support the inductive and compositional construction of scalable behavioral models of embedded systems engineered with de facto standard design and programming languages. Behavioral modeling is seen under the paradigm of type inference. The aim of the proposed type system is to capture the behavior of a system under design and to re-factor it by performing global optimizing and architecture-sensitive transformations on it. It allows to modularly express a wide spectrum of static and dynamic behavioral properties and automatically or manually scale the desired degree of abstraction of these properties for efficient verification. The type system is presented using a generic and language-independent static single assignment intermediate representation. KEY WORDS: Embedded system design; formal methods; models of computation; program transformation; verification.

Cited by 5 (2 self)
Abstract. Context-Bounded Analysis has emerged as a practical automatic formal analysis technique for fine-grained, shared-memory concurrent software. Two recent papers (in CAV 2008 and 2009) have proposed ingenious translation approaches that promise much better scalability, backed by compelling, but differing, theoretical and conceptual advantages. Empirical evidence comparing the translations, however, has been lacking. Furthermore, these papers focused exclusively on Boolean model checking, ignoring the also widely used paradigm of verification-condition checking. In this paper, we undertake a methodical, empirical evaluation of the three main source-to-source translations for context-bounded analysis of concurrent software, in a verification-condition-checking paradigm. We evaluate their scalability under a wide range of experimental conditions. Our results show: (1) The newest, CAV 2009 translation is the clear loser, with the CAV 2008 translation the best in most instances, but the oldest, brute-force translation doing surprisingly well. Clearly, previous results for Boolean model checking do not apply to verification-condition checking. (2) Disturbingly, confounding factors in the experimental design can change the relative performance of the translations, highlighting the importance of extensive and thorough experiments. For example, using a different (slower) SMT solver changes the relative ranking of the translations, potentially misleading researchers and practitioners to use an inferior translation. (3) SMT runtimes grow exponentially with verification-condition length, but different translations and parameters give different exponential curves. This suggests that the practical scalability of a translation scheme might be estimated by combining the size of the queries with an empirical or theoretical measure of the complexity of solving that class of query.

- 16th International Symposium on Formal Methods FM'2009, 2009
Cited by 4 (1 self)
Explicit state methods have proven useful in verifying safety-critical systems containing concurrent processes that run asynchronously and communicate. Such methods consist of inspecting the states and transitions of a graph representation of the system. Their main limitation is state explosion, which happens when the graph is too large to be stored in the available computer memory. Several techniques can be used to palliate state explosion, such as on-the-fly verification, compositional verification, and partial order reductions. In this paper, we propose a new technique of partial order reductions based on compositional confluence detection (Ccd), which can be combined with the techniques mentioned above. Ccd is based upon a generalization of the notion of confluence defined by Milner and exploits the fact that synchronizing transitions that are confluent in the individual processes yield a confluent transition in the system graph. It thus consists of analysing the transitions of the individual process graphs and the synchronization structure to identify such confluent transitions compositionally. Under some additional conditions, the confluent transitions can be given priority over the other transitions, thus enabling graph reductions. We propose two such additional conditions: one ensuring that the generated graph is equivalent to the original system graph modulo branching bisimulation, and one ensuring that the generated graph contains the same deadlock states as the original system graph. We also describe how Ccd-based reductions were implemented in the Cadp toolbox, and present examples and a case study in which adding Ccd improves reductions with respect to compositional verification and other partial order reductions.

Cited by 2 (0 self)
In this paper we present the specification of a network model in Maude and some primitives for defining simulation strategies. The use of the model is illustrated with a simple HELLO sub-protocol taken from the IETF PIM-DM (Protocol Independent Multicast - Dense Mode) RFC [6], and based on a pseudo-code specification [21]. The network model we present reflects the key aspects of the infrastructure on which typical communication protocols run. The model is designed so that we may execute isolated protocols as well as develop techniques for composing sub-protocols, to model the more complex protocols used in practice. The long-term goal is to support simulation and formal analysis at many levels of detail.

Cited by 1 (0 self)
Abstract. Exploring a graph through search is one of the most basic building blocks of various applications. In a setting with a huge state space, such as in testing and verification, optimizing the search may be crucial. We consider the problem of visiting all states in a graph where edges are generated by actions and the (reachable) states are not known in advance. Some of the actions may commute, i.e., they result in the same state for every order in which they are taken (this is the case when the actions are performed independently by different processes). We show how to use commutativity to achieve full coverage of the states traversing considerably fewer edges.

- In SPIN Workshop on Model Checking of Software, 2004
We propose an algorithm to find a counterexample to some property in a finite state program. This algorithm is derived from SPIN's one, but it finds a counterexample faster than SPIN does. In particular it still works in linear time. Compared with SPIN's algorithm, it requires only one additional bit per state stored. We further propose another algorithm to compute a counterexample of minimal size. Again, this algorithm does not use more memory than SPIN does to approximate a minimal counterexample. The cost to find a counterexample of minimal size is that one has to revisit more states than SPIN. We provide an implementation and discuss experimental results.

, 2003
MAFTIA Workpackage 6 is concerned with the rigorous definition of the basic MAFTIA concepts, and the verification and assessment of the work on dependable middleware.

, 2007
... questions as well as questions about whether components "fit together". To enable an early assessment of compatibility by modelling the system under investigation, the (Unified) Compatibility Modelling Language ((U)CML) was developed at the Technische Universität München. So far, (U)CML has been defined primarily as a static model. Extending compatibility modelling to cover communication between components is the topic of this work. For this purpose, the widely used Message Sequence Charts (MSC) standard is combined with a modelling ...

Abstract. We describe a framework for the automated verification of multi-agent systems which do distributed problem solving, e.g., query answering. Each reasoner uses facts, messages and Horn clause rules to derive new information. We show how to verify correctness of distributed problem solving under resource constraints, such as the time required to answer queries and the number of messages exchanged by the agents. The framework allows the use of abstract specifications consisting of Linear Time Temporal Logic (LTL) formulas to specify some of the agents in the system. We illustrate the use of the framework on a simple example.