content
stringlengths
86
994k
meta
stringlengths
288
619
(Never) Mind your p's and q's: Von Neumann versus Jordan on the Foundations of Quantum Theory Duncan, Anthony and Janssen, Michel (2012) (Never) Mind your p's and q's: Von Neumann versus Jordan on the Foundations of Quantum Theory. [Preprint] Download (541Kb) | Preview In early 1927, Pascual Jordan (1927b) published his version of what came to be known as the Dirac-Jordan statistical transformation theory. Later that year and partly in response to Jordan, John von Neumann (1927a) published the modern Hilbert space formalism of quantum mechanics. Central to both formalisms are expressions for conditional probabilities of finding some value for one quantity given the value of another. Beyond that Jordan and von Neumann had very different views about the appropriate formulation of problems in the new theory. For Jordan, unable to let go of the analogy to classical mechanics, the solution of such problems required the identification of sets of canonically conjugate variables, i.e., p’s and q’s. Jordan (1927e) ran into serious difficulties when he tried to extend his approach from quantities with fully continuous spectra to those with wholly or partly discrete spectra. For von Neumann, not constrained by the analogy to classical physics and aware of the daunting mathematical difficulties facing the approach of Jordan (and, for that matter, Dirac (1927)), the solution of a problem in the new quantum mechanics required only the identification of a maximal set of commuting operators with simultaneous eigenstates. He had no need for p’s and q’s. Related to their disagreement about the appropriate general formalism for the new theory, Jordan and von Neumann stated the characteristic new rules for probabilities in quantum mechanics somewhat differently. Jordan (1927b) was the first to state those rules in full generality, von Neumann (1927a) rephrased them and then sought to derive them from more basic considerations (von Neumann, 1927b). In this paper we reconstruct the central arguments of these 1927 papers by Jordan and von Neumann and of a paper on Jordan’s approach by Hilbert, von Neumann, and Nordheim (1928). We highlight those elements in these papers that bring out the gradual loosening of the ties between the new quantum formalism and classical mechanics. Export/Citation: EndNote | BibTeX | Dublin Core | ASCII/Text Citation (Chicago) | HTML Citation | OpenURL Social Networking: Share | Available Versions of this Item β€’ (Never) Mind your p's and q's: Von Neumann versus Jordan on the Foundations of Quantum Theory. (deposited 29 Apr 2012 15:18)[Currently Displayed] Actions (login required) Document Downloads
{"url":"http://philsci-archive.pitt.edu/9103/","timestamp":"2014-04-19T01:47:52Z","content_type":null,"content_length":"32972","record_id":"<urn:uuid:0ed4b95f-5fc6-4b63-88bd-5fb16aab39a3>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: October 2007 [00984] [Date Index] [Thread Index] [Author Index] Re: Gradient of a List β€’ To: mathgroup at smc.vnet.net β€’ Subject: [mg82614] Re: Gradient of a List β€’ From: Scott Hemphill <hemphill at hemphills.net> β€’ Date: Fri, 26 Oct 2007 05:21:23 -0400 (EDT) β€’ References: <10340059.1193232789565.JavaMail.root@m35> <ffpps9$l54$1@smc.vnet.net> β€’ Reply-to: hemphill at alumni.caltech.edu DrMajorBob <drmajorbob at bigfoot.com> writes: > data = Table[{x + RandomReal[], Sin@x + 0.1 RandomReal[]}, {x, 0, Pi, > 0.1}]; > f = Interpolation[data, InterpolationOrder -> 3]; > {min, max} = data[[Ordering[data][[{1, -1}]], 1]]; > Quiet@Plot[f'[x], {x, min, max}, PlotRange -> All] > I use Quiet because Plot sometimes samples outside the data range and > throws the InterpolatingFunction::dmval message. > Notice, however, the result isn't even close to Cos[x], and it changes > quite a bit if you change the InterpolationOrder. Of course, these problems are because of the noise in both the x and y data values. Since Interpolation insists on passing exactly through the points given, the interpolating function has to wiggle around a lot to fit all the noise. The OP may not have any noise in his independent variables (x,y) and may have little or none in his function values. Still, yours is an interesting problem. One way of handling it would be to interpolate via weighted averages. For example, you could assign a Gaussian weight to all the function values based on how close the x value is to the x coordinates of the data: (* Gaussian centered at x0, with standard deviation sig *) pdf[x_,x0_,sig_] := 1/(Sqrt[2Pi]sig) Exp[-(x-x0)^2/(2sig^2)]; (* Gaussian weighted average of data, using sig = 0.5 *) (* try using other values for sig *) g[x_] = Block[{x0,y0,w}, x0 = data[[All,1]]; (* x-coordinates of data *) y0 = data[[All,2]]; (* y-coordinates of data *) w = pdf[x,#,0.5]& /@ x0; (* weight the x-coordinates *) w /= Plus @@ w; (* Normalize the weights *) w . y0 (* Return interpolated function value *) Now you have a continuous function g[x], and you can plot it as well as g'[x]. (Of course it is inefficient, since it recalculates all the weights every time you call it. You could enter "foo[x_]=g[x];" and then "foo[x]" wouldn't have that problem.) One nice feature of Gaussian weights happens if you have a set of equally spaced data points. If they extend infinitely in both positive and negative directions, or *equivalently* the function values are zero beyond the region of interest, then you can omit the normalization step. (One example is in image processing, where the region beyond the boundaries of the image may be assumed to be black.) Then the Gaussian weighting is equivalent to convolution(1) with a Gaussian kernel. This convolution has some nice properties. For example, it is infinitely differentiable, because the Gaussian is. Also, you can express its derivatives in the same form as the convolution itself, i.e. the convolution of a Gaussian with a set of data points. The OP might be able to use the two-dimensional version of the the Gaussian weighted interpolation above, but Mathematica's built-in polynomial interpolation might work perfectly well. (1) when I speak of convolution with a "data point" (x0,y0), I really mean convolving with the function y0*DiracDelta[x0]. The result is a y0 times a Gaussian centered at x0. Convolution with a collection of data points gives the sum of all the Gaussians. > On Wed, 24 Oct 2007 03:34:28 -0500, olalla <operez009 at ikasle.ehu.es> wrote: > > Hi everybody, > > > > Does anybody know how can I get the "gradient" of a list of points? > > > > My real problem is: > > > > I have a scalar field previously obtained numerically that for a > > given point (xi,yi) takes a value f(xi,yi). What I want to do is an > > estimation of the gradient of this scalar field BUT I haven't got any > > analytical function that expresses my field so I can't use the Grad > > function. > > > > How can I solve this using Mathematica? > > > > Thanks in advance > > > > Olalla, Bilbao UPV/EHU > > > > > -- > DrMajorBob at bigfoot.com Scott Hemphill hemphill at alumni.caltech.edu "This isn't flying. This is falling, with style." -- Buzz Lightyear
{"url":"http://forums.wolfram.com/mathgroup/archive/2007/Oct/msg00984.html","timestamp":"2014-04-17T21:47:03Z","content_type":null,"content_length":"29411","record_id":"<urn:uuid:2902ef1a-d197-4ab1-b526-49bd5cd6b308>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00276-ip-10-147-4-33.ec2.internal.warc.gz"}
Left and right cosets (modern Algebra) April 6th 2013, 06:20 AM #1 Mar 2013 United States Let H be a subgroup of G, and let S={left cosets of H in G}, and T={Right cosets of H in G}. Now Define f: S-->T, by the following function f(aH)=H(a^-1) Prove that f is well define!!! All I know is "A well defined set is that if aH = bH then f(aH) = f(bH)" Please Help!!!! Re: Left and right cosets (modern Algebra) To show a relation f is "well defined", you need to verify if x=z, then f(x)=f(z); i.e. f is single valued. So let aH and bH be left cosets of H in G. Then aH=bH iff b^-1a is in H iff Ha^-1=Hb^ -1. This actually shows more: f is a 1:1 function. Furthermore f is onto (you can do this), and so the number of left cosets is the number of right cosets. Re: Left and right cosets (modern Algebra) I started with aH=bH and did left cancellation. I was able to get to (b^-1)aH=H. How can I proceed? Re: Left and right cosets (modern Algebra) Do the same thing on the right: $Ha^{-1}=Hb^{-1}$ iff $Ha^{-1}b=H$ April 6th 2013, 08:11 AM #2 Super Member Dec 2012 Athens, OH, USA April 7th 2013, 07:36 AM #3 Mar 2013 United States April 7th 2013, 01:41 PM #4 Super Member Dec 2012 Athens, OH, USA
{"url":"http://mathhelpforum.com/advanced-math-topics/216806-left-right-cosets-modern-algebra.html","timestamp":"2014-04-18T01:47:09Z","content_type":null,"content_length":"38881","record_id":"<urn:uuid:388907b5-14a4-4738-89e6-a190c344c9e9>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: August 1998 [00055] [Date Index] [Thread Index] [Author Index] Parabolic equation in DSolve β€’ To: mathgroup at smc.vnet.net β€’ Subject: [mg13660] Parabolic equation in DSolve β€’ From: Vitaliy Ababiy <vinni at chair12.phtd.tpu.edu.ru> β€’ Date: Mon, 10 Aug 1998 12:34:03 -0700 (GMT+7) β€’ Sender: owner-wri-mathgroup at wolfram.com Widely know that the solution u[x,t]=1/(2K Sqrt[\pi t])Exp[-x^2/(4K^2 t)] is analitical result for parabolic equation D[u[x,t],t]==K D[u[x,t],{x,2}] u[x,0]== DiracDelta[x] or in general formula u[x,t]=1/(2K Sqrt[\pi t])Integrate[f[xx]Exp[-(x-xx)^2/(4K^2 t)], but there is no way to find it in analytical. I try DSolve[D[f[x, t], t] ==K D[f[x, t], {x,2}], f[x,t], {x,t}] and etc. with any initial and boundary conditions {f[x,t]==DiracDelta[x], f[-Inf,t]==0,[+Inf,t]==0} don't give result or "Partial differential equation may not have a general solution. Try Calculus`DSolveIntegrals` to find special solutions." This way Derivative[0,1][f][x,t] == K Derivative[2,0][f][x,t],f[x,t], {x,t}] do not give result too. There is some result in numeriacal analys (from man) K=1; solution=NDSolve[{D[f[x, t], t] ==K D[f[x, t], {x,2}], f[x, 0] == Exp[-x^2], f[-5, t] == f[5, t]},f, {x, -5, 5}, {t, 0, 5}]; Plot3D[Evaluate[f[x, t] /. First[solution]],{x,-5,5},{t,0,5}, but is any way fo find it result in analytic? | Vitali Ababi | | Physical Technical Department | | Tomsk Polytechnic University | | 634004 Tomsk, Russia |
{"url":"http://forums.wolfram.com/mathgroup/archive/1998/Aug/msg00055.html","timestamp":"2014-04-16T10:23:46Z","content_type":null,"content_length":"35498","record_id":"<urn:uuid:7799fed5-397a-4a60-93fb-6d976b1e0dd2>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
The Financial Toolboxβ„’ product contains several functions to derive and analyze interest rate curves, including data conversion and extrapolation, bootstrapping, and interest-rate curve conversion One of the first problems in analyzing the term structure of interest rates is dealing with market data reported in different formats. Treasury bills, for example, are quoted with bid and asked bank-discount rates. Treasury notes and bonds, on the other hand, are quoted with bid and asked prices based on $100 face value. To examine the full spectrum of Treasury securities, analysts must convert data to a single format. Financial Toolbox functions ease this conversion. This brief example uses only one security each; analysts often use 30, 100, or more of each. First, capture Treasury bill quotes in their reported format % Maturity Days Bid Ask AskYield TBill = [datenum('12/26/2000') 53 0.0503 0.0499 0.0510]; then capture Treasury bond quotes in their reported format % Coupon Maturity Bid Ask AskYield TBond = [0.08875 datenum(2001,11,5) 103+4/32 103+6/32 0.0564]; and note that these quotes are based on a November 3, 2000 settlement date. Settle = datenum('3-Nov-2000'); Next use the toolbox tbl2bond function to convert the Treasury bill data to Treasury bond format. TBTBond = tbl2bond(TBill) TBTBond = 0 730846 99.26 99.27 0.05 (The second element of TBTBond is the serial date number for December 26, 2000.) Now combine short-term (Treasury bill) with long-term (Treasury bond) data to set up the overall term structure. TBondsAll = [TBTBond; TBond] TBondsAll = 0 730846 99.26 99.27 0.05 0.09 731160 103.13 103.19 0.06 The Financial Toolbox software provides a second data-preparation function,tr2bonds, to convert the bond data into a form ready for the bootstrapping functions. tr2bonds generates a matrix of bond information sorted by maturity date, plus vectors of prices and yields. [Bonds, Prices, Yields] = tr2bonds(TBondsAll); Deriving an Implied Zero Curve Using this market data, you can use one of the Financial Toolbox bootstrapping functions to derive an implied zero curve. Bootstrapping is a process whereby you begin with known data points and solve for unknown data points using an underlying arbitrage theory. Every coupon bond can be valued as a package of zero-coupon bonds which mimic its cash flow and risk characteristics. By mapping yields-to-maturity for each theoretical zero-coupon bond, to the dates spanning the investment horizon, you can create a theoretical zero-rate curve. The Financial Toolbox software provides two bootstrapping functions: zbtprice derives a zero curve from bond data and prices, and zbtyield derives a zero curve from bond data and yields. Using zbtprice [ZeroRates, CurveDates] = zbtprice(Bonds, Prices, Settle) ZeroRates = CurveDates = CurveDates gives the investment horizon. ans = Additional Financial Toolbox functions construct discount, forward, and par yield curves from the zero curve, and vice versa. [DiscRates, CurveDates] = zero2disc(ZeroRates, CurveDates,... [FwdRates, CurveDates] = zero2fwd(ZeroRates, CurveDates, Settle); [PYldRates, CurveDates] = zero2pyld(ZeroRates, CurveDates,...
{"url":"http://www.mathworks.nl/help/finance/term-structure-of-interest-rates.html?nocookie=true","timestamp":"2014-04-23T07:32:08Z","content_type":null,"content_length":"37439","record_id":"<urn:uuid:3377db78-50b3-4e1f-b44e-b43567efb2ae>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
Squares and Sticks James LYONSW at UCONNVM.UCONN.EDU Fri Jun 17 11:50:11 EST 1994 * Charles T. Faulkner * * Univ of Tennessee, Knoxville * * (ctfaulkn at utkvx.utk.edu) * ********************************** writes: >I don't believe a name change is necessary, but I wonder why a statistical >puzzle has to take the form of match sticks, tooth picks or something other >than biological entities. Is the field so diverse that we can not ask "what >is the probability that 3 (or more ) organisims from these localities will >share the same attribute (morphological, behavioral etc.). You may ask any quesiton you wish. That is NOT the question I posed. Here's the question in with relevant biological terms included: I've got a real problem, and I hope ot tap into the community brain power. I'ma graduate student interested in furthering attempts in the field of phylogenetics and systematics to derive a believable p-value to attach to a phylogenetic hypothesis (a.k.a., tree). Suppose I have 25 taxa in my study of 100 characters. If I consider each character state which is identical in any given pair as potential (or apparent) synapomorphy, I can build a matrix of scores of apparent synapomorphy. If I consider each pair of taxa in all possible and relevant (i.e., informative) three taxon statements, I can build a matrix of RELATIVE APPARENT SYNAPOMORPHY. Any time a pair of taxa shares a character state to the exclusion of a thirs taxon (any third taxon), the RAS score for that pair for that character is increased by one. If I sum all RAS scores for all characters, I have a total score, or SIGMA RAS, which could be applied in a number of ways (one of which is certainty estimation; another may be phylogeny estimation). My question is: one can know the following: (A,B),D and (A,C),D for a given character. However, given this, one can infer (B,C),D on the basis of that same character. One could make the argument that there are, therefore, fewer degrees of freedom in the RAS matrix than can be actually calculated, and that any manipulation is more suspect than it would be if we knew which RAS scores for which taxa were independent and which were not. Since I hope that I'm more of a biologist than a statistician, I hope that there are those out there who recognize the conundrum I have; the observation that two taxa share a character state is sometimes less surpising than it should be, and sometimes more surprising than it should be. Treating each observation of apparent synapomorphy as independent could cause an underestimated error term. But unless we know the actually history of lineage splitting, we cannot tell when we should be surprised and we we should not be surprised! caveat: by "we" I mean those with interest in the problem More information about the Bioforum mailing list
{"url":"http://www.bio.net/bionet/mm/bioforum/1994-June/009607.html","timestamp":"2014-04-17T11:43:59Z","content_type":null,"content_length":"4968","record_id":"<urn:uuid:bebbfe46-f691-487f-b51c-cfcb7ca7e56b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
Computing determinants of characters up vote 2 down vote favorite For a group $G$ there is a well-defined map $\operatorname{Irr}(G) \to \operatorname{Lin}(G)$ which sends $\chi \mapsto \det \chi$, where $\det \chi$ the linear character of $G$ given by taking the determinant of the representation affording $\chi$. In general, is there a good way to go about computing $\det \chi$ without having to construct a representation affording $\chi$, or are there some conditions on $G$ or $\chi$ under which this can be done easily? add comment 1 Answer active oldest votes If you know $\chi$ then you can write down $\det \chi$ using Newton's identities. This is simply the observation that one can express the determinant of a matrix $\rho(g)$ in up vote 3 down vote terms of traces of powers $\rho(g)^k=\rho(g^k)$ of that matrix. 1 Strictly speaking, knowing $\chi$ in the sense of knowing the function $\chi : G \to \mathbb{C}$ is not enough: you also need to know enough of the multiplication table of $G$ to take powers. – Qiaochu Yuan Nov 27 '12 at 2:33 add comment Not the answer you're looking for? Browse other questions tagged rt.representation-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/114606/computing-determinants-of-characters","timestamp":"2014-04-17T04:44:03Z","content_type":null,"content_length":"49998","record_id":"<urn:uuid:e1045a69-5ea6-4729-9288-74a9408579d3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
longest simple closed geodesic up vote 2 down vote favorite I am obviously not familiar with differential geometry. But some times I do want to know detailed answers to the following questions? May someone help? When will there be a longest simple closed geodesic on a metric space? Of course, this is too general a question. To be more touchable, what is the case for Riemannian manifolds with non-positive Or more generally is there any reference for the the relationship between longest/shortest simple closed curve, diameter, area and curvature? dg.differential-geometry mg.metric-geometry 2 You'll need some kind of assumption of finiteness of topology at least. For example, by stitching together a sequence of pairs of pants properly, you'll be able to construct a complete, connected surface with curvature $K=-1$ that has a sequence of simple closed geodesics whose lengths go off to infinity. – Robert Bryant Jan 22 '13 at 12:38 2 It seems that the question is ill-posed. On the 2-dimensional torus, there is in each homology class a simple closed geodesic of shortest length, and this length goes to infinity with the homology class. Maybe you want to restrict to manifolds of positive curvature? (Their fundamental group is finite.) – ACL Jan 22 '13 at 13:03 Thanks Professor Bryant! You mean if we consider a hyperbolic surface which is topologically infinite (the sum of genus, punctures and ideal boundaries is infinite), then it is possible that the lengths tend to infinity? But what about the case for a closed hyperbolic surface of finite genus $g \geq 2$? Is it follows from the compactness that the lengths of simple closed curves are upper bounded? – silktomath Jan 22 '13 at 13:12 2 @silktomath: The answer to your first question is 'yes'. For the answer to your second (and third) question, which is 'no', see Maryam Mirzakhani's paper Growth of the number of simple closed geodesics on hyperbolic surfaces, Annals of Mathematics, 168 (2008), 97–125, which gives an estimate for how fast the number of such simple closed geodesics grows with given length $L$. Boundedness not only fails, but fails spectacularly on a hyperbolic surface of genus $g>1$. – Robert Bryant Jan 22 '13 at 17:27 @Robert Bryant: Thanks for all comments! I will check the paper you mentioned which might be of further interest. – silktomath Jan 23 '13 at 4:38 add comment 2 Answers active oldest votes The following paper proves that, on any surface homeomorphic to a sphere of unit area, the shortest closed geodesic cannot be longer than $8$: up vote 2 down vote A. Nabutovsky and R. Rotman. "The length of the shortest closed geodesic on a 2-dimensional sphere." Int Math Res Notes, Volume 2002, Issue 23, pp. 1211-1222. (journal link) add comment The question of the relation between the length of the shortest closed geodesic and the area of a surface is called systolic geometry. You can notably look at the work of Balacheff, up vote 2 down Bavard, Croke Gendulphe, Katz, Parlier, Saboureau. Thanks for the list – silktomath Jan 23 '13 at 4:43 add comment Not the answer you're looking for? Browse other questions tagged dg.differential-geometry mg.metric-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/119552/longest-simple-closed-geodesic","timestamp":"2014-04-16T19:38:12Z","content_type":null,"content_length":"60665","record_id":"<urn:uuid:8f0e3375-f373-4889-acc2-cb57c4f9e0cd>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
Elements of Algebra Elements of Algebra: Being an Abridgement of Day's Algebra Adapted to the Capacities of the Young, and the Method of Instruction, in Schools and Academies (Google eBook) Durrie & Peck , 1844 - 252 pages We haven't found any reviews in the usual places. Popular passages Divide the first term of the dividend by the first term of the divisor, and write the result as the first term of the quotient. Multiply the whole divisor by the first term of the quotient, and subtract the product from the dividend. It is evident that the terms of a proportion may undergo any change which will not destroy the equality of the ratios ; or which will leave the product of the means equal to the product of the When there is a series of quantities, such that the ratios of the first to the second, of the second to the third, of the third to the fourth, &c. After remarking that the mathematician positively knows that the sum of the three angles of a triangle is equal to two right angles... One quantity is said to be a multiple of another, when the former contains the latter a certain number of times without a remainder. There are four numbers in geometrical progression, the second of which is less than the fourth by 24 ; and the sum of the extremes is to the sum of the means, as 7 to 3. What are the numbers ? Ans. RULE. Multiply all the numerators together for a new numerator, and all the denominators for a new denominator: then reduce the new fraction to its lowest terms. MULTIPLYING BY A WHOLE NUMBER is TAKING THE MULTIPLICAND AS MANY TIMES, AS THERE ARE UNITS IN THE MULTIPLIER. II. Divide the greater number by the less and the preceding divisor by the last remainder till nothing remains. The last divisor is the... As the product of the divisor and quotient is equal to the dividend, the quotient may be found, by resolving the dividend into two such factors, that one of them shall be the divisor. The other will, of course, be the quotient. Suppose abd is to be divided by a. The factor a and bd will produce the dividend. The first of these, being a divisor, may be set aside. References from web pages Floyd Mathematics Textbooks: Indiana State University Library ELEMENTS OF ALGEBRA, BEING AN ABRIDGMENT OF DAY'S ALGEBRA, ADAPTED TO THE CAPACITIES OF THE YOUNG, AND THE METHOD OF INSTRUCTION IN SCHOOLS AND ACADEMIES. ... jaguar.indstate.edu/ about/ units/ rbsc/ floyd/ math.html Bibliographic information
{"url":"http://books.google.com/books?id=1goAAAAAYAAJ&source=gbs_book_other_versions_r&cad=5","timestamp":"2014-04-19T23:56:56Z","content_type":null,"content_length":"133804","record_id":"<urn:uuid:04570b27-83fb-42e7-91f2-4d0e78299617>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
Multifactor Screener: Uses of Screener Estimates in the Cancer Control Supplement Dietary intake estimates derived from the Multifactor Screener are rough estimates of usual intake of Pyramid servings of fruits and vegetables, percentage of energy from fat, and fiber. These estimates are not as accurate as those from more detailed methods (e.g., 24-hour recalls). However, validation research suggests that the estimates may be useful to characterize a population's median intakes, to discriminate among individuals or populations with regard to higher vs. lower intakes, to track dietary changes in individuals or populations over time, and to allow examination of interrelationships between diet and other variables. In addition, diet estimates from the Cancer Control Supplement (CCS) could be used as benchmark national data for smaller surveys, for example, in a particular state. Variance-Adjustment Factor What is the variance adjustment estimate and why do we need it? Data from the Multifactor Screener are individuals' reports about their intake and, like all self-reports, contain some error. The algorithms we use to estimate Pyramid servings of fruits and vegetables, percentage energy from fat, and grams of fiber calibrate the data to 24-hour recalls. The screener estimate of intake represents what we expect the person would have reported on his 24-hour recall, given what he reported on the individual items in the screener. As a result, the mean of the screener estimate of intake should equal the mean of the 24-hour recall estimate of intake in the population. (It would also equal the mean of true intake in the population if the 24-hour recalls were unbiased. However, there are many studies suggesting that recalls underestimate individuals' true intakes). When describing a population's distribution of dietary intakes, the parameters needed are an estimate of central tendency (i.e. mean or median) and an estimate of spread (variance). The variance of the screener, however, is expected to be smaller than the variance of true intake, since the screener prediction formula estimates the conditional expectation of true intake given the screener responses, and in general the variance of a conditional expectation of a variable X is smaller than the variance of X itself. As a result, the screener estimates of intake cannot be used to estimate quantiles (other than median) or prevalence estimates of true intake without an adjustment. Procedures have been developed to estimate the variance of true intake using data from 24-hour recalls, by taking into consideration within person variability^1, 2. We extended these procedures to allow estimation of the variance of true intake using data from the screener. The resulting variance adjustment factor adjusts the screener variance to approximate the variance of true intake in the population. How did we estimate the variance adjustment factors? We have estimated the adjustment factors in the various external validation datasets available to us. The results indicate that the adjustment factors differ by gender and dietary variable. Under the assumption that the variance adjustment factors appropriate to National Health Interview Study (NHIS) are similar to those in Observing Protein and Energy Nutrition Study (OPEN), the variance-adjusted screener estimate of intake should have variance closer to the estimated variance of true intake that would have been obtained from repeat 24-hour recalls. For Pyramid servings of fruits and vegetables, the variance adjustment factors in OPEN and Eating at America's Table Study (EATS) are quite similar, which gives us some indication that these factors might be relatively stable from population to population. Variance Adjustment Factors for the NHIS Multifactor Screener, from the OPEN Study β”‚ Nutrient β”‚ Gender β”‚ Variance Adjustment Factor β”‚ β”‚ Total Fruit & Vegetable Intake β”‚ Male β”‚ 1.3 β”‚ β”‚ (Pyramid Servings) β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ β”‚ Female β”‚ 1.1 β”‚ β”‚ Fruit & Vegetable Intake β”‚ Male β”‚ 1.3 β”‚ β”‚ (excluding fried potatoes) β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ (Pyramid Servings) β”‚ Female β”‚ 1.2 β”‚ β”‚ β”‚ Male β”‚ 1.5 β”‚ β”‚ Percentage Calories from Fat β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ β”‚ Female β”‚ 1.3 β”‚ β”‚ Fiber Intake β”‚ Male β”‚ 1.2 β”‚ β”‚ (grams) β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ β”‚ Female β”‚ 1.2 β”‚ How do you use the variance adjustment estimates? To estimate quantile values or prevalence estimates for an exposure, you should first adjust the screener so that it has approximately the same variance as true intake. Adjust the screener estimate of intake by: β€’ multiplying intake by an adjustment factor (an estimate of the ratio of the standard deviation of true intake to the standard deviation of screener intake); and β€’ adding a constant so that the overall mean is unchanged. The formula for the variance-adjusted screener is: variance-adjusted screener = (variance adjustment factor)*(unadjusted screener - mean[unadj scr.]) + mean[unadj scr.] This procedure is performed on the normally distributed version of the variable (i.e., Pyramid servings of fruits and vegetables is square-rooted, percentage energy from fat is untransformed, and fiber is cube rooted). For fruits and vegetables and fiber, the results can then be squared or cubed, respectively, to obtain estimates in the original units. The variance adjustment procedure is used to estimate prevalence of obtaining recommended intakes for the 2000 NHIS in: Thompson FE, Midthune D, Subar AF, McNeel T, Berrigan D, Kipnis V. Dietary intake estimates in the National Health Interview Survey, 2000: Methodology, results, and interpretation. J Am Dietet Assoc 2005;105:352-63. When do you use variance adjustment estimates? The appropriate use of the screener information depends on the analytical objective. Following is a characterization of suggested procedures for various analytical objectives. β”‚ Analytical Objective β”‚ Procedure β”‚ β”‚ Estimate mean or median intake in the population or within subpopulations. β”‚ Use the unadjusted screener estimate of intake. β”‚ β”‚ Estimate quantiles (other than median) of the distribution of intake in the population; estimate prevalence of β”‚ Use the variance-adjusted screener estimate. β”‚ β”‚ attaining certain levels of dietary intake. β”‚ β”‚ β”‚ Classify individuals into exposure categories (e.g., meeting recommended intake vs. not meeting recommended intake) β”‚ Use the variance-adjusted screener estimates to determine appropriate β”‚ β”‚ for later use in a regression model. β”‚ classification into categories. β”‚ β”‚ Use the screener estimate as a continuous covariate in a multivariate regression model. β”‚ Use the unadjusted screener estimate. β”‚ β”‚ Use the screener estimate as a response (dependent) variable. β”‚ Use the variance-adjusted screener estimate. β”‚ Attenuation of Regression Parameters Using Screener Estimates When the screener estimate of dietary intake is used as a continuous covariate in a multivariate regression, the estimated regression coefficient will typically be attenuated (biased toward zero) due to measurement error in the screener. The "attenuation factor"^3 can be estimated in a calibration study and used to deattenuate the estimated regression coefficient (by dividing the estimated regression coefficient by the attenuation factor). We estimated attenuation factors in the OPEN study (see below). If you use these factors to deattenuate estimated regression coefficients, note that the data come from a relatively small study that consists of a fairly homogeneous population (primarily white, well-educated individuals). Attenuation factors for screener-predicted intake: OPEN β”‚ β”‚ Square-Root Fruit & Veg β”‚ Square-Root Fruit & Veg β”‚ β”‚ Cube-Root Fiber β”‚ β”‚ Gender β”‚ (Pyramid Servings) β”‚ (excluding French Fries) β”‚ Percentage Energy From Fat β”‚ (grams) β”‚ β”‚ β”‚ β”‚ (Pyramid Servings) β”‚ β”‚ β”‚ β”‚ Men β”‚ 0.75 β”‚ 0.79 β”‚ 0.96 β”‚ 0.70 β”‚ β”‚ Women β”‚ 0.81 β”‚ 0.87 β”‚ 0.88 β”‚ 0.69 β”‚ If you categorize the screener values into quantiles and use the resulting categorical variable in a linear or logistic regression, the bias (due to misclassification) is more complicated because the categorization can lead to differential misclassification in the screener^4. Although methods may be available to correct for this^5, 6, it is not simple, nor are we comfortable suggesting how to do it at this time. Even though the estimated regression coefficients are biased (due to measurement error in the screener or misclassification in the categorized screener), tests of whether the regression coefficient is different from zero are still valid. For example, if one used the SUDAAN REGRESS procedure with fruit and vegetable intake (estimated by the screener) as a covariate in the model, one could use the Wald F statistic provided by SUDAAN to test whether the regression coefficient were statistically significantly different from zero. This assumes that there is only one covariate in the model measured with error; when there are multiple covariates measured with error, the Wald F test that a single regression coefficient is zero may not be valid, although the test that the regression coefficients for all covariates measured with error are zero is still valid. Last Modified: 11 Apr 2014
{"url":"http://www.appliedresearch.cancer.gov/nhis/multifactor/uses.html","timestamp":"2014-04-18T13:06:33Z","content_type":null,"content_length":"22412","record_id":"<urn:uuid:c5f29cce-3695-4a72-ab97-cf226711415d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Charged Black Holes: The Reissner-NordstrΓΆm Geometry Reissner-NordstrΓΆm geometry The Reissner-NordstrΓΆm geometry describes the geometry of empty space surrounding a charged black hole. If the charge of the black hole is less than its mass (measured in geometric units G = c = 1), then the geometry contains two horizons, an outer horizon and an inner horizon. Between the two horizons space is like a waterfall, falling faster than the speed of light, carrying everything with it. Upstream and downstream of the waterfall, space moves slower than the speed of light, and relative calm Fundamental charged particles like electrons and quarks are not black holes: their charge is much greater than their mass, and they do not contain horizons. If the geometry is continued all the way to the centre of the black hole, then there is a gravitationally repulsive, negative-mass singularity there. Uncharged persons who fall into the charged black hole are repelled by the singularity, and do not fall into it. The diagram at left is an embedding diagram of the Reissner-NordstrΓΆm geometry, a 2-dimensional representation of the 3-dimensional spatial geometry at an instant of Reissner-NordstrΓΆm time. Between the horizons, radial lines at fixed Reissner-NordstrΓΆm time are time-like rather than space-like, which is to say that they are possible wordlines of radially infalling (albeit not freely falling) observers. The animated dashes follow the positions of such infalling observers as a function of their own proper time.
{"url":"http://casa.colorado.edu/~ajsh/rn.html","timestamp":"2014-04-18T00:34:32Z","content_type":null,"content_length":"24812","record_id":"<urn:uuid:8b44b439-01cd-438a-9111-533151de87e3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
Injective dimension of $\mathcal{O}_X$-modules up vote 5 down vote favorite Let $(X, \mathcal{O}_X)$ be a regular noetherian scheme of finite Krull dimension (over a field $k$ if needed). Is it true that any $\mathcal{O}_X$-module (not necessarily quasi-coherent) has a finite resolution by injective $\mathcal{O}_X$-modules? This is suggested by the remark on page 136 in Hartshorne's "Residues and Duality" but I could not find a reference. Similarly, has any $\mathcal{O}_X$-module a finite resolution by flat $\mathcal{O}_X$-modules? ag.algebraic-geometry ac.commutative-algebra add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged ag.algebraic-geometry ac.commutative-algebra or ask your own question.
{"url":"http://mathoverflow.net/questions/112786/injective-dimension-of-mathcalo-x-modules?answertab=oldest","timestamp":"2014-04-19T02:08:04Z","content_type":null,"content_length":"45578","record_id":"<urn:uuid:9ca75232-4590-4a9b-be87-b09bed1cd48d>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00362-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistics 606 (Spring 2006) Instructor: David Pollard Email: david.pollard@yale.edu When: Tuesday, Friday 10:30--11:45 Where: 24 Hillhouse Avenue Office hours: To be arranged with each group Markov chains on general state spaces; diffusions; Markov random fields; Gibbs measures; percolation. After STAT 600. Intended audience Students who have already taken a measure theoretic probability course No single book. References to books and the literature will be given for each subtopic, as I become more organized. The class materials for an introductory stochastic processes course (Stat 251, Fall 2004) might be helpful. I will also prepare some handouts. Topics (tentative) 1. Markov chains on countable state spaces. The classical theory leading to existence of a stationary distribution and conditions for convergence to that distribution. 2. Markov chains on general state spaces. Exploration of how the theory for countable state spaces can be extended. Steven Orey, "Limit theorems for Markov chain transition probabilities". (My copy was published by Van Nostrand in 1971.) Very concise. Contains many of the main ideas, but without the recent refinements. Esa Nummelin, "General irreducible Markov chains and non-negative operators", Cambridge University Press 1984. (Paperback 2004.) Less concise than Orey. Describes the splitting technique for creating an artificial atom. S.P. Meyn and R.L. Tweedie, "Markov chains and stochastic stability", Springer 1993. Clear but it takes a lot of reading to reach the main ideas. Many examples. I started with this book then moved back to Nummelin then Orey. Persi Diaconis & David Freedman, Technical reports from http://www.stat.berkeley.edu/tech-reports/index.html 501. (December 1, 1997) On Markov Chains with Continuous State Space and 497. (November 24, 1997) On the Hit & Run Process 3. Markov random fields for finitely or countably many variables. A self-contained introduction to Gibbs measures. Ross Kindermann and J. Laurie Snell, "Markov Random Fields and Their Applications", available for free from http://www.ams.org/online_bks/conm1/ . Hans-Otto Georgii, "Gibbs Measures and Phase Trasitions", de Gruyter 1988. Tough reading. 4. Random trees. An introduction to some of the work by Aldous, Steele, and others. Russell Lyons and Yuval Peres, "Probability on Trees and Networks", available from http://mypage.iu.edu/~rdlyons/prbtree/prbtree.html. Interesting material on branching processes and random trees. J. Michael Steele and David Aldous, "The Objective Method: Probabilistic Combinatorial Optimization and Local Weak Convergence,", available from http://www-stat.wharton.upenn.edu/~steele/ 5. Percolation, if time permits. I hope that students will work in groups to flesh out arguments sketched in class. I will meet with each group regularly to help. I will explain in the first lecture how this method of learning can
{"url":"http://www.stat.yale.edu/~pollard/Courses/606.spring06/","timestamp":"2014-04-16T16:05:04Z","content_type":null,"content_length":"5493","record_id":"<urn:uuid:bb0fe4a3-3731-40bd-aedc-2d5957b60702>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
Help me understand this math problem!!! [Archive] - Straight Dope Message Board 06-16-2004, 01:32 PM How many different license plate members can be made using 2 letters followed by 4 digits if: letters and digits may be repeated Now I've worked the problem via an example in class and in my book and the answer I have gotten is 26^2 * 10^4. I am certain that 26^2 is correct for the letters part, but is the number part correct? If so, how come I use a base ten (I understand the 4 part) Thanks for your help
{"url":"http://boards.straightdope.com/sdmb/archive/index.php/t-261929.html","timestamp":"2014-04-19T00:02:54Z","content_type":null,"content_length":"4238","record_id":"<urn:uuid:be8a91af-0244-4840-9336-ffd749ea6ffa>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimizing Tactics for Use of the U.S. Antiviral Strategic National Stockpile for Pandemic Influenza β€’ We are sorry, but NCBI web applications do not support your browser and may not function properly. More information PLoS One. 2011; 6(1): e16094. Optimizing Tactics for Use of the U.S. Antiviral Strategic National Stockpile for Pandemic Influenza Benjamin J. Cowling, Editor^ This article has been cited by other articles in PMC. In 2009, public health agencies across the globe worked to mitigate the impact of the swine-origin influenza A (pH1N1) virus. These efforts included intensified surveillance, social distancing, hygiene measures, and the targeted use of antiviral medications to prevent infection (prophylaxis). In addition, aggressive antiviral treatment was recommended for certain patient subgroups to reduce the severity and duration of symptoms. To assist States and other localities meet these needs, the U.S. Government distributed a quarter of the antiviral medications in the Strategic National Stockpile within weeks of the pandemic's start. However, there are no quantitative models guiding the geo-temporal distribution of the remainder of the Stockpile in relation to pandemic spread or severity. We present a tactical optimization model for distributing this stockpile for treatment of infected cases during the early stages of a pandemic like 2009 pH1N1, prior to the wide availability of a strain-specific vaccine. Our optimization method efficiently searches large sets of intervention strategies applied to a stochastic network model of pandemic influenza transmission within and among U.S. cities. The resulting optimized strategies depend on the transmissability of the virus and postulated rates of antiviral uptake and wastage (through misallocation or loss). Our results suggest that an aggressive community-based antiviral treatment strategy involving early, widespread, pro-rata distribution of antivirals to States can contribute to slowing the transmission of mildly transmissible strains, like pH1N1. For more highly transmissible strains, outcomes of antiviral use are more heavily impacted by choice of distribution intervals, quantities per shipment, and timing of shipments in relation to pandemic spread. This study supports previous modeling results suggesting that appropriate antiviral treatment may be an effective mitigation strategy during the early stages of future influenza pandemics, increasing the need for systematic efforts to optimize distribution strategies and provide tactical guidance for public health policy-makers. In March/April 2009, a new swine-origin strain of influenza A/H1N1 virus (pH1N1) was detected in human populations in California and Mexico. The U.S. government declared a Public Health Emergency on April 26, 2009, followed on June 12 by a declaration of a global pandemic by the World Health Organization. By May 6, the U.S. Centers for Disease Control and Prevention (CDC) had distributed 11 million of the 50 million antiviral treatment courses held in the federal portion of the Strategic National Stockpile (SNS); since the recipients had local stockpiles as well, this allowed the CDC to exceed the pre-determined target of distribution of 31 million treatment courses of oseltamivir and zanamivir prior to the acceleration phase of the pandemic [1]. Accompanying the distribution was guidance recommending the use of antivirals primarily for treatment of suspected or confirmed cases of severe respiratory infection caused by this new strain [2]. Recent extrapolations from reported cases estimate that the pandemic caused over 50 million infections in the U.S. population; the majority of these have been asymptomatic or clinically mild, but pH1N1 nevertheless led to a substantial burden of hospitalization and death [3], [4]. In contrast to the clear guidance for public health leaders regarding the initial shipment of antivirals, the evidence base for determining the fate of the remainder of the stockpile is thin. Key policy statements have called for the use of mathematical models to support the development of an evidence-based policy for effectively deploying the remaining antiviral stockpile and other limited or costly measures to limit morbidity and mortality from pH1N1 [5], [6]. While mathematical modelers have taken great strides towards building predictive models of disease transmission dynamics within human populations, the computational complexity of these models often precludes systematic optimization of the demographic, spatial and temporal distribution of costly resources. Thus the typical approach has been to evaluate a relatively small set of candidate strategies [7]–[10]. Here, we use a new algorithm that efficiently searches large strategy spaces to analyze the optimal use of the U.S. antiviral stockpile against pandemic influenza prior to widespread and effective vaccination. Specifically, we seek to compute explicit release schedules for the SNS to minimize the cumulative infections in the first twelve months of an epidemic like that caused by pH1N1, with the objective of delaying disease transmission to allow for the development and deployment of a vaccine. We assume, in line with recent CDC guidance, that antivirals will be used exclusively for treatment of symptomatic individuals rather than wide-scale pre-exposure prophylaxis. We apply our algorithm to a U.S. national-scale network model of influenza transmission that is based on demographic and travel data from the U.S. Census Bureau and the Bureau of Transportation Statistics. We consider disease parameters estimated for the novel 2009 pH1N1 pandemic as well as more highly transmissible strains of pandemic influenza. We couple a fast, scalable, and adaptable optimization algorithm to a detailed simulation model of influenza transmission within and among the Optimization method A time-based intervention policy is a series of actions Fig. 1). Our objective is to rapidly search large sets of time-based intervention policies to find those that will be most effective at achieving a public health goal, such as limiting morbidity and mortality associated with influenza. Using a stochastic disease simulator, Simple Policy Tree. Suppose there are three possible actions and, in each time step, we can only choose one of them. To compute solutions to the above problem, we use trees to represent all possible policies (Fig. 1). The first (highest) level of a policy tree is a single node attached to several edges; each of those edges corresponds to one of the possible actions in the first time period and leads to a level-two node. Similarly each level-two node is attached to edges corresponding to all possible actions during the second time period, and so on. Each intervention policy corresponds to a unique path through the tree. The naive approach to finding the optimal path through the tree is to simulate multiple disease outbreaks for each intervention policy (path) and record the expected morbidity or mortality (or other public health outcome measure). However, such exhaustive searches are computationally intractable for large trees. We can more efficiently search for the optimal policy by prudently sampling paths from the tree. To strategically search the tree, we use an optimization algorithm called Upper Confidence Bounds Applied to Trees (UCT) [11], [12]. It selects paths from the tree using a multi-armed bandit algorithm inside of each tree node. The canonical application of a bandit algorithm is maximizing the total payoff from playing a set of slot machines for a fixed number of rounds, where the payoff distributions of the machines are unknown and, in each round, we may select only one machine. In this scenario, each edge emanating from the node corresponds to a slot machine that can be chosen by the node's bandit algorithm; for a policy tree, the edges correspond to possible policy actions. Before each policy simulation, bandit algorithms within the nodes select an edge to follow based on the results of prior trials. The combined choices of the bandit algorithms produce a path through the tree, corresponding to a sequence of public health actions, that is then passed into the simulation. The bandit algorithms determine which edge (action) to follow next by balancing two desirable characteristics: strong past performance and few prior trials. With this strategic path sampling, subtrees with good performance are explored more thoroughly than those with poor performance. Specifically, suppose we are descending through the tree and have arrived at node Initially, [1]. This gives initial estimates of [1] is well defined. At the end of each simulation run, if the simulation results in a reward of Pandemic influenza transmission model Our model includes the [13], [14]. We model movement among cities using both Census Bureau's County-To-County Worker Flow Files [15] and the Bureau of Transportation Statistics Origin and Destination Survey for all quarters of 2007, which contains a [16]. We assume that each exposed or asymptomatic infectious traveler has some chance of starting an sustained epidemic in the destination city, by initiating a chain of transmission events to susceptible individuals. We assume further that this happens with probability [17]. If there are Within each city, disease transmission is modeled using a compartmental model with five compartments: susceptible, exposed, asymptomatic infectious, symptomatic infectious, and recovered (Fig. 2b). Progression from one compartment to another is governed by published estimates for pandemic influenza transmission and disease progression rates, as given in Table 1. When infected individuals progress from asymptomatic to symptomatic they seek treatment at a rate [18]–[24]; and effectively treated cases immediately move to the recovered compartment. Untreated and ineffectively treated cases remain infectious until they recover naturally. Epidemics are initialized assuming that there are [25]) distributed stochastically, proportional to city sizes. Thus we are considering distribution policies that begin approximately one to two months following the initial emergence of the strain within the United States. Assuming that maximal flu vaccine coverage can be achieved within Text S1, Video S1). Influenza transmission and intervention parameters. Antiviral policy actions The model considers 11 possible antiviral stockpile actions every month over a twelve month period: distribution of [18]–[24]. If an infected individual resides in a jurisdiction with remaining distributed antivirals, then they receive appropriate treatment (i. e., access to medications within [26], [27]. Consistent with current CDC antiviral guidance, we did not model the use of antivirals for large scale prophylaxis of susceptible populations in the absence of infection. Computational requirements Each optimization is based on First, we consider SNS distribution schedules for the 2009 pH1N1 pandemic (Figure 3). We found that simple distribution schedules such as releasing an arbitrary fixed quantity each month from the federal stockpile to the states proportional to population size perform optimally, due to the mild nature of the disease. In fact, we find very little difference between two extreme scenarios: (a) an infinite supply of antivirals available at all times in all cities, and (b) no federal stockpile releases beyond the 31 million initially purchased by states (Figure 3a). At low uptake, the initial 31 million courses are sufficient to meet demand; at high uptake, the aggressive early treatment essentially stops the epidemic before exhausting supplies; and only at intermediate levels is the the demand sufficiently high and the epidemic sufficiently long-lived to exhaust the available supplies (through a combination of treatment and wastage). A simple SNS release schedule of one million courses per month proportional to population size (in addition to the initial 31 million courses) is sufficient to meet the ongoing demand, regardless of uptake, and thus performs well as an infinite supply (Figure 3a). Antiviral SNS Policy Performance for 2009 pH1N1. The rapid allocation of the first Federal stockpile allotment and the contributions of antivirals by the states to provide for the 31 million courses in the early stages of the epidemic are critical in these simulations. If we remove these courses and assume conservatively that the first Federal distributions take place approximately 3–4 months into the pandemic, we find that antiviral treatment only modestly slow transmission (Figure 3b). Simple release schedules are predicted to perform much more poorly without the initial distribution, with large early distributions outperforming regular small distributions. We assumed a reproduction number of [28]–[31]. In contrast, we obtained different results for more transmissible strains of pandemic influenza, with reproduction numbers [10], [32], [33]. Figures 4a–4c include the following performance curves: Antiviral SNS Policy Performance for Pandemic Influenza. 1. Several policies in which the stockpile is released monthly in fixed quantities proportional to population size, until the 12 month time horizon is reached or until the SNS is depleted. The releases range from 1 million courses for 12 months, to a single release of 50 million courses. 2. An idealized scenario with an infinite supply of antivirals available to each city throughout the epidemic. The outcome of this scenario indicates the maximal potential impact of antiviral use at any given utilization rate, free of any logistical constraints on supply. 3. Two optimized strategies resulting from our analysis. In one optimized strategy, we allowed releases to be either proportional to population size or proportional to influenza prevalence in the city. In the other optimized strategy, we allowed only releases proportional to population. Unlike the 2009 pH1N1 scenario, more contagious strains of pandemic influenza require greater care in selection of antiviral release strategy. For example, the simple policy of Figure 4). Optimized release policies (computed by UCT optimization) consistently perform almost as well as the infinite supply scenario. In all cases, except when the reproduction number is 2.4 and the uptake is 0.75, the optimality gap – difference in performance of computed policy and best idealized outcome divided by the best idealized outcome – is at most one tenth of one percent. For the single outlier scenario, the optimality gap is Under realistic assumptions about transmissability, we found that simple release schedules perform almost as well as the optimized policies. For example, at In all of the simulations, the proportion of infected individuals who seek timely treatment (what we refer to as uptake) has a dramatic impact on both policy optimality and outcomes. Figure 5 shows the optimized policies for pandemic flu with a reproduction number of Figure 5a) or only proportional to population size (Figure 5b). At an uptake of Figure 4b shows that these two optimized strategies as well as fixed releases of 5 million to 50 million per month proportional to population size perform optimally, on par with the infinite supply scenario. Figure 5c furthermore illustrates that, at uptake of Figure 5c) and most simple policies perform suboptimally (Figure 4b). Optimized Policies for Pandemic Influenza with a Reproduction Number of Even when prevalence-based releases are allowed, the optimal policies tend to be dominated by population-based releases (Figure 5a). This combined with the comparable performance of exclusively population-based policies across all scenarios suggests that prevalence-based distributions are probably unnecessary. Thus we focus on Figures 5b and 5c to gain quantitative insights into the relationship between uptake and best policy. At low levels of uptake (between Fig. 5c). At these levels of uptake, so few people are treated that the initial 31 M courses satisfies the demand. For uptakes between Fig. 5b). Finally, for the highest levels of uptake (greater than Figure 4b. Since avian influenza H5N1 became a potential public health threat in 2003, public health agencies around the globe have been planning for the next influenza pandemic. While the concerted response to pH1N1 reflects this careful preparation, several expected and unexpected events, including its apparent North American origin, the rapid overburdening of U.S. laboratory capacity, non-uniform testing and treatment policies among U.S. states, and delays in production of a viable vaccine, all reinforce the need for a dynamic and quantitative playbook for pandemic mitigation using pharmaceutical By adapting an established algorithm to optimize disease mitigation policies, this study provides an advance from the traditional candidate strategy approach to rapid and systematic analysis of numerous policy options. This is just one of many possible optimization methods suitable for this purpose [34]–[37]. Our choice of UCT was based on the insight that, with some careful modeling, disease intervention strategies can be nicely mapped onto policy trees and that this approach can be coupled to any stochastic epidemic model. This approach has performed successfully on large policy trees [38] and has favorable convergence properties [35]. In particular, unlike simulated annealing and genetic algorithms, it is guaranteed to eventually converge on the optimal policy. The UCT algorithm preferentially samples subtrees of the policy tree that have performed well in the past (see [35] for a mathematical discussion). The algorithm performs best when all of the policies within a single subtree of the policy tree perform similarly; it can then effectively determining the β€œgoodness” of any subtree by sampling it only a few times. To achieve algorithmic efficiency, one should therefore use expert knowledge and intuition to structure the policy tree in this way. If there is a single optimal solution in a subtree surrounded by many poorly performing solutions, then the UCT algorithm may require many simulations to find it (although it is guaranteed to eventually do so). Unbalanced policy trees, with one subtree much deeper than another, are natural topologies to produce such an unfavorable grouping of solutions. The single outlier in Figure 4c was likely caused by a combination of imbalance in the antiviral policy tree and the sheer volume of policy options at each time point. First, one subtree includes releasing the entire SNS in the first month with no actions following, while another involves waiting several months to release a small sequence of antivirals. Second, allowing both population-based and prevalence-based distributions increases the options available at each time point and reduces the depth to which policies can be optimized in a given amount of time. Although we know that the outlier is not the true optimal solution, we have have opted to present it in the graphs to highlight intuition on the algorithm's performance. For UCT, additional simulations are guaranteed to improve the optimality gap; in this case, they would have moved the optimized mixed policy to at least match the optimized population-based policy. We initially conducted this analysis during summer 2009, as the pH1N1 pandemic was unfolding, in response to questions posed to us by public health agencies regarding the effective use of antivirals prior to the availability of pH1N1 vaccines. Although the CDC has since issued antiviral guidelines and pH1N1 vaccines are now widely available, our analyses provide insight into the likely impacts of antivirals on pH1N1 transmission to date and effective strategies for antiviral-based mitigation of future flu pandemics. Our analysis suggests that while pH1N1 may have been slowed with targeted, aggressive, and clinically successful use of antivirals, the impact of such a policy would have been highly insensitive to the choice of Federal distribution schedule. The 31 million courses already available to states prior to the pandemic would have gone a long way towards meeting the early demand. However, for more contagious pandemic strains (with higher reproduction numbers), use of an optimized distribution schedule would be expected to significantly improve the intervention outcome. In some cases, simple strategies involving regular fixed releases perform as well as more complex optimized strategies. For example, for a pandemic strain with Our optimization allowed for the possibility of distributions proportional to prevalence, although such actions are not consistent with the current CDC policy and would likely be both politically and logistically difficult. Technically, implementing such a scheme would impose a major surveillance burden, as it would necessitate the estimation of prevalence rates throughout the nation based on noisy or delayed data. Notably, the results suggest that prevalence-based distributions are not expected to enhance the impact of antiviral treatments. The impact of antiviral treatment policies is naturally sensitive to the rate at which individuals who should receive these countermeasures actually do in clinical settings ([39]. In September 2009, the CDC issued antiviral guidelines the encouraged prioritization of high risk cases and discouraged antiviral treatment of typical cases. This suggests that throughout the summer and fall of 2009, we have likely been in the range where all strategies perform equally poorly and are predicted to minimally mitigate transmission. This is not to say that antivirals have had no impact on pH1N1 outcomes: to date, they have been used to significantly reduce morbidity and mortality associated with pH1N1 when used in potentially severe cases. Thus, for future pandemics, public health measures to increase the rates of antiviral usage beyond current levels may have the potential to slow transmission prior to the availability of vaccines. An increase in uptake rates may be practically limited by clinical symptoms of the disease in question, such as the presence of fever, which was one recommended criterion for prescribing antivirals. Our analysis shows, however, that the impact of antiviral control measures depends not only on the rates of uptake but also may critically depend on the Strategic National Stockpile distribution schedule used to sustain that uptake, particularly for highly contagious strains. We did not consider the development of antiviral resistance in this study. Currently circulating strains of seasonal influenza have acquired resistance to oseltamivir [40] and there is evidence that the pH1N1 virus is capable of experiencing genetic mutations that confer resistance to at least one neuraminidase drug; thankfully, to date there is little evidence of sustained transmission of such mutations. We also did not incorporate the use of antivirals for prophylaxis, the future availability of vaccines, simultaneous use of vaccines or NPI's like school or event closures, or the option of targeting the stockpile towards particular demographic groups, all of which are likely important and may influence the optimal policy. The effectiveness of any antiviral policy will depend critically on the extent to which antivirals reduce the severity and transmission of flu. Our assumptions regarding antiviral efficacy are in agreement with the literature [18]–[24]; most of these studies assume maximum likelihood-based estimates of antiviral efficacies calculated by Longini et. al [32] using data from a clinical study by Welliver et. al [41]. More recent clinical trials indicate that the odds of a secondary infection in individual contacts decreases by approximately [42], [43]. While the antiviral efficacy we assumed here lies well within the confidence intervals estimated in these papers, better estimates of these and other parameter values will certainly improve the future optimization studies. In this study, we assume that all distributed antiviral courses undergo wastage. There are multiple potential causes of wastage, including courses that are prescribed to patients who never use them, use them to treat diseases other than flu, or use them too late in their flu infection to significantly impact transmission. Since there is very little information on the rates at which such loss or misuse occurs, let alone how these rates change over the course of a pandemic, we have modeled wastage using a generic decay function. Comparisons between the optimized policies (assuming wastage) and an infinite supply scenario (with no wastage) suggest that there exist distributions schedules that effectively avert potential public health costs associated with wastage. Although better estimates of the magnitude and dynamics of wastage would improve the accuracy of the model and may suggest slightly different optimal strategies, we expect that those strategies will still overcome the potential detrimental effects of wastage. Our work complements a growing body of modeling studies on the distribution and timing of antiviral and vaccination policies. Bajardi et al. recently developed a similar large-scale geographic disease spread model, with which they showed that a vaccination campaign following the initial outbreak may require additional mitigation strategies to delay the epidemic [44]. Danon et al. showed, however, that such models can be sensitive to the addition of movement patterns not captured in census data; specifically, the addition of random movement can hasten an epidemic [45]. A modeling study by Handel et al. suggests that when antivirals are the only mode of control, using antivirals towards the end of the epidemic to minimize overshoot is a good control policy [46]. An intuitive mathematical model developed by Lipstich et al. shows that while antiviral use likely promotes the rise of antiviral resistant strains, they nonetheless can significantly delay the epidemic [47]. Studies by NuΓ±o et al. and Wu et al. also suggest that antivirals used for treatment can slow the spread of the epidemic [48], [49]. Vaccination studies may provide some insight into the potential impacts of large scale antiviral prophylaxis, which we have not considered in our analysis. For example, using a deterministic meta-population model, Wu et al. showed that it may be preferable to allocate large quantities of vaccines to particular geographic areas in order to achieve local herd immunity as opposed to distributing vaccines proportional to population [50]. Ball et al. have studied a related vaccine distribution problem on a graph-based model of disease spread, and also show that targeting local groups performs well if the entire sub-population can be effectively protected [51]. Finally, Bootsma and Ferguson studied the 1918 influenza pandemic, and found that the timing of interventions can be critical, with delays in implementation and premature lifting of interventions reducing the impact of control measures [52]. From rapid genetic sequence analysis to automated syndromic surveillance systems, public health emergency response is rapidly improving in technical capabilities both in the U.S. and worldwide; the rapid response to and characterization of the novel pandemic influenza A (pH1N1) virus is a testament to this. However, planning the policies of public health response to such identified and emergent threats remains a highly non-quantitative endeavor. We present here a policy optimization approach that is highly modular and can be easily adapted to address multiple additional issues. Our hope is that these quantitative methods will assist clinical experts in developing effective policies to mitigate influenza pan- and epidemics using a combined arsenal of vaccines, antivirals and non-pharmaceutical interventions. Specifically, a very similar analysis can be used at the international level to optimize global allocation of the WHO's limited antiviral stockpile to resource-poor countries. One can substitute any stochastic model of disease transmission, at any scale, for our national-scale, U.S. influenza model. In addition, while the optimization algorithm is particularly well suited for time-based interventions, any well-behaved policy space can be used [35]. The approach should thereby facilitate a more comprehensive consideration of pandemic policy options, and will perhaps confirm the efficacy of the current policy or suggest more promising strategic options for the future. Supporting Information Text S1 Supplemental information, including detailed model description, model validation runs, and additional optimization scenarios. Video S1 Supplemental visualizations for scenarios described in Text S1. The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention. The authors thank John Tegeris at BARDA for providing up-to-date information about the U.S. Federal and state Strategic National Stockpiles of antivirals, the BARDA influenza vaccine group for providing up-to-date estimates for the availability of pH1N1 vaccines, the City of Milwaukee Health Department and the Harvard Center for Communicable Disease Dynamics for sharing unpublished data on the receipt of oseltamivir in Milwaukee. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin (http://www.tacc.utexas.edu) for providing HPC resources that have contributed to the research results reported within this paper. Competing Interests: The authors have declared that no competing interests exist. Funding: The authors have no support or funding to report. US Centers for Disease Control. CDC health update: Swine influenza A (H1N1) update: New interim recommendations and guidance for health directors about strategic national stockpile material. 2009. http://www.cdc.gov/h1n1u/HAN/042609.htm. CDC website, Accessed on 2010-12-14. US Centers for Disease Control. CDC estimates of 2009 H1N1 influenza cases, hospitalizations and deaths in the United States, April 2009-January 16, 2010. 2009. http://cdc.gov/h1n1flu/estimates/April January 16.htm. CDC website, Accessed on 2010-12-14. Reed C, Angulo FJ, Swerdlow DL, Lipsitch M, Meltzer MI, et al. Estimates of the prevalence of pandemic (H1N1) 2009, united states, April–July 2009. Emerging Infectious Diseases. 2009;15:2004–2007. [ PMC free article] [PubMed] 5. White House National Security Staff. National framework for 2009-H1N1 influenza preparedness and response. 2009. Department of Homeland Security. Homeland security presidential directive 21. 2007. http://www.dhs.gov/xabout/laws/gc 1219263961449.shtm. DHS website, Accessed on 2010-12-14. Longini I, Halloran M, Nizam A, Yang Y. Containing pandemic influenza with antiviral agents. American Journal of Epidemiology. 2004;159:623–633. [PubMed] Ferguson N, Cummings D, Cauchemez S, Fraser C, Riley S, et al. Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature. 2005;437:204–214. [PubMed] Bansal S, Pourbohloul B, Meyers L. A comparative analysis of influenza vaccination programs. PLoS Medicine. 2006;3:e387. [PMC free article] [PubMed] Germann TC, Kadau K, Longini IM, Macken CA. Proceedings of the National Academy of Sciences; 2006. Mitigation strategies for pandemic influenza in the United States. pp. 5935–5940. [PMC free article] 11. Kocsis L, SzepesvΓ‘ri C. Bandit based monte-carlo planning. Machine Learning: ECML. 2006;2006:282–293. 12. Auer P, Cesa-Bianchi N, Fischer P. Finite-time analysis of the multiarmed bandit problem. Machine Learning. 2002;47:235–256. 17. Keeling MJ, Rohani P. Princeton University Press; 2008. Modeling Infectious Diseases. Lee V, Chen M. Effectiveness of neuraminidase inhibitors for preventing staff absenteeism during pandemic influenza. Emerging Infectious Diseases. 2007;13:449–457. [PMC free article] [PubMed] McCaw J, McVernon J. Prophylaxis or treatment? Optimal use of an antiviral stockpile during an influenza pandemic. Mathematical Biosciences. 2007;209:336–360. [PubMed] Lee V, Phua K, Chen M, Chow A, Stefan M, et al. Economics of neuraminidase inhibitor stockpiling for pandemic influenza, Singapore. Emerging Infectious Diseases. 2006;12:95–102. [PMC free article] [ Doyle A, Bonmarin I, Levy-Bruhl D, Strat Y, Desenclos JC. Influenza pandemic preparedness in France: modelling the impact of interventions. Journal of Epidemiology and Community Health. 2006;60 :399–404. [PMC free article] [PubMed] Barnes B, Glass K, Becker N. The role of health care workers and antiviral drugs in the control of pandemic influenza. Mathematical Biosciences. 2007;209:403–416. [PubMed] Hota S, McGeer A. Antivirals and the control of influenza outbreaks. Clinical Infectious Diseases. 2007;45:1362–1368. [PubMed] Lipsitch M, Cohen T, Murray M, Levin B. Antiviral resistance and the control of pandemic influenza. PLoS Medicine. 2007;4:e15. [PMC free article] [PubMed] Ling LM, Chow AL, Lye DC, Tan AS, Krishnan P, et al. Effects of Early Oseltamivir Therapy on Viral Shedding in 2009 Pandemic Influenza A (H1N1) Virus Infection. Clinical Infectious Diseases. 2010;50 :963–969. [PubMed] Yu H, Liao Q, Yuan Y, Zhou L, Xiang N, et al. Effectiveness of oseltamivir on disease progression and viral RNA shedding in patients with mild pandemic 2009 influenza A H1N1: opportunistic retrospective study of medical charts in China. British Medical Journal. 2010;341:c4779. [PMC free article] [PubMed] Fraser C, Donnelly CA, Cauchemez S, Hanage WP, Van Kerkhove MD, et al. Pandemic potential of a strain of influenza A (H1N1): Early findings. Science. 2009;324:1557–1561. [PMC free article] [PubMed] Ghani AC, Baguelin M, Griffin J, Flasche S, Pebody R, van Hoek AJ, et al. The early transmission dynamics of H1N1pdm influenza in the United Kingdom. 2009. PLoS Currents: Influenza: RRN1130. [PMC free article] [PubMed] Pourbohloul B, Ahued A, Davoudi B, Meza R, Meyers L, et al. Initial human transmission dynamics of a novel swine-origin influenza A (H1N1) virus (S-OIV) in North America. Influenza and Other Respiratory Viruses. 2009;3:215–222. [PMC free article] [PubMed] Yang Y, Sugimoto JD, Halloran ME, Basta NE, Chao DL, et al. The transmissibility and control of pandemic influenza A (H1N1) virus. Science. 2009;326:729–733. [PMC free article] [PubMed] Longini IM, Halloran ME, Nizam A, Yang Y. Containing pandemic influenza with antiviral agents. American Journal of Epidemiology. 2004;159:623–633. [PubMed] Mills CE, Robins JM, Lipsitch M. Transmissibility of 1918 pandemic influenza. Nature. 2004;432:904–906. [PubMed] 34. Azadivar F. Proceedings of the 31st conference on Winter simulation; 1999. Simulation optimization methodologies. pp. 93–100. 35. Coquelin P, Munos R. Bandit algorithms for tree search. Computing Research Repository:abs/cs/0703062. 2007 36. Mannor S, Tsitsiklis J. The sample complexity of exploration in the multi-armed bandit problem. Journal of Machine Learning Research. 2004;5:623–648. 37. Dar E, Mannor S, Mansour Y. Proceedings of the 15th Annual Conference on Computational Learning Theory; 2002. PAC bounds for multi-armed bandit and Markov decision processes. pp. 255–270. 38. Gelly S, Wang Y.Exploration exploitation in Go: UCT for Monte-Carlo Go. 20th annual Conference on Neural Information Processing Systems. 2006. Goldstein E, Cowling B, O'Hagan J, Danon L, Fang V, et al. Oseltamivir for treatment and prevention of pandemic influenza A/H1N1 virus infection in households, Milwaukee, 2009. BMC Infectious Diseases. 2010;10:211. [PMC free article] [PubMed] Hurt A, Ernest J, Deng Y, Iannello P, Besselaar T, et al. Emergence and spread of oseltamivir-resistant A(H1N1) influenza viruses in oceania, south east asia and south africa. Antiviral Research. 2009;83:90–93. [PubMed] Welliver R, Monto AS, Carewicz O, Schatteman E, Hassman M, et al. Effectiveness of Oseltamivir in Preventing Influenza in Household Contacts: A Randomized Controlled Trial. The Journal of the American Medical Association. 2001;285:748–754. [PubMed] Goldstein E, Cowling B, O'Hagan J, Danon L, Fang V, et al. Oseltamivir for treatment and prevention of pandemic influenza A/H1N1 virus infection in households, Milwaukee, 2009. BMC Infectious Diseases. 2010;10:211. [PMC free article] [PubMed] Ng S, Cowling B, Fang V, Chan K, Ip D, et al. Effects of oseltamivir treatment on duration of clinical illness and viral shedding and household transmission of influenza virus. Clinical Infectious Diseases. 2010;50:707–714. [PMC free article] [PubMed] Bajardi P, Poletto C, Balcan D, Hu H, Goncalves B, et al. Modeling vaccination campaigns and the fall/winter 2009 activity of the new A(H1N1) influenza in the Northern hemisphere. Emerging Health Threats Journal. 2009;2:e11. [PMC free article] [PubMed] Danon L, House T, Keeling MJ. The role of routine versus random movements on the spread of disease in Great Britain. Epidemics. 2009;1:250–258. [PubMed] Handel A, Longini IM, Antia R. What is the best control strategy for multiple infectious disease outbreaks? Proceedings of the Royal Society B: Biological Sciences. 2007;274:833–837. [PMC free article] [PubMed] Lipsitch M, Cohen T, Murray M, Levin BR. Antiviral resistance and the control of pandemic influenza. PLoS Medicine. 2007;4:e15. [PMC free article] [PubMed] NuΓ±o M, Chowell G, Gumel AB. Assessing the role of basic control measures, antivirals and vaccine in curtailing pandemic influenza: scenarios for the US, UK and the Netherlands. Journal of The Royal Society Interface. 2007;4:505–521. [PMC free article] [PubMed] Wu JT, Leung GM, Lipsitch M, Cooper BS, Riley S. Hedging against antiviral resistance during the next influenza pandemic using small stockpiles of an alternative chemotherapy. PLoS Medicine. 2009;6 :e1000085. [PMC free article] [PubMed] Wu JT, Riley S, Leung GM. Spatial considerations for the allocation of pre-pandemic influenza vaccination in the United States. Proceedings of the Royal Society B: Biological Sciences. 2007;274 :2811–2817. [PMC free article] [PubMed] 51. Ball F, Mollison D, Scalia-Tomba G. Epidemics with two levels of mixing. The Annals of Applied Probability. 1997;7:46{89. Bootsma MCJ, Ferguson NM. The effect of public health measures on the 1918 influenza pandemic in U.S. cities. Proceedings of the National Academy of Sciences. 2007;104:7588–7593. [PMC free article] [ Articles from PLoS ONE are provided here courtesy of Public Library of Science Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3023704/?tool=pubmed","timestamp":"2014-04-18T05:43:50Z","content_type":null,"content_length":"151161","record_id":"<urn:uuid:9ea33631-0758-415b-82df-b5d3a17b22a2>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
Standard error of measurement Β© Paul Cooijmans Standard error of measurement is the standard deviation of the expected error; that is, the standard deviation of an individual's scores if it were possible to take the test repeatedly without a learning effect between the test administrations. A rule of thumb for interpreting standard error: One's true score on the test in question lies with 95 % probability between plus and minus two standard errors from one's actual score. In interpreting standard error one may also consider that its value really only applies to the middle part of the test's score range, and loses meaning at the edges. In general, standard error is only meaningful where the scale on which it is expressed is linear, which is not everywhere and always the case. Standard error (Οƒ[error]) is computed by combining a test's reliability (r[xx]) with its raw score standard deviation (Οƒ): Οƒ[error] = Οƒ Γ— √(1 - r[xx])
{"url":"http://www.iq-tests-for-the-high-range.com/statistics/explained/standard_error_of_measurement.html","timestamp":"2014-04-16T10:15:58Z","content_type":null,"content_length":"2462","record_id":"<urn:uuid:c6f41ec3-efd1-4f33-ae42-9ddbec36abd9>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
Chain Rule problem September 13th 2012, 06:31 PM #1 Jun 2012 Chain Rule problem I'm supposed to apply the chain rule with the following question, and as far as I know, the chain rule is f '[g(x)] * g '(x). According to the question, I need to find f[g(x)] and g[f(x)]. $f(x) = (x/8) + 7$ and $g(x) = 6x -1$ I thought I should simply just go as follows... $(1/8)(6x -1) * 6$ = $(3/4)(6x - 1)$ $= (9x/2) - (3/4)$ This isn't the right answer, but I don't know what I'm doing wrong.. Re: Chain Rule problem I'm confused as to what you are trying to do. Are you trying to simply find the composition or the derivative of the composition? Your post seems inconsistent on this.. can you state the whole Perhaps the problem wants you to find f(g(x)) first, then the derivative of this composition, and see that the answer is the same if you use the chain rule? Last edited by SworD; September 13th 2012 at 06:41 PM. Re: Chain Rule problem I'm confused as to what you are trying to do. Are you trying to simply find the composition or the derivative of the composition? Your post seems inconsistent on this.. can you state the whole Perhaps the problem wants you to find f(g(x)) first, then the derivative of this composition, and see that the answer is the same if you use the chain rule? I think you're on the right track.. The question from the text-book is as follows. "Find f[g(x)] and g[f(x)]. f(x) = (x/8) + 7. g(x) = 6x - 1." I really don't know what I should be doing with the question. The chapter it's from is on the chain rule, so I assumed I needed to use the chain rule to find the derivative of f[g(x)], but that doesn't seem to be the case. If it helps at all, the answer I'm SUPPOSED to get is this... (6x + 55)/8 Re: Chain Rule problem Now that I've looked at the question properly, it doesn't seem to be asking me to find any derivatives. Just f[(g(x)]. But I still don't know how I should be getting to the answer above. I would have thought that f[g(x)] would equal... [(6x - 1) / 8] + 7 Re: Chain Rule problem It doesn't ask for derivatives. You just plug in g(x) into f(x) as though it were any number. $f(g(x)) = \frac{(6x-1)}{8} + 7 = \frac{6x}{8} - \frac{1}{8} + \frac{56}{8} = \frac{6x + 55}{8}$ $g(f(x)) = 6\left (\frac{x}{8} + 7 \right) - 1 = \frac{6}{8}x + 42 - 1 = \frac{6}{8}x + 41$ Also, notice that if you take the derivative of either of these, you get $\frac{6}{8}$, consistent with the chain rule: $f'(g(x) \cdot g'(x) = \frac{1}{8} \cdot 6 = \frac{6}{8}$ $g'(f(x) \cdot g'(x) = 6 \cdot \frac{1}{8} = \frac{6}{8}$ Last edited by SworD; September 13th 2012 at 07:08 PM. Re: Chain Rule problem Thanks for that, I was just overthinking it all. Pretty self-explanatory in the end. I just forgot to look at 7 as a fraction instead of an integer... September 13th 2012, 06:35 PM #2 Sep 2012 Planet Earth September 13th 2012, 06:45 PM #3 Jun 2012 September 13th 2012, 06:57 PM #4 Jun 2012 September 13th 2012, 07:02 PM #5 Sep 2012 Planet Earth September 13th 2012, 07:29 PM #6 Jun 2012
{"url":"http://mathhelpforum.com/calculus/203419-chain-rule-problem.html","timestamp":"2014-04-18T16:41:12Z","content_type":null,"content_length":"46080","record_id":"<urn:uuid:5f55d688-f4cc-4db1-a9b7-cadd1f3056ee>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Frequency shift to alleviate acoustic feedback ("Richard F. Lyon" ) Subject: Re: Frequency shift to alleviate acoustic feedback From: "Richard F. Lyon" <dicklyon@xxxxxxxx> Date: Thu, 24 Jan 2013 22:41:05 -0800 Content-Type: text/plain; charset=ISO-8859-1 To put it more simply, the original assumption that frequency shifting would be "the simplest method" was unfounded. Frequency shifting is actually quite complicated, subtle, error prone, and not so well defined. On Thu, Jan 24, 2013 at 9:04 AM, Steve Beet <steve.beet@xxxxxxxx> wrote: > Dear Siping, > I'm afraid I don't know of any references on this subject, but I'll try to > explain a bit more clearly - and without making too many mistakes this time. > As I understand it, what you are doing is roughly as follows: > You take the first 160 samples, apply a Hanning window, then zero-pad to > 512 samples so you can take the DFT. At this stage you should really set > the zero-frequency and Nyquist-frequency FFT outputs to zero (the phase is > ambiguous at these two points, and it's less likely that you will introduce > any audible artefacts if you set the amplitudes to zero, rather than using > non-zero but incorrect values). > You then shift the FFT outputs (real and imaginary) up by one bin and > perform an IFFT to get an approximation to a Hanning-windowed version of > the frequency-shifted signal you require. Just to clarify - the IFFT > effectively sums a set of sinewaves with initial phase and amplitude > determined by the real and imaginary parts of the FFT bin value. You should > (of course) also make sure that the new zero and Nyquist frequency bins are > still zeroed at this point. > Finally, you take the first 160 outputs from the IFFT, and add them, > starting at the 81st sample, to the next set of outputs, formed by stepping > along the input by 80 samples and repeating the above procedure. > It is worth noting that the result of the IFFT operation is only > *approximately* Hanning-windowed. Indeed, it was derived from the original > Hanning-windowed signal, but because of the effects of frequency-shifting > the different frequency components will have increasingly different phase > relationships as you progress along the array, and when you get beyond the > point where the original signal had ended and the zero-padding had begun > (160 samples), there will still be finite (i.e. non-zero) signal energy. > If it were not for the frequency shifting, at every point which was > originally zero-padded, the different sinewaves would all cancel out > exactly and the output signal would be an exact copy of the input, > including the effects of the window. But frequency is just the derivative > of the phase of each sinewave, so if you change the frequency of a > sinewave, the respective phase will progress at a different rate, and after > a while two sinewaves which may originally have reinforced each other (i.e. > been "in phase") may cancel each other out (i.e. be "out-of-phase"). > So the problem with this overlapped-block approach is that the successive > IFFT outputs are not accurately limited to the 160 sample length that your > inputs were, and while the total amplitude of 50% overlapped Hanning > windows would be constant, when you overlap and add your IFFT outputs, the > effective amplitude of the "window" will have been distorted and will not > be constant any more. > Further, during the transition between the peak of one window and the > next, there will also be periods when the shifted frequency sinewaves from > one window, reinforce or cancel those from the other, producing a kind of > "beating" effect. > I'm afraid this is getting rather too long and detailed for the general > auditory list though, so if you want to discuss it further, I'd suggest we > continue off-list. > Best, > Steve Beet > On Thu, 24 Jan 2013 10:05:14 +0800 > Siping Tao <siping.tao@xxxxxxxx> wrote: > > Dear Steve, > > > > Your explaination is reasonable, I do just copy the real and imaginary > > parts without the consideration of phase. > > > > I did not fully understand the method you recommended due to my limited > > knowledge. > > > > 1. "*by the end of each block, the new signal will have been through a > > different number of cycles of each sinusoidal component*" > > what's the meaning of cycles? why overlap processing induces this > > problem? > > 2. "*by converting the phase of each FFT bin to a time delay (rather > than a > > phase angle)*" > > how can I get the phase or set the phase for the delayed bin? In > > overlap case, how to promise each bin has only one phase > > rather than the average of several different phases? Any example code > > or papers, whatever? > > > > Thanks for your help! > > Siping > > > > On Wed, Jan 23, 2013 at 6:48 PM, Steve Beet <steve.beet@xxxxxxxx> wrote: > > > > > Dear Siping, > > > > > > The most likely explanation is that when you shift the FFT by one bin, > you > > > (presumably) just copy the real and imaginary parts, so the initial > phase > > > of the respective components stays the same at the new frequency. > However, > > > by the end of each block, the new signal will have been through a > different > > > number of cycles of each sinusoidal component, so the phase at the end > of > > > the frequency-shifted block will not match up with the phase at the > start > > > of the next block. > > > > > > Manipulating the phases so that the different components maintain the > > > "correct" phase relationships with the signal components in subsequent > > > blocks could be done by converting the phase of each FFT bin to a time > > > delay (rather than a phase angle), and calculating the phase for the > > > shifted signal which would give the same time delay. > > > > > > There may be other issues too (e.g. the handling of the first and last > DFT > > > bins, where there's no phase information), but I would try setting the > > > phase such that the signal at the centre of each overlapped window is > > > "correct", and hope that the taper of the Hanning window will minimise > the > > > effect of any discontinuities near the ends of each block. > > > > > > Let us know how you get on! > > > > > > Steve Beet > > > > > > > > > On Wed, 23 Jan 2013 11:05:08 +0100 > > > Zlatan Ribic <zlatan@xxxxxxxx> wrote: > > > > > > > M. Hartley Jones: "Frequency Shifter for "Howl" Suppression", > Wireless > > > World, July 1973. 317-322 > > > > > > > > ----- Original Message ----- > > > > From: Siping Tao > > > > To: AUDITORY@xxxxxxxx > > > > Sent: Wednesday, January 23, 2013 9:56 AM > > > > Subject: Frequency shift to alleviate acoustic feedback > > > > > > > > > > > > Dear experts, > > > > > > > > Acoustic feedback can be removed by several methods: frequency > shift, > > > phase shift, notch filter, adaptive cancellation. I tried the simplest > > > method I thought, frqeuency shift. However, it's not easy as I > thought. In > > > realtime processing scenario, I need to process every 10ms audio sample > > > without significant delay, so I do the following implementation: > > > > > > > > 1. sampling rate is 16K, so I have 160 samples every 10ms. > > > > 2. do DFT for these 160 samples, the DFT length is 512, pending > zeros > > > since I only have 160 samples > > > > 3. shift the frequency by one fft coefficient, that is, shift > > > 16000/512=31.25Hz (DC is not shifted) > > > > 4. do IDFT > > > > > > > > After doing that, I can notice the spectrum is shifted in > cool-edit, > > > but with some processing noise (not the artifacts due to frequency > shift). > > > I guess this noise is caused by different processing for successive10ms > > > data, I am not sure here. However, I try to use overlap processing in > my > > > code, hanning window, 50% overlap, then the processing noise is reduced > > > much. Unfortunately, I found that overlap processing sometimes make the > > > frequency shift useless (e.g. 75% overlap by blackman window), what I > mean > > > useless is I cannot notice spectrum shift in cool-edit. > > > > > > > > Can anybody help me to understand why overlap processing hurts > > > frequency shift? Or point out the incorrect parts of my implementation. > > > > > > > > Thanks, > > > > Siping > > > Content-Type: text/html; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable To put it more simply, the original assumption that frequency shifting woul= d be &quot;the simplest method&quot; was unfounded.<br>Frequency shifting i= s actually quite complicated, subtle, error prone, and not so well defined.= <br>Dick<br><br><div class=3D"gmail_quote">On Thu, Jan 24, 2013 at 9:04 AM,= Steve Beet <span dir=3D"ltr">&lt;<a href=3D"mailto:steve.beet@xxxxxxxx" ta= rget=3D"_blank">steve.beet@xxxxxxxx</a>&gt;</span> wrote:<br><blockquote cl= ass=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;p= Dear Siping,<br> I&#39;m afraid I don&#39;t know of any references on this subject, but I&#3= 9;ll try to explain a bit more clearly - and without making too many mistak= es this time.<br> As I understand it, what you are doing is roughly as follows:<br> You take the first 160 samples, apply a Hanning window, then zero-pad to 51= 2 samples so you can take the DFT. At this stage you should really set the = zero-frequency and Nyquist-frequency FFT outputs to zero (the phase is ambi= guous at these two points, and it&#39;s less likely that you will introduce= any audible artefacts if you set the amplitudes to zero, rather than using= non-zero but incorrect values).<br> You then shift the FFT outputs (real and imaginary) up by one bin and perfo= rm an IFFT to get an approximation to a Hanning-windowed version of the fre= quency-shifted signal you require. Just to clarify - the IFFT effectively s= ums a set of sinewaves with initial phase and amplitude determined by the r= eal and imaginary parts of the FFT bin value. You should (of course) also m= ake sure that the new zero and Nyquist frequency bins are still zeroed at t= his point.<br> Finally, you take the first 160 outputs from the IFFT, and add them, starti= ng at the 81st sample, to the next set of outputs, formed by stepping along= the input by 80 samples and repeating the above procedure.<br> It is worth noting that the result of the IFFT operation is only *approxima= tely* Hanning-windowed. Indeed, it was derived from the original Hanning-wi= ndowed signal, but because of the effects of frequency-shifting the differe= nt frequency components will have increasingly different phase relationship= s as you progress along the array, and when you get beyond the point where = the original signal had ended and the zero-padding had begun (160 samples),= there will still be finite (i.e. non-zero) signal energy.<br> If it were not for the frequency shifting, at every point which was origina= lly zero-padded, the different sinewaves would all cancel out exactly and t= he output signal would be an exact copy of the input, including the effects= of the window. But frequency is just the derivative of the phase of each s= inewave, so if you change the frequency of a sinewave, the respective phase= will progress at a different rate, and after a while two sinewaves which m= ay originally have reinforced each other (i.e. been &quot;in phase&quot;) m= ay cancel each other out (i.e. be &quot;out-of-phase&quot;).<br> So the problem with this overlapped-block approach is that the successive I= FFT outputs are not accurately limited to the 160 sample length that your i= nputs were, and while the total amplitude of 50% overlapped Hanning windows= would be constant, when you overlap and add your IFFT outputs, the effecti= ve amplitude of the &quot;window&quot; will have been distorted and will no= t be constant any more.<br> Further, during the transition between the peak of one window and the next,= there will also be periods when the shifted frequency sinewaves from one w= indow, reinforce or cancel those from the other, producing a kind of &quot;= beating&quot; effect.<br> I&#39;m afraid this is getting rather too long and detailed for the general= auditory list though, so if you want to discuss it further, I&#39;d sugges= t we continue off-list.<br> Steve Beet<br> <div class=3D"im"><br> On Thu, 24 Jan 2013 10:05:14 +0800<br> Siping Tao &lt;<a href=3D"mailto:siping.tao@xxxxxxxx">siping.tao@xxxxxxxx</a>= &gt; wrote:<br> &gt; Dear Steve,<br> </div>&gt; Your explaination is reasonable, I do just copy the real and ima= <div class=3D"im">&gt; parts without the consideration of phase.<br> &gt; I did not fully understand the method you recommended due to my limite= &gt; knowledge.<br> </div>&gt; 1. &quot;*by the end of each block, the new signal will have bee= n through a<br> &gt; different number of cycles of each sinusoidal component*&quot;<br> <div class=3D"im">&gt; =A0 =A0 what&#39;s the meaning of cycles? why overla= p processing induces this<br> &gt; problem?<br> </div>&gt; 2. &quot;*by converting the phase of each FFT bin to a time dela= y (rather than a<br> &gt; phase angle)*&quot;<br> <div class=3D"im">&gt; =A0 =A0 how can I get the phase or set the phase for= the delayed bin? In<br> &gt; overlap case, how to promise each bin has only one phase<br> &gt; =A0 =A0 rather than the average of several different phases? Any examp= le code<br> &gt; or papers, whatever?<br> &gt; Thanks for your help!<br> </div>&gt; Siping<br> <div class=3D"im HOEnZb">&gt;<br> &gt; On Wed, Jan 23, 2013 at 6:48 PM, Steve Beet &lt;<a href=3D"mailto:stev= e.beet@xxxxxxxx">steve.beet@xxxxxxxx</a>&gt; wrote:<br> &gt; &gt; Dear Siping,<br> &gt; &gt;<br> </div><div class=3D"HOEnZb"><div class=3D"h5">&gt; &gt; The most likely exp= lanation is that when you shift the FFT by one bin, you<br> &gt; &gt; (presumably) just copy the real and imaginary parts, so the initi= al phase<br> &gt; &gt; of the respective components stays the same at the new frequency.= &gt; &gt; by the end of each block, the new signal will have been through a= &gt; &gt; number of cycles of each sinusoidal component, so the phase at th= e end of<br> &gt; &gt; the frequency-shifted block will not match up with the phase at t= he start<br> &gt; &gt; of the next block.<br> &gt; &gt;<br> &gt; &gt; Manipulating the phases so that the different components maintain= &gt; &gt; &quot;correct&quot; phase relationships with the signal component= s in subsequent<br> &gt; &gt; blocks could be done by converting the phase of each FFT bin to a= &gt; &gt; delay (rather than a phase angle), and calculating the phase for = &gt; &gt; shifted signal which would give the same time delay.<br> &gt; &gt;<br> &gt; &gt; There may be other issues too (e.g. the handling of the first and= last DFT<br> &gt; &gt; bins, where there&#39;s no phase information), but I would try se= tting the<br> &gt; &gt; phase such that the signal at the centre of each overlapped windo= w is<br> &gt; &gt; &quot;correct&quot;, and hope that the taper of the Hanning windo= w will minimise the<br> &gt; &gt; effect of any discontinuities near the ends of each block.<br> &gt; &gt;<br> &gt; &gt; Let us know how you get on!<br> &gt; &gt;<br> &gt; &gt; Steve Beet<br> &gt; &gt;<br> &gt; &gt;<br> &gt; &gt; On Wed, 23 Jan 2013 11:05:08 +0100<br> &gt; &gt; Zlatan Ribic &lt;<a href=3D"mailto:zlatan@xxxxxxxx">zlatan@xxxxxxxx= IBIC.COM</a>&gt; wrote:<br> &gt; &gt;<br> &gt; &gt; &gt; M. Hartley Jones: &quot;Frequency Shifter for &quot;Howl&quo= t; Suppression&quot;, Wireless<br> &gt; &gt; World, July 1973. 317-322<br> &gt; &gt; &gt;<br> &gt; &gt; &gt; =A0 ----- Original Message -----<br> &gt; &gt; &gt; =A0 From: Siping Tao<br> &gt; &gt; &gt; =A0 To: <a href=3D"mailto:AUDITORY@xxxxxxxx">AUDITORY= &gt; &gt; &gt; =A0 Sent: Wednesday, January 23, 2013 9:56 AM<br> &gt; &gt; &gt; =A0 Subject: Frequency shift to alleviate acoustic feedback<= &gt; &gt; &gt;<br> &gt; &gt; &gt;<br> &gt; &gt; &gt; =A0 Dear experts,<br> &gt; &gt; &gt;<br> &gt; &gt; &gt; =A0 Acoustic feedback can be removed by several methods: fre= quency shift,<br> &gt; &gt; phase shift, notch filter, adaptive cancellation. I tried the sim= &gt; &gt; method I thought, frqeuency shift. However, it&#39;s not easy as = I thought. In<br> &gt; &gt; realtime processing scenario, I need to process every 10ms audio = &gt; &gt; without significant delay, so I do the following implementation:<= &gt; &gt; &gt;<br> &gt; &gt; &gt; =A0 1. sampling rate is 16K, so I have 160 samples every 10m= &gt; &gt; &gt; =A0 2. do DFT for these 160 samples, the DFT length is 512, = pending zeros<br> &gt; &gt; since I only have 160 samples<br> &gt; &gt; &gt; =A0 3. shift the frequency by one fft coefficient, that is, = &gt; &gt; 16000/512=3D31.25Hz (DC is not shifted)<br> &gt; &gt; &gt; =A0 4. do IDFT<br> &gt; &gt; &gt;<br> &gt; &gt; &gt; =A0 After doing that, I can notice the spectrum is shifted i= n cool-edit,<br> &gt; &gt; but with some processing noise (not the artifacts due to frequenc= y shift).<br> &gt; &gt; I guess this noise is caused by different processing for successi= &gt; &gt; data, I am not sure here. However, I try to use overlap processin= g in my<br> &gt; &gt; code, hanning window, 50% overlap, then the processing noise is r= &gt; &gt; much. Unfortunately, I found that overlap processing sometimes ma= ke the<br> &gt; &gt; frequency shift useless (e.g. 75% overlap by blackman window), wh= at I mean<br> &gt; &gt; useless is I cannot notice spectrum shift in cool-edit.<br> &gt; &gt; &gt;<br> &gt; &gt; &gt; =A0 Can anybody help me to understand why overlap processing= &gt; &gt; frequency shift? Or point out the incorrect parts of my implement= &gt; &gt; &gt;<br> &gt; &gt; &gt; =A0 Thanks,<br> &gt; &gt; &gt; =A0 Siping<br> &gt; &gt;<br> This message came from the mail archive
{"url":"http://www.auditory.org/postings/2013/52.html","timestamp":"2014-04-20T00:38:06Z","content_type":null,"content_length":"22223","record_id":"<urn:uuid:ea0e4b83-9ce1-4f83-a886-140e18b0bfb3>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
The weight of the block in the drawing is 95.3 N. The The weight of the block in the drawing is 95.3 N. The coefficient of static... Home Tutors Physics The weight of the block in the... The weight of the block in the drawing is 95.3 N. The coefficient of static friction between the block and the vertical wall is 0.540. (a) What minimum force F is required to prevent the block from sliding down the wall? (Hint: The static frictional force exerted on the block is directed upward, parallel to the wall.) N 230.38N (b) What minimum force is required to start the block moving up the wall? (Hint: The static frictional force is now directed down the wall.) N 51.46N 12. 0/2 points All Submissions Notes In the drawing, the weight of the block on the table is 474 N and that of the hanging block is 185 N. Ignore all frictional effects, and assume the pulley to be massless. (a) Find the acceleration of the two blocks. b) Find the tension in the cord N 133.07 13. Notes A cable is lifting a construction worker and a crate, as the drawing shows. The weights of the worker and crate are 951 and 1490 N, respectively. The acceleration of the cable is 0.620 m/s2, upward. (a) What is the tension in the cable below the worker? (b) What is the tension in the cable above the worker? 14. Notes A box is sliding up an incline that makes an angle of 15.0Β° with respect to the horizontal. The coefficient of kinetic friction between the box and the surface of the incline is 0.180. The initial speed of the box at the bottom of the incline is 1.10 m/s. How far does the box travel along the incline before coming to rest? A falling skydiver has a mass of 125 kg. (a) What is the magnitude of the skydiver's acceleration when the upward force of air resistance has a magnitude that is equal to one-fourth of his weight? (b) After the parachute opens, the skydiver descends at a constant velocity. What is the force of air resistance (magnitude and direction) that acts on the skydiver? A rock of mass 33 kg accidentally breaks loose from the edge of a cliff and falls straight down. The magnitude of the air resistance that opposes its downward motion is 252 N. What is the magnitude of the acceleration of the rock? A 92.0-kg person stands on a scale in an elevator. What is the apparent weight when the elevator is (a) accelerating upward with an acceleration of 1.70 m/s2, (b) moving upward at a constant speed, and (c) accelerating downward with an acceleration of 1.20 m/s2? Your work in question(s) will also be submitted or saved. 18. –/1 points Notes Question part The steel I-beam in the drawing has a weight of 7.00 kN and is being lifted at a constant velocity. What is the tension in each cable attached to its ends? In a supermarket parking lot, an employee is pushing ten empty shopping carts, lined up in a straight line. The acceleration of the carts is 0.065 m/s2. The ground is level, and each cart has a mass of 24 kg. (a) What is the net force acting on any one of the carts? (b) Assuming friction is negligible, what is the force exerted by the fifth cart on the sixth cart? From the top of a tall building, a gun is fired. The bullet leaves the gun at a speed of 340 m/s, parallel to the ground. As the drawing shows, the bullet puts a hole in a window of another building and hits the wall that faces the window. (y = 0.46 m, and x = 6.4 m.) Using the data in the drawing, determine the distances D and H, which locate the point where the gun was fired. Assume that the bullet does not slow down as it passes through the window. H = m D = m The drawing shows a large cube (mass = 35 kg) being accelerated across a horizontal frictionless surface by a horizontal force P. A small cube (mass = 2.2 kg) is in contact with the front surface of the large cube and will slide downward unless P is sufficiently large. The coefficient of static friction between the cubes is 0.71. What is the smallest magnitude that P can have in order to keep the small cube from sliding downward? 2.4 N
{"url":"http://www.coursehero.com/tutors-problems/Physics/6341254-The-weight-of-the-block-in-the-drawing-is-953-N-The-coefficient-of-s/","timestamp":"2014-04-17T07:36:51Z","content_type":null,"content_length":"50377","record_id":"<urn:uuid:e1ef5860-b702-4a7f-84dd-7800c0a90d1c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimal BB AP shell weight & MV? in Battleship Vs Battleship Forum sergeante wrote: The sectional density is a de facto measure of form factor. sergeante wrote: All your approach does is take the volume of an ideal projectile -- a sphere -- and compare it to the mass of an actual one. No it does not. It takes weight & divides it by the cube of diameter (which would geometrically be a cube, not a sphere) to produce a RELATIVE proportional value of a shell's weight to diameter. sergeante wrote: It's not dimensionless at all, because the mass is a factor. Also, because you're using an ideal projectile, it's going to have the same volume for any given diameter, and proportionally the same difference in volume between one caliber and the next. That means your reference projectile could be anything of the same diameter, as long as it was a proportionally identical shape from caliber to caliber. YES IT IS. It does not take any real measure of a shells size & is NOT a representation of any real shape/dimension(s)/volume. It is a VERY SIMPLIFIED RELATIVE representation of RELATIVE size as a function of diameter. sergeante wrote: This in fact is the whole point behind using sectional density in ballistics -- it's a direct measure of comparative mass for a given size. There's no need to complicate things I am SIMPLIFYING things (the ONLY variables are diameter & weight), not complicating them. pfcem wrote:THE PROPORTIONA IS NOT A REPRESENTATION OF DENSITY! Then what is it? You take an ideal shape, which is the same volume for every projectile of a given diameter. Then you use it in a function with weight to get a number. That's just another way of calculating density. No it does not. It takes weight & divides it by the cube of diameter (which would geometrically be a cube, not a sphere) to produce a RELATIVE proportional value of a shell's weight to diameter. Machs nichts. Everything else in any formula of volume is a constant. The geometric expansion comes from the exponent ^3. Everything else is insignificant. You want to think in terms of cubes, I'll think in terms of spheres. It's fundamentally the same thing. YES IT IS. It does not take any real measure of a shells size & is NOT a representation of any real shape/dimension(s)/volume. It is a VERY SIMPLIFIED RELATIVE representation of RELATIVE size as a function of diameter. You have to plug in the projectile weight to get a number. Weight is (mostly) a function of shape. Sectional densisty would be just as revealing. A longer projectile has a higher weight and a higher sectional density. What else are you trying to find out? I am SIMPLIFYING things (the ONLY variables are diameter & weight), not complicating them. The only variables in sectional density are radius and weight. All you do by increasing the exponent is proportionally scale the result. It's not particularly invalid, but it's not particularly revealing either. pfcem wrote: spiffingchap wrote: No, it isn't. You're cubing something with units of length, to give a quantity with units of volume. You are then multiplying it by your "proportion" to give a quantity with units of mass. The "proportion" therefore has units of density (mass volume^-1). You hopefully see the problem - since the volume of the shell isn't its diameter cubed your "proportion" doesn't really measure anything. It measures a combination of the density of the shell and its cross-section, but it is impossible to tell from the number which is which. For instance a cuboid shell could be much less dense than a spherical shell and yet have a higher "proportion of diameter cubed". It's even possible to imagine a shell with an infinite length (but finite diameter) that would have an infinite weight and therefore an infinite (by your measure) density! YES IT IS! The problem is your inability to understand what a proportion is & is not representative of. If it were a proportion then multiplying it by any other quantity would return a quantity with the same units. This isn't even a debate you are just wrong. sergeante wrote: Then what is it? You take an ideal shape, which is the same volume for every projectile of a given diameter. Then you use it in a function with weight to get a number. That's just another way of calculating density. It is a proportion of the weight of a shell to the cube of it's diameter. It takes NO shape & is only a PROPORIONAL representation of RELATIVE volume. Density is a function of ACTUAL volume. sergeante wrote: Machs nichts. Everything else in any formula of volume is a constant. The geometric expansion comes from the exponent ^3. Everything else is insignificant. You want to think in terms of cubes, I'll think in terms of spheres. It's fundamentally the same thing. And without any (much less all) of those constants, simply cubing diameter DOES NOT give you an actual volume (a cube, the only 3-dimensional shape where cubing one dimension gives its volume, does not have a radius but rather equal length sides) & thus, as I keep explaining, means that it is NOT a calculation of density. sergeante wrote: You have to plug in the projectile weight to get a number. Weight is (mostly) a function of shape. Sectional densisty would be just as revealing. A longer projectile has a higher weight and a higher sectional density. What else are you trying to find out? The RELATIVE PROPORTIONAL weight of shells for quick & easy comparison with the RELATIVE PROPORTIONAL weight of other shells. sergeante wrote: The only variables in sectional density are radius and weight. All you do by increasing the exponent is proportionally scale the result. It's not particularly invalid, but it's not particularly revealing either. Actually if bother to look at the numbers, you SHOULD find them VERY revealing... It shows which shells are RELATIVELY light or heavy for their caliber vs that of any/all other shells. AND it shows how RELATIVELY similar some shells are to each other. For example; Seeing that the 1140 lb 12" AP Mark 18 has a value of 0.6597 & the 2700 lb 16" AP Mark 8 has a value of 0.6592 reveals just how PROPORTIONALLY similar they are (the proportional difference between them being just 1 to 2 lb) AND that a PROPORTIONALLY equivalent 14" 'super-heavy' shell would weigh 1808.8 - 1810.2 lb. The numbers also show the experimental US 3850 lb 18" AP shell (0.6602) & the 335 lb 8" AP Mark 21 (0.6543) to be PROPORTIONALLY similar as well but the 130 lb 6" AP Mark 35 (0.6019) to be PROPORTIONALLY lighter than the other 'super-heavy' shells. spiffingchap wrote: If it were a proportion then multiplying it by any other quantity would return a quantity with the same units. This isn't even a debate you are just wrong. No, YOU are wrong because none of these shells are cubes. Cubing diameter DOES NOT give you their volume & dividing their weight by their diameter cubed DOES NOT give you their density. Cubing diameter gives you something in length cubed, it has to, and that is a variation on volume. Dividing mass (weight in common speech) by something in length cubed gives you something in mass / length cubed, which is a variation on density. Thus "1140 lb 12" AP Mark 18 has a value of 0.6597" isn't true because that calculation does not give a dimensionless ratio, the value is 0.6597 lbs/ cu in, assuming your figure is correct. Like Tony said; what you proposed isn't invalid in the circs but there are other, more widely defined terms that will compare what you want just as well. Sectional density if you want to be all scientific or mass per inch of diameter if you just want a rough and ready comparison of a 14" with a 13.5" in the same way you can say there are "light" and "heavy" 13.5" shells. For that your data would be; 12" Mk 18 AP = 95 lb / in 16" Mk 8 AP = 168.75 lb / in Quite a difference now, because cubing one of the dimensions is bound to cube it's effect on the figure, meaning the further that dimension gets from unity the more it matters. Working in inches your figure will give some very odd results when you compare the 20mm, 25mm, 1.2", 37mm and 40mm shells. As the "normal" shell weight will be bigger as diameter rises I'm thinking your original question is going to have to be restricted to a calibre or tight range of calibres, which goes back to "Optimal for what?" pfcem wrote:It is a proportion of the weight of a shell to the cube of it's diameter. It takes NO shape & is only a PROPORIONAL representation of RELATIVE volume. Density is a function of ACTUAL You're not tracking what I'm getting at. All you're doing is (somewhat) normalizing sectional density over a range of projectile diameters. The difficulty with that is that projectiles can be designed considerably differently across the full range of calibers a navy might use. Examples of l/d ratios for interwar and wartime projectiles (using navweaps.com data): >16" : 4.00, 4.50 >14" : 4.00, 4.50 >12" : 4.50 >8" : 4.50 >6" : 4.50 >18.1" : 4.25 >16" : 4.28 >14" : 4.29 >8" : 4.46 >6" : 4.45 >16" : 4.14 >15" : 3.33, 4.73 >14" : 4.4 >8" : 4.5 >6" : (not in the data) >15" : 4.39 >11" : 3.75, 4.45 >8" : 4.4 >15" : 4.46 >12.6" : 4.46 (apparently inferred from 15") >8" : 4.17 >6" : 4.13 >15" : 4.99 >13" : 5.00 >8" : 4.78 >6" : 4.77 So the intention of comparing projectile forms across a wide caliber range hangs up fatally on the different imperatives perceived for cruiser and battleship guns. And without any (much less all) of those constants, simply cubing diameter DOES NOT give you an actual volume (a cube, the only 3-dimensional shape where cubing one dimension gives its volume, does not have a radius but rather equal length sides) & thus, as I keep explaining, means that it is NOT a calculation of density. No. It's a calculation of a reference volume. Including the weight gives you a density relative to that volume. Whether or not it's the density relative to the actual volume is irrelevant, just as sectional densisty doesn't measure actual density at any section, but the weight of the projectile compared to its diameter. The RELATIVE PROPORTIONAL weight of shells for quick & easy comparison with the RELATIVE PROPORTIONAL weight of other shells. Fine. But it has no practical utility, because sectional densisty works just a well for projectile of the same diameter. On the other hand, comparing across caliber ranges fails due to differences in projectile designs across calibers, even within the same navy. Actually if bother to look at the numbers, you SHOULD find them VERY revealing... It shows which shells are RELATIVELY light or heavy for their caliber vs that of any/all other shells. AND it shows how RELATIVELY similar some shells are to each other. For example; Seeing that the 1140 lb 12" AP Mark 18 has a value of 0.6597 & the 2700 lb 16" AP Mark 8 has a value of 0.6592 reveals just how PROPORTIONALLY similar they are (the proportional difference between them being just 1 to 2 lb) AND that a PROPORTIONALLY equivalent 14" 'super-heavy' shell would weigh 1808.8 - 1810.2 lb. The numbers also show the experimental US 3850 lb 18" AP shell (0.6602) & the 335 lb 8" AP Mark 21 (0.6543) to be PROPORTIONALLY similar as well but the 130 lb 6" AP Mark 35 (0.6019) to be PROPORTIONALLY lighter than the other 'super-heavy' shells. See above. Sectional density is sufficient within any given caliber, while l/d ratios are probably more revealing across caliber ranges. ...I wonder if maybe we're getting a bit too heated about the details here. From my reading, both sectional density and d^3 have their uses for comparing shells. Sectional density, when combined with l/d ratio, is needed for a detailed comparison of shells at the level of engineering analysis done to calculate penetration results based on proving ground results, to extrapolate performance and thus reduce the number of (very expensive) proving ground tests that have to be done. It gives much more confidence in the results in marginal cases, where a few tenths of an inch of armor, a few pounds of shell weight, or a few FPS of muzzle velocity may make a significant difference in performance. Diameter cubed, however, is useful for taking the numbers of a shell of known performance and making a quick "back of the napkin" estimate of the performance of a shell of unknown performance. This is useful for such things as initial estimates of the armor required to defeat your new gun, based on scaling up the current shell to match, for example, and thus giving you a "ballpark" figure for how thick the armor you test it against should be to set its maximum penetration in the smallest number of tests. (It would, for example, let you say, "OK, we know for a fact this new six-incher should punch through at least 1.724 times as much armor as the five-incher did, so let's start running our test series at 1.7 times the armor thickness as the five-incher penetrated, and go from there.") It's also somewhat useful for giving yourself a rough estimate of what potential threats would be able to penetrate, and thus providing the designers with a ballpark figure for the required armor thickness, which then can be refined with more detailed information (if available) and the ship's weight budget. So both methods have their uses; d^3 is a "quick and dirty" way of comparing guns on the assumption of similarly-designed shells, while sectional density is a much more precise, but also more time-consuming (and, for shells where design data is limited, VERY difficult) method. (And now, I await the equivalent of a "Shut up, Wesley!" from both parties, who were happily enjoying their argument... sergeante wrote: All you're doing is (somewhat) normalizing sectional density over a range of projectile diameters. The difficulty with that is that projectiles can be designed considerably differently across the full range of calibers a navy might use. No I am not. sergeante wrote: No. It's a calculation of a reference volume. Including the weight gives you a density relative to that volume. Whether or not it's the density relative to the actual volume is irrelevant, just as sectional densisty doesn't measure actual density at any section, but the weight of the projectile compared to its diameter. Quite the opposite. That it is not a real volume means that it is NOT a calculation of density. I just love how your are able to get that "sectional densisty doesn't measure actual density" but unable to grasp that dividing the cube of a projectile's diameter (which is NOT a measure/calculation of its volume becasue it is NOT a cube) doesn't measure actual density. sergeante wrote: Fine. But it has no practical utility, because sectional densisty works just a well for projectile of the same diameter. On the other hand, comparing across caliber ranges fails due to differences in projectile designs across calibers, even within the same navy. It does ecatly what it is intended to do - which is show the RELATIVE weight of projectiles for quick & easy comparision. With it you can easily see which projectiles are RELATIVELY heavy/light for their caliber compared to other projectiles. With it you can see how many projectiles of different calibers are relatively similar to each other (at least in terms of their RELATIVE weights) & with it you can calculate what weight of projectile of another caliber would be if said projectile was proportionally equivalent to that of a known projectile (such as that a 14" US super-heavy projectile proportionally equivalent to the US 12" & 16" super-heavy projectiles would weigh 1808.8 - 1810.2 lb or that British 14" & 16" projectiles proportionally equivalent to the British 1938 lb 15" shell would be 1575.67 lb & 2352.01 lb respectively). It doesn't fail comparing across caliber ranges due to differences in projectile designs across calibers, quite the opposite - is SHOWS relative similarities & differences in projectile designs. sergeante wrote: Sectional density is sufficient within any given caliber, while l/d ratios are probably more revealing across caliber ranges. pfcem wrote:It doesn't fail comparing across caliber ranges due to differences in projectile designs across calibers, quite the opposite - is SHOWS relative similarities & differences in projectile designs. Like I said, try it in inches down at LAA calibres. That exaggerates the effect but that effect is still there anytime the calibre is different. If you like go metric but use metres for big guns. What it leaves out besides is why a given shell is heavier - is it longer but about the same proportion of burster charge or is it the same size but more steel and less explosive? Which is "Optimal" will again depend on "What for?" ChrisPat wrote: Like I said, try it in inches down at LAA calibres. I have. Here are a few examples. 0.508 40mm Bofors & US 1.985 lb shell 0.689 US 1.1" & US 0.917 lb shell 0.555 20mm Oerlikon & US 0.271 lb shell In fact I have done it for small arms. If you try it you might learn something about them... ChrisPat wrote: That exaggerates the effect but that effect is still there anytime the calibre is different. What "effect"? ChrisPat wrote: If you like go metric but use metres for big guns. Being a unitless proportion, you can use any units of measure for projectile weight/mass OR dimeter you want as long as you use the same ones for all those you are comparing. ChrisPat wrote: What it leaves out besides is why a given shell is heavier - is it longer but about the same proportion of burster charge or is it the same size but more steel and less explosive? It isn't meant to determine why a given shell is heavier or lighter - it is just a quick & easy calculation to show which are relatively/propotionally heavier or lighter. ChrisPat wrote: Which is "Optimal" will again depend on "What for?" I thought I made that clear in the lead post. I assume most here understand what BB AP shells are/were for...AND that what is "optimal" depends on which "what for"/characteristics they emphasize over pfcem wrote:In fact I have done it for small arms. If you try it you might learn something about them... Small arms rounds are typically some kind of shot. Burster size doesn't occur, which is to say the main way a small arms bullet gets heavier is by getting longer. That eliminates one of the main variables that your method ignores so should make your method more appropriate. What "effect"? That you are comparing weight with three dimensional size in some way. A rise in those three dimensions means a cubical type rise in volume and therefore weight if (big if) construction remains the same. Hence a rise in calibre shifts the "normal" weight for size, and comparing shell weights for different calibres gets more and more difficult. Being a unitless proportion, you can use any units of measure for projectile weight/mass OR dimeter you want as long as you use the same ones for all those you are comparing. Your measure is not unitless. Mass (kg) divided by diammeter (m) cubed (m^3) gives a figure in kg/m^3, subject to the limits of this keyboard. Just because something is not density does not mean it cannot have the same units, just because it does have those units doesn't mean it is density. It isn't meant to determine why a given shell is heavier or lighter - it is just a quick & easy calculation to show which are relatively/propotionally heavier or lighter. But then you can have two shells of equally "heavy" weight with very different intent and performance. The second RN 13.5" AP shells were "heavy", the latest USN 16" were "heavy", either could be optimal but they are very different in design and effect. I assume most here understand what BB AP shells are/were for...AND that what is "optimal" depends on which "what for"/characteristics they emphasize over others. In which case you ruled out an agreed answer in that same first post. Most here might understand what battleship AP shells are for but few can state it simply and those that do tend to get little to no agreement. pfcem wrote: Quite the opposite. That it is not a real volume means that it is NOT a calculation of density. I just love how your are able to get that "sectional densisty doesn't measure actual density" but unable to grasp that dividing the cube of a projectile's diameter (which is NOT a measure/ calculation of its volume becasue it is NOT a cube) doesn't measure actual density. When you take the cube of the projectile diameter, you're creating a standard reference volume for that diameter. It's precisely the same for every projectile of a given diameter. When you compare the weight of a given projectile to this reference volume, what you are doing is creating a ratio of the weight of the projectile to the reference volume. What is the ratio of weight to a reference volume? That's right -- density. Whether it's pounds per cubic inch, grams per cubic centimeter, or the weight of a projectile to a reference volume. No, it's not the actual density of the projectile. But it's the relative density of the given projectile to other projectiles, if you smushed them all so that they fit precisely into the reference volume. And this has precisely the same validity as sectional density. Sectional density is also not a real density. It's a ratio of weight to a reference area. The difference is merely that the reference area in sectional density is not a cube, but a disk of a given radius. For example, let's look at 16" projectiles (dimensions figured in pounds and inches): US Mk 3 : 2110 US Mk 5 : 2240 US Mk 8 : 2700 UK Mk IB : 2048 Jap Type 88 : 2205 Jap Type 91 : 2249 Sectional density reference area = 8 * 8 * 3.14159 = 201 Sectional densities (lb/sq-in): US Mk 3 : 10.50 US Mk 5 : 11.14 US Mk 8 : 13.43 UK Mk IB : 10.19 Jap Type 88 : 10.97 Jap Type 91 : 11.19 Now. Let's establish the UK Mk IB as our reference. The relative sectional densities are: US Mk 3 : 1.03 US Mk 5 : 1.09 US Mk 8 : 1.32 UK Mk IB : 1.00 Jap Type 88 : 1.08 Jap Type 91 : 1.10 Now pfcem's way... Cubic reference volume = 16 * 16 * 16 = 4096 Weight relative to reference volume (lb/cu-in): US Mk 3 : 0.52 US Mk 5 : 0.55 US Mk 8 : 0.66 UK Mk IB : 0.5 Jap Type 88 : 0.54 Jap Type 91 : 0.55 Once again, let's establish the UK Mk IB as our reference. The relative sectional densities are: US Mk 3 : 1.04 US Mk 5 : 1.10 US Mk 8 : 1.32 UK Mk IB : 1.00 Jap Type 88 : 1.08 Jap Type 91 : 1.10 There's no difference, except for rounding errors caused by the use of smaller magnitudes. See, all you're doing is exchanging a standard reference volume height for one scaled to the projectile diameter. Interesting thought, but it's no more revealing than sectional density. It doesn't fail comparing across caliber ranges due to differences in projectile designs across calibers, quite the opposite - is SHOWS relative similarities & differences in projectile designs. Sorry, but that does not turn out to be the case. It shows that something is different in the relative amounts and distribution of mass. But it doesn't tell you what, any more than sectional density would. For example, you can get the same relative density (using either sectional density or your method) for two projectiles, but one could have a long body and a blunt ogive, while the other has a somewhat shorter body and a much finer ogive. Just knowing the relative mass of the projectile doesn't tell you one damn thing about how it's designed. ChrisPat wrote: Small arms rounds are typically some kind of shot. Burster size doesn't occur, which is to say the main way a small arms bullet gets heavier is by getting longer. That eliminates one of the main variables that your method ignores so should make your method more appropriate. I am not trying to determine the size of the burster - JUST THE RELATIVE/PROPORTIONAL LIGHTNESS/HEAVINESS of the projectile compared to other projectiles. ChrisPat wrote: That you are comparing weight with three dimensional size in some way. A rise in those three dimensions means a cubical type rise in volume and therefore weight if (big if) construction remains the same. Hence a rise in calibre shifts the "normal" weight for size, and comparing shell weights for different calibres gets more and more difficult. No, I am comparing the weight divided by diameter cubed. ChrisPat wrote: Your measure is not unitless. Mass (kg) divided by diammeter (m) cubed (m^3) gives a figure in kg/m^3, subject to the limits of this keyboard. Just because something is not density does not mean it cannot have the same units, just because it does have those units doesn't mean it is density. Diameter cubed is NOT the volume of a projectile (unless the projectile happens to be A CUBE), IT IS NOT DENSITY as some have insisted it is. ChrisPat wrote: But then you can have two shells of equally "heavy" weight with very different intent and performance. The second RN 13.5" AP shells were "heavy", the latest USN 16" were "heavy", either could be optimal but they are very different in design and effect. I am not trying to determine intent and performance - JUST THE RELATIVE/PROPORTIONAL LIGHTNESS/HEAVINESS of the projectile compared compared to other projectiles. ChrisPat wrote: In which case you ruled out an agreed answer in that same first post. Most here might understand what battleship AP shells are for but few can state it simply and those that do tend to get little to no agreement. Who said anything about an agreed answer? I was & am VERY CLEARLY asking for people's OPINIONS & assume people's opinions will vary. sergeante wrote: When you take the cube of the projectile diameter, you're creating a standard reference volume for that diameter. It's precisely the same for every projectile of a given diameter. When you compare the weight of a given projectile to this reference volume, what you are doing is creating a ratio of the weight of the projectile to the reference volume. What is the ratio of weight to a reference volume? That's right -- density. Whether it's pounds per cubic inch, grams per cubic centimeter, or the weight of a projectile to a reference volume. No, it's not the actual density of the projectile. But it's the relative density of the given projectile to other projectiles, if you smushed them all so that they fit precisely into the reference volume. And this has precisely the same validity as sectional density. Sectional density is also not a real density. It's a ratio of weight to a reference area. The difference is merely that the reference area in sectional density is not a cube, but a disk of a given radius. For example, let's look at 16" projectiles (dimensions figured in pounds and inches): US Mk 3 : 2110 US Mk 5 : 2240 US Mk 8 : 2700 UK Mk IB : 2048 Jap Type 88 : 2205 Jap Type 91 : 2249 Sectional density reference area = 8 * 8 * 3.14159 = 201 Sectional densities (lb/sq-in): US Mk 3 : 10.50 US Mk 5 : 11.14 US Mk 8 : 13.43 UK Mk IB : 10.19 Jap Type 88 : 10.97 Jap Type 91 : 11.19 Now. Let's establish the UK Mk IB as our reference. The relative sectional densities are: US Mk 3 : 1.03 US Mk 5 : 1.09 US Mk 8 : 1.32 UK Mk IB : 1.00 Jap Type 88 : 1.08 Jap Type 91 : 1.10 Now pfcem's way... Cubic reference volume = 16 * 16 * 16 = 4096 Weight relative to reference volume (lb/cu-in): US Mk 3 : 0.52 US Mk 5 : 0.55 US Mk 8 : 0.66 UK Mk IB : 0.5 Jap Type 88 : 0.54 Jap Type 91 : 0.55 Once again, let's establish the UK Mk IB as our reference. The relative sectional densities are: US Mk 3 : 1.04 US Mk 5 : 1.10 US Mk 8 : 1.32 UK Mk IB : 1.00 Jap Type 88 : 1.08 Jap Type 91 : 1.10 There's no difference, except for rounding errors caused by the use of smaller magnitudes. See, all you're doing is exchanging a standard reference volume height for one scaled to the projectile diameter. Interesting thought, but it's no more revealing than sectional density. Sectional density is of the ACTUALL cross section. The ONLY time the they are the same is if the diameter is EXACTLY pi...my method is DIAMETER cubed, not radius cubed. I never said it was more revealing than sectional density but is it a lot quicker & easier to do - it can even be done on paper (not that anybody does math on paper these days) without having to know what pi is. sergeante wrote: Sorry, but that does not turn out to be the case. It shows that something is different in the relative amounts and distribution of mass. But it doesn't tell you what, any more than sectional density would. For example, you can get the same relative density (using either sectional density or your method) for two projectiles, but one could have a long body and a blunt ogive, while the other has a somewhat shorter body and a much finer ogive. Just knowing the relative mass of the projectile doesn't tell you one damn thing about how it's designed. You really are trying WAY to hard to insist that it is intended to/should show something more than it is. You COULD have done the calculation for every BB AP shell used from WWI through WWII in less time & actually seen how quick, easy & useful it is in less time than you have taken is all the BS. pfcem wrote:Sectional density is of the ACTUALL cross section. And? It's still a reference gauge consistent for all shells of the same diameter. The ONLY time the they are the same is if the diameter is EXACTLY pi...my method is DIAMETER cubed, not radius cubed. If the diameter is exactly pi? The diameter is never pi. Pi is the ratio of the diameter to the circumference. It is a dimensionless number that never changes. It is always the same value. I know it out to five digits without even looking it up: 3.14159 Whoever taught you plane geometry needs to give you your money back. I never said it was more revealing than sectional density but is it a lot quicker & easier to do - it can even be done on paper (not that anybody does math on paper these days) without having to know what pi is. Well, I can understand why knowing what pi is can be an issue for you, in particular. But everyone else knows what it is, and has no problem using it in calculations. You really are trying WAY to hard to insist that it is intended to/should show something more than it is. You COULD have done the calculation for every BB AP shell used from WWI through WWII in less time & actually seen how quick, easy & useful it is in less time than you have taken is all the BS. I can't really understand what your intentions are. There is no reason not to use sectional density if all you want to do is compare a few projectiles at the most basic level. But neither your approach nor sectional density really tell you anything about the projectiles' relative design merits. All it really tells you is how much mass is hitting the target inside a given area. Beyond that, one could have projectiles of precisely the same sectional or cubic density that have significantly different designs. Or projectiles of significantly different sectional or cubic densities that are intended to do different things, and therefore just aren't comparable on that basis alone. As for the "BS" as you call it, it shows that your approach doesn't tell anyone anything that sectional density can't. And I really don't see what is so bothersome and time consuming about squaring a radius and multiplying by pi, compared to cubing a diameter. If you see a significant difference in time and effort -- especially with modern computing tools, but even with a paper and pencil -- perhaps doing simple arithmetic is not for you at any level of complexity. pfcem wrote:I am not trying to determine the size of the burster - JUST THE RELATIVE/PROPORTIONAL LIGHTNESS/HEAVINESS of the projectile compared to other projectiles. In which case you aren't comparing anything meaningful. No, I am comparing the weight divided by diameter cubed. Yes, diameter cubed is a volume. "Weight" btw is more properly mass, the imperial measure of weight being the slug and the SI one the Newton. Diameter cubed is NOT the volume of a projectile (unless the projectile happens to be A CUBE), Cubes don't have diameters. More to the point I didn't say diameter cubed is the volume, I said it has units of volume. Iy appears that you aren't familiar with calculating units. IT IS NOT DENSITY as some have insisted it is. I don't think anyone has, I (and probably others) have pointed out a mass divided by a volume measure will have units of density. I am not trying to determine intent and performance - JUST THE RELATIVE/PROPORTIONAL LIGHTNESS/HEAVINESS of the projectile compared compared to other projectiles. And an "Optimum", which begs the question "For what?", which in turn demands that the intent be known. Who said anything about an agreed answer? I was & am VERY CLEARLY asking for people's OPINIONS & assume people's opinions will vary. OK, my clear and unambiguous opinion: Depends. If you want something a bit more likely to provoke useful discussion be more specific. What for? pfcem wrote:...my method is DIAMETER cubed, not radius cubed. I never said it was more revealing than sectional density but is it a lot quicker & easier to do - it can even be done on paper (not that anybody does math on paper these days) without having to know what pi is. Aside from Tony's valid points on this post neither method requires you to know pi, nor does the twofold difference between radius and diameter matter - both are constants for all the projectiles and both get cancelled out when you divide through by your reference projectile. Mass divided by Diameter cubed; (m / 8r^3) / (M / 8R^3) = mR^3 / Mr^3 Mass divided by Sectional sensity; (m/pi r^2) / M/pi R^2) = mR^2 / Mr^2 sergeante wrote: And? It's still a reference gauge consistent for all shells of the same diameter. Just as I said, it the ACTUALL cross section of the projectile - something that can actually be measured AND HAS UNITS. Diameter cubed OTOH is NOT a real/actual volume - it is a RELATIVE representation of size. sergeante wrote: If the diameter is exactly pi? The diameter is never pi. Pi is the ratio of the diameter to the circumference. It is a dimensionless number that never changes. It is always the same value. I know it out to five digits without even looking it up: 3.14159. Yes the only time the calculation for diameter cubed is the same as the calculation for sectional desity are the same is when said diameter is exactly pi. The calculation is INTENDED to be quick & easy. Diameter cubed (for a two digit diameter as BB guns are) takes just 3 or 4 button pushes (depeding on if cubed is a 1st or 2nd function on the calculator you are using), sectional density takes 5 or 6 (depeding on if pi is a 1st or 2nd function on the calculator you are using) PLUS with diameter cubed being a whole number for many guns it is MUCH easier to remember (re-enter for later calculations without having to calculate it again) that sectional density. sergeante wrote: Well, I can understand why knowing what pi is can be an issue for you, in particular. But everyone else knows what it is, and has no problem using it in calculations. Knowing the value of pi (to how ever many digits one can/choses to remember) is not the issue, doing math ON PAPER while using pi is. Try it...not once by several times. And by the way, I do PLENTY of complicated calculations. Enough to have learned that anything you can do to make whatever calculating you do quicker &/or easier is well worth the time & effort it saves you when you do lots of calculating. sergeante wrote: I can't really understand what your intentions are. There is no reason not to use sectional density if all you want to do is compare a few projectiles at the most basic level. But neither your approach nor sectional density really tell you anything about the projectiles' relative design merits. All it really tells you is how much mass is hitting the target inside a given area. Beyond that, one could have projectiles of precisely the same sectional or cubic density that have significantly different designs. Or projectiles of significantly different sectional or cubic densities that are intended to do different things, and therefore just aren't comparable on that basis alone. The intention is to have a QUICK & EASY way to compare the RELATIVE/PROPORTIONAL weight of various projectiles. Diameter cubed is a QUICKER & EASIER calculation than sectional density AND if you had even bothered to look at the examples I have given &/or bother to do any yourself you would see something about the resulting numbers... It isn't about relative design merits or how much mass is hitting the target inside a given area or what the projectiles are intended to do. It IS about the RELATIVE/PROPORTIONAL weight of various sergeante wrote: As for the "BS" as you call it, it shows that your approach doesn't tell anyone anything that sectional density can't. And I really don't see what is so bothersome and time consuming about squaring a radius and multiplying by pi, compared to cubing a diameter. If you see a significant difference in time and effort -- especially with modern computing tools, but even with a paper and pencil -- perhaps doing simple arithmetic is not for you at any level of complexity. It is BS because you have wasted more time & effort to try & make it out to be things it is not intended to be & completely ignoring what it is that it would take to look at the examples I have give & do many, many additional calculations yourself (for what ever guns you wish) & actually have learned how quick & easy it is AND how useful it is. ChrisPat wrote: In which case you aren't comparing anything meaningful. Quite the opposite. Look at the examples I have given &/or do some calculations yourself. ChrisPat wrote: Yes, diameter cubed is a volume. No, diameter cubed is NOT a volume. Cubes DO NOT have diameters... ChrisPat wrote: Cubes don't have diameters. More to the point I didn't say diameter cubed is the volume, I said it has units of volume. Iy appears that you aren't familiar with calculating units. Well at least now you recognize that cubes don't have diameters... ChrisPat wrote: And an "Optimum", which begs the question "For what?", which in turn demands that the intent be known. Good God could you be more childish? "For what" YOU want a BB AP shell to do. For example, the US wanted AP shells with superior deck penetration so it chose "super-heavy" shells that were RELATIVELY/PROPORTIONALLY heavier (& lower MV) than previous/other AP shells. ChrisPat wrote: OK, my clear and unambiguous opinion: Depends. If you want something a bit more likely to provoke useful discussion be more specific. What for? READ THE LEAD POST! I already stated that it depends. If YOU don't know what BB AP shells are/were meant to do &/or don't know what YOU would want BB AP shells to do, don't respond.
{"url":"http://warships1discussionboards.yuku.com/topic/22031/Optimal-BB-AP-shell-weight-MV?page=3","timestamp":"2014-04-18T21:37:09Z","content_type":null,"content_length":"127948","record_id":"<urn:uuid:07b4a6d0-0959-4663-84b2-ccb3be8139db>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Matches for: Clay Mathematics Proceedings 2012; 276 pp; softcover Volume: 16 ISBN-10: 0-8218-6864-0 ISBN-13: 978-0-8218-6864-5 List Price: US$79 Member Price: US$63.20 Order Code: CMIP/16 Luis SantalΓ³ Winter Schools are organized yearly by the Mathematics Department and the SantalΓ³ Mathematical Research Institute of the School of Exact and Natural Sciences of the University of Buenos Aires (FCEN). This volume contains the proceedings of the third Luis SantalΓ³ Winter School which was devoted to noncommutative geometry and held at FCEN July 26-August 6, 2010. Topics in this volume concern noncommutative geometry in a broad sense, encompassing various mathematical and physical theories that incorporate geometric ideas to the study of noncommutative phenomena. It explores connections with several areas including algebra, analysis, geometry, topology and mathematical physics. Bursztyn and Waldmann discuss the classification of star products of Poisson structures up to Morita equivalence. Tsygan explains the connections between Kontsevich's formality theorem, noncommutative calculus, operads and index theory. Hoefel presents a concrete elementary construction in operad theory. Meyer introduces the subject of \(\mathrm{C}^*\)-algebraic crossed products. Rosenberg introduces Kasparov's \(KK\)-theory and noncommutative tori and includes a discussion of the Baum-Connes conjecture for \(K\)-theory of crossed products, among other topics. Lafont, Ortiz, and SΓ‘nchez-GarcΓ­a carry out a concrete computation in connection with the Baum-Connes conjecture. Zuk presents some remarkable groups produced by finite automata. Mesland discusses spectral triples and the Kasparov product in \(KK\)-theory. Trinchero explores the connections between Connes' noncommutative geometry and quantum field theory. Karoubi demonstrates a construction of twisted \(K\) -theory by means of twisted bundles. Tabuada surveys the theory of noncommutative motives. Titles in this series are co-published with the Clay Mathematics Institute (Cambridge, MA). Graduate students and research mathematicians interested in various aspects of noncommutative geometry.
{"url":"http://ams.org/bookstore?fn=20&arg1=whatsnew&ikey=CMIP-16","timestamp":"2014-04-17T22:43:11Z","content_type":null,"content_length":"15980","record_id":"<urn:uuid:86f175a4-4f79-4ca9-9143-e430d1938a93>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Fraction Word Problems Math Practice Online > free > lessons > Florida > 6th grade > Fraction Word Problems If your child needs math practice, click here. Fraction Word Problems Solving word problems that involve fraction addition and subtraction. You may need pencil and paper. ┃Sample Problems for Fraction Word Problems β”‚Lesson for Fraction Word Problems┃ This topic aligns to the following state standards Grade 4: Num 4. Explains and demonstrates the addition and subtraction of common fractions using concrete materials, drawings, story problems, and algorithms. Grade 4: Num 1. Uses problem-solving strategies to determine the operation(s) needed to solve one- and two- step problems involving addition, subtraction, multiplication, and division of whole numbers, and addition and subtraction of decimals and fractions. Grade 4: Num 1. Solves real-world problems involving addition, subtraction, multiplication, and division of whole numbers, and addition and subtraction of decimals and fractions using an appropriate method (for example, mental math, pencil and paper, calculator). Grade 4: Num 5. Solves real-world problems involving the addition or subtraction of decimals (to hundredths) or common fractions with like or unlike denominators. Grade 5: Num 1. Solves real-world problems involving addition, subtraction, multiplication, and division of whole numbers, and addition, subtraction, and multiplication of decimals, fractions, and mixed numbers using an appropriate method (for example, mental math, pencil and paper, calculator). Grade 6: Num 1. Knows the appropriate operations to solve real-world problems involving whole numbers, decimals, and fractions. Grade 6: Num 2. Solves real-world problems involving whole numbers, fractions, decimals, and common percents using one or two-step problems. Grade 7: Num 1. Knows the appropriate operation to solve real-world problems involving fractions, decimals, and integers. Grade 7: Num 1. Solves multi-step real-world problems involving whole numbers, fractions or decimals using appropriate methods of computation, such as mental computation, paper and pencil, and Grade 8: Num 1. Solves multi-step real-world problems involving fractions, decimals, and integers using appropriate methods of computation, such as mental computation, paper and pencil, and Copyright Accurate Learning Systems Corporation 2008. MathScore is a registered trademark.
{"url":"http://www.mathscore.com/math/free/lessons/Florida/6th_grade/Fraction_Word_Problems.html","timestamp":"2014-04-16T22:05:40Z","content_type":null,"content_length":"3816","record_id":"<urn:uuid:77b98094-e22f-463b-ab38-59d666b5c1ef>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Analysis of Algorithms 1. Analysis of Algorithms This chapter considers the general motivations for algorithmic analysis and relationships among various approaches to studying performance characteristics of algorithms. 1.1 Why Analyze an Algorithm? The most straightforward reason for analyzing an algorithm is to discover its characteristics in order to evaluate its suitability for various applications or compare it with other algorithms for the same application. Moreover, the analysis of an algorithm can help us understand it better, and can suggest informed improvements. Algorithms tend to become shorter, simpler, and more elegant during the analysis process. 1.2 Computational Complexity. The branch of theoretical computer science where the goal is to classify algorithms according to their efficiency and computational problems according to their inherent difficulty is known as computational complexity . Paradoxically, such classifications are typically not useful for predicting performance or for comparing algorithms in practical applications because they focus on order-of-growth worst-case performance. In this book, we focus on analyses that be used to predict performance and compare algorithms. 1.3 Analysis of Algorithms. A complete analysis of the running time of an algorithm involves the following steps: β€’ Implement the algorithm completely. β€’ Determine the time required for each basic operation. β€’ Identify unknown quantities that can be used to describe the frequency of execution of the basic operations. β€’ Develop a realistic model for the input to the program. β€’ Analyze the unknown quantities, assuming the modelled input. β€’ Calculate the total running time by multiplying the time by the frequency for each operation, then adding all the products. Classical algorithm analysis on early computers could result in exact predictions of running times. Modern systems and algorithms are much more complex, but modern analyses are informed by the idea that exact analysis of this sort could be performed in principle 1.4 Average-Case Analysis. Elementary probability theory gives a number of different ways to compute the average value of a quantity. While they are quite closely related, it will be convenient for us to explicitly identify two different approaches to compute the mean. β€’ Distributional. Let $\Pi_N$ be the number of possible inputs of size $N$ and $\Pi_{Nk}$ be the number of inputs of size $N$ that cause the algorithm to have cost $k$, so that $\Pi_N=\sum_k\Pi_ {Nk}$. Then the probability that the cost is $k$ is $\Pi_{Nk}/\Pi_N$ and the expected cost is $${1\over \Pi_N}\sum_k k\Pi_{Nk}.$$ The analysis depends on "counting." How many inputs are there of size $N$ and how many inputs of size $N$ cause the algorithm to have cost $k$? These are the steps to compute the probability that the cost is $k$, so this approach is perhaps the most direct from elementary probability theory. β€’ Cumulative. Let $\Sigma_N$ be the total (or cumulated) cost of the algorithm on all inputs of size $N$. (That is, $\Sigma_N=\sum_kk\Pi_{Nk}$, but the point is that it is not necessary to compute $\Sigma_N$ in that way.) Then the average cost is simply $\Sigma_N/\Pi_N$. The analysis depends on a less specific counting problem: what is the total cost of the algorithm, on all inputs? We will be using general tools that make this approach very attractive. The distributional approach gives complete information, which can be used directly to compute the standard deviation and other moments. Indirect (often simpler) methods are also available for computing moments when using the other approach, as we will see. In this book, we consider both approaches, though our tendency will be towards the cumulative method, which ultimately allows us to consider the analysis of algorithms in terms of combinatorial properties of basic data structures. 1.5 Example: Analysis of quicksort. The classical quicksort algorithm was invented by C.A.R. Hoare in 1962: public class Quick private static int partition(Comparable[] a, int lo, int hi) int i = lo, j = hi+1; while (true) while (less(a[++i], a[lo])) if (i == hi) break; while (less(a[lo], a[--j])) if (j == lo) break; if (i >= j) break; exch(a, i, j); exch(a, lo, j); return j; private static void sort(Comparable[] a, int lo, int hi) if (hi <= lo) return; int j = partition(a, lo, hi); sort(a, lo, j-1); sort(a, j+1, hi); To analyze this algorithm, we start by defining a cost model (running time) and an input model (randomly ordered distinct elements). To separate the analysis from the implementation, we define $C_N$ to be the number of compares to sort $N$ elements and analyze $C_N$ (hypothesizing that the running time for any implementation will be $\sim aC_N$ for some implementation-dependent constant $a$). Note the following properties of the algorithm: β€’ $N+1$ compares are used for partitioning. β€’ The probability that the partitioning element is the $k$th smallest is $1/N$ for $k$ between $0$ and $N-1$. β€’ The size of the two subarrays to be sorted in that case are $k$ and $N-k-1$. β€’ The two subarrays are randomly ordered after partitioning. These imply a mathematical expression (a recurrence relation ) that derives directly from the recursive program $$C_N = N+1 + \sum_{0\le k \le N-1}{1\over N}(C_k + C_{N-k-1})$$ This equation is easily solved with a series of simple albeit mysterious algebraic steps. First, apply symmetry, multiply by $N$, subtract the same equation for $N-1$ and rearrange terms to get a simpler recurrence. $$\eqalign{ C_N &= N+1 + {2\over N}\sum_{0\le k \le N-1}C_k\\ NC_N &= N(N+1) + 2\sum_{0\le k \le N-1}C_k\\ NC_N - (N-1)C_{N-1} &= N(N+1) -(N-1)N + 2C_{N-1}\\ NC_N &= (N+1)C_{N-1} +2N\\ }$$ Note that this simpler recurrence gives an efficient algorithm to compute the exact answer. To solve it, divide both sides by $N(N+1)$ and telescope. $$\eqalign{ NC_N &= (N+1)C_{N-1} + 2N {\quad\rm for\quad} N > 1 {\quad\rm with\quad} C_1 = 2\\ {C_N\over N+1} &= {C_{N-1}\over N} + {2\over N+1}\\ &= {C_{N-2}\over N-1} + {2\over N} + {2\over N+1}\\ &= 2H_{N+1} - 2\\ C_N &= 2(N+1)H_{N+1} - 2(N+1) = 2(N+1)H_N - 2N. }$$ The result is an exact expression in terms of the Harmonic numbers. 1.6 Asymptotic Approximations The Harmonic numbers can be approximated by an integral (see Chapter 3), $$H_N \sim \ln N,$$ leading to the simple asymptotic approximation $$C_N \sim 2N\ln N.$$ It is always a good idea to validate our math with a program. This code public class QuickCheck public static void main(String[] args) int maxN = Integer.parseInt(args[0]); double[] c = new double[maxN+1]; c[0] = 0; for (int N = 1; N <= maxN; N++) c[N] = (N+1)*c[N-1]/N + 2; for (int N = 10; N <= maxN; N *= 10) double approx = 2*N*Math.log(N) - 2*N; StdOut.printf("%10d %15.2f %15.2f\n", N, c[N], approx); produces this output. % java QuickCheck 1000000 10 44.44 26.05 100 847.85 721.03 1000 12985.91 11815.51 10000 175771.70 164206.81 100000 2218053.41 2102585.09 The discrepancy in the table is explained by our dropping the $2N$ term (and our not using a more accurate approximation to the integral 1.7 Distributions. It is possible to use similar methods to find the standard deviation and other moments. The standard deviation of the number of compares used by quicksort is $\sqrt{7 - 2\pi^2/3}N \approx .6482776 N$ which implies that the expected number of compares is not likely to be far from the mean for large $N$. Does the number of compares obey a normal distribution? No. Characterizing this distribution is a difficult research challenge. 1.8 Probabilistic Algorithms. Is our assumption that the input array is randomly ordered a valid input model? Yes, because we can randomly order the array before the sort. Doing so turns quicksort into a randomized algorithm whose good performance is guaranteed by the laws of probability. It is always a good idea to validate our models and analysis by running experiments. Detailed experiments by many people on many computers have done so for quicksort over the past several decades. In this case, a flaw in the model for some applications is that the array items need not be distinct. Faster implementations are possible for this case, using three-way partitioning. Selected Exercises Follow through the steps above to solve the recurrence $$A_N=1+{2 \over N} \sum_{1\le j\le N} A_{j-1} {\quad\rm for\quad} N>0.$$ Show that the average number of exchanges used during the first partitioning stage (before the pointers cross) is $(N-2)/6$. (Thus, by linearity of the recurrences, the average number of exchanges used by quicksort is ${1\over6}C_N-{1\over2}A_N$.) If we change the first line in the quicksort implementation above to call insertion sort when hi-lo <= M then the total number of comparisons to sort $N$ elements is described by the recurrence $$ C_N=\begin{cases}N+1+\displaystyle{1\over N} \sum_{1\le j\le N} (C_{j-1}+C_{N-j})&N>M;\\ {1\over4}N(N-1)&N\ le M\\ \end{cases}$$ Solve this recurrence. Ignoring small terms (those significantly less than $N$) in the answer to the previous exercise, find a function $f(M)$ so that the number of comparisons is approximately $$2N\ln N+f(M)N.$$ Plot the function $f(M)$, and find the value of $M$ that minimizes the function.
{"url":"http://aofa.cs.princeton.edu/10analysis/","timestamp":"2014-04-19T10:25:52Z","content_type":null,"content_length":"16699","record_id":"<urn:uuid:7fc6a44c-f47c-4a14-aab8-8ceef0ec4c4e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
Log tables Seven figure logs These were used where higher accuracy was needed and were much more cumbersome. For example in my book of 7 figures logarithms, the logarithms alone occupy 200 pages (compared to 2 pages above). For other calculations, say involving trigonometrical functions, the logarithms of the functions were tabulated. This facilitated such calculations as 3.764 x sin 40Β° since one could go straight to log (3.764) + log (sin40Β°). The example below shows one page (of the 90) dealing with logarithms of sines, tangents, etc. Note also the column of differences. If one wanted to calculate log (sin 13Β° 15' 35" ) then from the table this would be calculated as: 9.3602154 + (35/60) * 5361 Presumably for convenience of presentation, the values are given for a base distance of 10,000,000,000 units.
{"url":"http://www.sliderules.info/a-to-z/log-tables.htm","timestamp":"2014-04-20T13:22:35Z","content_type":null,"content_length":"3448","record_id":"<urn:uuid:a2c70eb9-635b-4c02-9728-32b7b9f7068e>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Possibilistic logic: a retrospective and prospective view. (English) Zbl 1076.68084 Summary: Possibilistic logic is a weighted logic introduced and developed since the mid-1980s, in the setting of artificial intelligence, with a view to develop a simple and rigorous approach to automated reasoning from uncertain or prioritized incomplete information. Standard possibilistic logic expressions are classical logic formulas associated with weights, interpreted in the framework of possibility theory as lower bounds of necessity degrees. Possibilistic logic handles partial inconsistency since an inconsistency level can be computed for each possibilistic logic base. Logical formulas with a weight strictly greater than this level are immune to inconsistency and can be safely used in deductive reasoning. This paper first recalls the basic features of possibilistic logic, including information fusion operations. Then, several extensions that mainly deal with the nature and the handling of the weights attached to formulas, are suggested or surveyed: the leximin-based comparison of proofs, the use of partially ordered scales for the weights, or the management of fuzzily restricted variables. Inference principles that are more powerful than the basic possibilistic inference in case of inconsistency are also briefly considered. The interest of a companion logic, based on the notion of guaranteed possibility functions, and working in a way opposite to the one of usual logic, is also emphasized. Its joint use with standard possibilistic logic is briefly discussed. This position paper stresses the main ideas only and refers to previous published literature for technical details. 68T37 Reasoning under uncertainty 68T27 Logic in artificial intelligence
{"url":"http://zbmath.org/?q=an:1076.68084","timestamp":"2014-04-21T12:10:01Z","content_type":null,"content_length":"21764","record_id":"<urn:uuid:43dfe3ba-10bc-45e1-afc1-97f62b5399b1>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Bryn Mawr Academic Programs Computer Science Deepak Kumar Steven Lindell, at Haverford College Associate Professor: David G. Wonnacott, at Haverford College Assistant Professors: Douglas S. Blank Dianna Xu John Dougherty, at Haverford College Geoffrey Towell Affiliated Faculty: George E. Weaver Jr. Theodore Wong Computer Science is the science of algorithms β€” their theory, analysis, design and implementation. As such it is an interdisciplinary field with roots in mathematics and engineering and applications in many other academic disciplines. The program at Bryn Mawr is founded on the belief that computer science should transcend from being a sub-field of mathematics and engineering and play a broader role in all forms of human inquiry. Computer Science is a bi-college program, supported jointly by faculty at both Bryn Mawr and Haverford Colleges. The program welcomes students who wish to pursue a major in computer science. Additionally, the program also offers a Minor in Computer Science, a Concentration in Computer Science (at Haverford College) and a Minor in Computational Methods (at Bryn Mawr College). The program also strives to facilitate evolving interdisciplinary majors. For example, students can propose a major in cognitive science by combining coursework from computer science and disciplines such as psychology and philosophy. All majors, minors and concentrations offered by the program emphasize foundations and basic principles of information science, rather than engineering or data-processing applications. Both colleges believe this approach to be the most consistent with the principles of scientific education in the liberal arts. The aim is to provide students with skills that transcend short-term trends in computer hardware and software. Independent Major in Computer Science Students who wish to major in computer science do so by declaring an independent major. Students are encouraged to prepare a major course plan in consultation with their academic adviser in computer science. A typical course plan includes three introductory courses (110 or 205, 206 and 231), three core courses (240, 245 and one of 330, 340 or 345), six electives of a student’s choosing and a senior thesis. Students declare an independent major in the spring semester of their sophomore year. Such students should ensure that they have completed at least three courses in computer science by the end of their sophomore year (we highly recommend 110, 206 and 231). Minor in Computer Science Students in any major are encouraged to complete a minor in computer science. Completing a minor in computer science enables students to pursue graduate studies in computer science, in addition to their own major. The requirements for a minor in computer science at Bryn Mawr are 110 or 205, 206, 231, any two of 240, 245, 246, 330, 340 or 345, and two electives chosen from any course in computer science, approved by the student’s adviser in computer science. As mentioned above, these requirements can be combined with any major, depending on the student’s interest and preparation. Minor in Computational Methods This minor is designed to enable students majoring in any of the sciences (biology, chemistry, geology, physics, psychology), mathematics, economics, sociology, philosophy, archaeology and growth and structure of cities to learn computational methods and applications in their major area of study. The requirements for a minor in computational methods at Bryn Mawr are 110 or 205, 206, 231; one of 212, 225, 245, 246, 330, 340 or 361; any two computational courses depending on a student’s major and interests (there are over 35 such courses to choose from in biology, chemistry, computer science, economics, geology, mathematics, physics, psychology and sociology). Students can declare a minor at the end of their sophomore year or soon after. Students should prepare a course plan and have it approved by at least two faculty advisers. Students minoring in computational methods are encouraged to propose senior projects/theses that involve the application of computational modeling in their major field of study. 100b. The World of Computing An introduction to the use of the computer for problem solving in any discipline, including an introduction to programming in a structured language (currently Pascal) with emphasis on the development of general problem-solving skills and logical analysis. Applications are chosen from a variety of areas, emphasizing the nontechnical. (Dougherty, Lindell, Division II or Quantitative Skills) CMSC B110. Introduction to Computing An introduction to the nature, subject matter and branches of computer science as an academic discipline, and the nature, development, coding, testing, documenting and analysis of the efficiency and limitations of algorithms. Also includes the social context of computing (risks, liabilities, intellectual property, and infringement). (Towell, Xu, Division II or Quantitative Skills) 130a. Foundations of Rigorous Thinking Develops rigorous thinking skills through the linguistic foundations of mathematics: logic and sets. Emphasis on using symbology to represent abstract objects and the application of formal reasoning to situations in computer science. (Lindell) 205a. Introduction to Computer Science A rigorous year-long introduction to the fundamental concepts of computer science intended for students interested in doing more advanced work in technical and scientific fields. Includes the fundamental data structures of computer science and their algorithms. Examples and exercises will stress the mathematical aspects of the discipline, with a strong emphasis on programming and analytical problem-solving skills. Students without a strong (secondary school) mathematics or programming experience should take Computer Science 100 instead. (Wonnacott, Division II or Quantitative Skills) CMSC B206. Introduction to Data Structures Introduction to the fundamental algorithms and data structures of computer science: sorting, searching, recursion, backtrack search, lists, stacks, queues, trees, graphs, dictionaries. Introduction to the analysis of algorithms. Prerequisite: Computer Science 205 or 110, or permission of instructor. (Xu, Dougherty, Wonnacott, Division II or Quantitative Skills) 207b. Computing Across the Sciences This course presents an integrated interdisciplinary survey of computational techniques for investigating natural phenomena such as genomics, galactic dynamics, image analysis and molecular dynamics. It will include discussion of the applications of each technique in different scientific disciplines. Prerequisite: Mathematics 114 (or 120 or 121) and two semesters of an introductory course in any of the sciences. (Xu, Towell, Division II) 210a. Linear Optimization and Game Theory Covers in depth the mathematics of optimization problems with a finite number of variables subject to constraints. Applications of linear programming to the theory of matrix games and network flows are covered, as well as an introduction to nonlinear programming. Emphasis is on the structure of optimal solutions, algorithms to find them, and the underlying theory that explains both. (Greene, Division II or Quantitative Skills) Not offered in 2004-05. CMSC B212. Computer Graphics Presents the fundamental principles of computer graphics: data structures for representing objects to be viewed, and algorithms for generating images from representations. Prerequisite: Mathematics 203 or 215, or permission of instructor. (Xu) 225a. Fundamentals of Database Systems An introduction to the principles of relational database design and use, including the entity/relationship data model and the logical algebra/calculus model behind query languages. An integrated laboratory component covers declarative programming using the international standard SQL. Prerequisites: Computer Science 206 and 231. (Lindell, Division II) Not offered in 2004-05. CMSC B231. Discrete Mathematics An introduction to discrete mathematics with strong applications to computer science. Topics include set theory, functions and relations, prepositional logic, proof techniques, recursion, counting techniques, difference equations, graphs and trees. (Weaver, Division II or Quantitative Skills; cross-listed as Mathematics 231 and Philosophy 230) 235a. Information and Coding Theory Covers the mathematical theory of the transmission (sending or storing) of information. Included are encoding and decoding techniques, both for the purposes of data compression and for the detection and correction of errors. (Lindell) 240a. Principles of Computer Organization A lecture/laboratory course studying the hierarchical design of modern digital computers. Combinatorial and sequential logic elements; construction of microprocessors; instruction sets; assembly language programming. Lectures cover the theoretical aspects of machine architecture. In the laboratory, designs discussed in lecture are constructed in software. Prerequisite: Computer Science 206 or permission of instructor. (Wonnacott, Division II) CMSC B245. Principles of Programming Languages An introduction to a wide range of topics relating to programming languages with an emphasis on abstraction and design. Design issues relevant to the implementation of programming languages are discussed, including a review and in-depth treatment of mechanisms for sequence control, the run-time structure of programming languages and programming in the large. The course has a strong lab component where students get to construct large programs in at least three different imperative programming languages. (Towell, Division II or Quantitative Skills) CMSC B246. Programming Paradigms An introduction to the nonprocedural programming paradigms. The shortfalls of procedural programming derived from the von Neumann model of computer architectures are discussed. An in-depth study of the principles underlying functional programming, logic programming and object-oriented programming. This course has a strong lab component where students construct programs in several programming languages representative of the paradigms. Prerequisite: Computer Science 205a or 110. (staff, Division II or Quantitative Skills) CMSC B250. Computational Models in the Sciences (Wong, Division II or Quantitative Skills; cross-listed as Biology 250 and Geology 250) CMSC B330. Algorithms: Design and Practice This course examines the applications of algorithms to the accomplishments of various programming tasks. The focus will be on understanding of problem-solving methods, along with the construction of algorithms, rather than emphasizing formal proving methodologies. Topics include divide and conquer, approximations for NP-Complete problems, data mining and parallel algorithms. Prerequisites: Computer Science 206 and 231. (Kumar, Division II or Quantitative Skills) Not offered in 2004-05. 340b. Analysis of Algorithms Qualitative and quantitative analysis of algorithms and their corresponding data structures from a precise mathematical point of view. Performance bounds, asymptotic and probabilistic analysis, worst-case and average-case behavior. Correctness and complexity. Particular classes of algorithms such as sorting and searching are studied in detail. Prerequisites: Computer Science 206 and some additional mathematics at the 200 level, or permission of instructor. (Lindell) 345b. Theory of Computation Introduction to automata theory, formal languages and complexity. Introduction to the mathematical foundations of computer science: finite state automata, formal languages and grammars, Turing machines, computability, unsolvability and computational complexity. Prerequisites: Computer Science 206, and some additional mathematics at the 200 level, or permission of instructor. (Lindell) Not offered in 2004-05. CMSC B350. Compiler Design: Theory and Practice An introduction to compiler and interpreter design, with emphasis on practical solutions, using compiler-writing tools in UNIX and the C programming language. Topics covered include lexical scanners, context-free languages and pushdown automata, symbol table design, run-time memory allocation, machine language and optimization. (Wonnacott) Not offered in 2004-05. 355b. Operating Systems: Theory and Practice A practical introduction to modern operating systems, using case studies from UNIX, VMS, MSDOS and the Macintosh. Lab sessions will explore the implementation of abstract concepts, such as resource allocation and deadlock. Topics include file systems, memory allocation schemes, semaphores and critical sections, device drivers, multiprocessing and resource sharing. (Wonnacott) CMSC B361. Emergence A multidisciplinary exploration of the interactions underlying both real and simulated systems, such as ant colonies, economies, brains, earthquakes, biological evolution, artificial evolution, computers and life. These emergent systems are often characterized by simple, local interactions that collectively produce global phenomena not apparent in the local interactions. (Blank) Not offered in 2004-05. CMSC B371. Cognitive Science Cognitive science is the interdisciplinary study of intelligence in mechanical and organic systems. In this introductory course, we examine many topics from computer science, linguistics, neuroscience, mathematics, philosophy and psychology. Can a computer be intelligent? How do neurons give rise to thinking? What is consciousness? These are some of the questions we will examine. No prior knowledge or experience with any of the subfields is assumed or necessary. Prerequisite: permission of instructor. (Blank; cross-listed as Psychology 371) Not offered in 2004-05. CMSC B372. Introduction to Artificial Intelligence Survey of Artificial Intelligence (AI), the study of how to program computers to behave in ways normally attributed to "intelligence" when observed in humans. Topics include heuristic versus algorithmic programming; cognitive simulation versus machine intelligence; problem-solving; inference; natural language understanding; scene analysis; learning; decision-making. Topics are illustrated by programs from literature, programming projects in appropriate languages and building small robots. (Kumar, Division II; cross-listed as Philosophy 372) CMSC B376. Androids: Design and Practice This course examines the possibility of human-scale artificial mind and body. It discusses artificial-intelligence methods for allowing computers to interact with humans on their own turf: the real world. It examines the science of robotics (including vision, speech recognition and navigation) and their intelligent control (including planning, creativity and analogy-making). Prerequisite: permission of instructor. (staff) Not offered in 2004-05. CMSC B380. Recent Advances in Computer Science A topical course facilitating an in-depth study on a current topic in computer science. Prerequisite: permission of instructor. (staff, Division II) 392a. Advanced Topics: Parallel Processing This course provides an introduction to parallel architecture, languages and algorithms. Topics include SIMD and MIMD systems, private memory and shared memory designs; interconnection networks; issues in parallel language design including process creation and management, message passing, synchronization and deadlock; parallel algorithms to solve problems in sorting, search, numerical methods and graph theory. Prerequisite: Computer Science 240; 246 and 355 are also recommended. (Dougherty, Division II) Not offered in 2004-05. 394b. Advanced Topics in Discrete Mathematics and Computer Science (Lindell) Not offered in 2004-05. CMSC B399. Senior Project CMSC B403. Supervised Work/Independent Study In addition to the courses listed above, the following courses are also of interest (see descriptions under individual departments). General Studies 213. Introduction to Mathematical Logic 303. Advanced Mathematical Logic Mathematics (at Haverford) 222a. Scientific Computing 237a. Logic and the Mathematical Method 306. Mathematical Methods in the Physical Sciences 322. Solid-State Physics
{"url":"http://www.brynmawr.edu/catalog/2004-05_archive/compsci.php","timestamp":"2014-04-17T07:07:31Z","content_type":null,"content_length":"33999","record_id":"<urn:uuid:590bffa3-bf17-452c-a0a1-3c3a511c870c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
For an introductory, one or two semester, sophomore-junior level course in Probability and Statistics or Applied Statistics for engineering, physical science, and mathematics students. This text is rich in exercises and examples, and explores both elementary probability and basic statistics, with an emphasis on engineering and science applications. Much of the data have been collected from the author's own consulting experience and from discussions with scientists and engineers about the use of statistics in their fields. In later chapters, the text emphasizes designed experiments, especially two-level factorial design. CourseSmart textbooks do not include any media or print supplements that come packaged with the bound book. Table of Contents 1. Introduction 1.1 Why Study Statistics? 1.2 Modern Statistics 1.3 Statistics and Engineering 1.4 The Role of the Scientist and Engineer in Quality Improvement 1.5 A Case Study: Visually Inspecting Data to Improve Product Quality 1.6 Two Basic Concepts–Population and Sample 2. Organization and Description of Data 2.1 Pareto Diagrams and Dot Diagrams 2.2 Frequency Distributions 2.3 Graphs of Frequency Distributions 2.4 Stem-and-Leaf Displays 2.5 Descriptive Measures 2.6 Quartiles and Percentiles 2.7 The Calculation of x and s 2.8 A Case Study: Problems with Aggregating Data 3. Probability 3.1 Sample Spaces and Events 3.2 Counting 3.3 Probability 3.4 The Axioms of Probability 3.5 Some Elementary Theorems 3.6 Conditional Probability 3.7 Bayes’ Theorem 4. Probability Distributions 4.1 Random Variables 4.2 The Binomial Distribution 4.3 The Hypergeometric Distribution 4.4 The Mean and the Variance of a Probability Distribution 4.5 Chebyshev’s Theorem 4.6 The Poisson Approximation to the Binomial Distribution 4.7 Poisson Processes 4.8 The Geometric and Negative Binomial Distribution 4.9 The Multinomial Distribution 4.10 Simulation 5. Probability Densities 5.1 Continuous Random Variables 5.2 The Normal Distribution 5.3 The Normal Approximation to the Binomial Distribution 5.4 Other Probability Densities 5.5 The Uniform Distribution 5.6 The Log-Normal Distribution 5.7 The Gamma Distribution 5.8 The Beta Distribution 5.9 The Weibull Distribution 5.10 Joint Distributions–Discrete and Continuous 5.11 Moment Generating Functions 5.12 Checking If the Data Are Normal 5.13 Transforming Observations to Near Normality 5.14 Simulation 6. Sampling Distributions 6.1 Populations and Samples 6.2 The Sampling Distribution of the Mean (Οƒ known) 6.3 The Sampling Distribution of the Mean (Οƒ unknown) 6.4 The Sampling Distribution of the Variance 6.5 Representations of the Normal Theory Distributions 6.6 The Moment Generating Function Method to Obtain Distributions 6.7 Transformation Methods to Obtain Distributions 7. Inferences Concerning a Mean 7.1 Point Estimation 7.2 Interval Estimation 7.3 Maximum Likelihood Estimation 7.4 Tests of Hypotheses 7.5 Null Hypotheses and Tests of Hypotheses 7.6 Hypotheses Concerning One Mean 7.7 The Relation between Tests and Confidence Intervals 7.8 Power, Sample Size, and Operating Characteristic Curves 8. Comparing Two Treatments 8.1 Experimental Designs for Comparing Two Treatments 8.2 Comparisons–Two Independent Large Samples 8.3 Comparisons–Two Independent Small Samples 8.4 Matched Pairs Comparisons 8.5 Design Issues–Randomization and Pairing 9. Inferences Concerning Variances 9.1 The Estimation of Variances 9.2 Hypotheses Concerning One Variance 9.3 Hypotheses Concerning Two Variances 10. Inferences Concerning Proportions 10.1 Estimation of Proportions 10.2 Hypotheses Concerning One Proportion 10.3 Hypotheses Concerning Several Proportions 10.4 Analysis of r x c Tables 10.5 Goodness of Fit 11. Regression Analysis 11.1 The Method of Least Squares 11.2 Inferences Based on the Least Squares Estimators 11.3 Curvilinear Regression 11.4 Multiple Regression 11.5 Checking the Adequacy of the Model 11.6 Correlation 11.7 Multiple Linear Regression (Matrix Notation) 12. Analysis of Variance 12.1 Some General Principles 12.2 Completely Randomized Designs 12.3 Randomized-Block Designs 12.4 Multiple Comparisons 12.5 Analysis of Covariance 13. Factorial Experimentation 13.1 Two-Factor Experiments 13.2 Multifactor Experiments 13.3 2^n Factorial Experiments 13.4 The Graphic Presentation of 22 and 23 Experiments 13.5 Response Surface Analysis 13.6 Confounding in a 2^n Factorial Experiment 13.7 Fractional Replication 14. Nonparametric Tests 14.1 Introduction 14.2 The Sign Test 14.3 Rank-Sum Tests 14.4 Correlation Based on Ranks 14.5 Tests of Randomness 14.6 The Kolmogorov-Smirnov and Anderson-Darling Tests 15. The Statistical Content of Quality-Improvement Programs 15.1 Quality-Improvement Programs 15.2 Starting a Quality-Improvement Program 15.3 Experimental Designs for Quality 15.4 Quality Control 15.5 Control Charts for Measurements 15.6 Control Charts for Attributes 15.7 Tolerance Limits 16. Application to Reliability and Life Testing 16.1 Reliability 16.2 Failure-Time Distribution 16.3 The Exponential Model in Life Testing 16.4 The Weibull Model in Life Testing Appendix A Bibliography Appendix B Statistical Tables Appendix C Using the R Software Program Appendix D Answers to Odd-Numbered Exercises Purchase Info ? With CourseSmart eTextbooks and eResources, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs. Once you have purchased your eTextbooks and added them to your CourseSmart bookshelf, you can access them anytime, anywhere. Buy Access Miller & Freund's Probability and Statistics for Engineers, CourseSmart eTextbook, 8th Edition Format: Safari Book $73.99 | ISBN-13: 978-0-321-64172-4
{"url":"http://www.mypearsonstore.com/bookstore/miller-freunds-probability-and-statistics-for-engineers-0321641728","timestamp":"2014-04-20T16:36:25Z","content_type":null,"content_length":"21466","record_id":"<urn:uuid:d9a8af17-ccc2-40c5-a8ef-f00d58b48b8c>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Re: Definition of Large Cardinal Axiom Ali Enayat enayat at american.edu Sat Apr 17 15:27:29 EDT 2004 Professor Solovay caught my blooper in my recent e-mail (reproduced below),in which I used "Baire Property" instead of "Lebesgue measurable". To compensate for my carelessness, I looked up Shelah's paper to answer Solovay's query whether Shelah needed DC (dependent choice)in his proof. It turns out Shelah's proof needs only *countable choice*,i.e., Theorem A (Shelah) Con(ZF + "Every set of reals is Lebesgue measurable" + "countable Choice") implies Con(ZFC + "There exists an inacessible [Reference: "Can you take Solovay's inaccessible away?", Israel J. Math, 48 (1984), pp.1-47] I should also point out that a much earlier example of a "regularity property of reals" implying a large cardinal axiom (and I suspect the first) is due to Specker (cf. p.135 of Kanamori's THE HIGHER INFINITE), who Theorem B (Specker, 1957) Con(ZF+ countable Choice + "Every uncountable set of reals has a perfect subset") implies Con(ZFC + "There exists an inacessible cardinal"). The converse of Theorem B was proved by Solovay, in the same celebrated 1970 paper - and in the same model - in which he also established the converse of Theorem A (the main results of Solovay's paper were obtained in Ali Enayat > On Wed, 14 Apr 2004, Ali Enayat wrote: > [snip] > > > > (3) Some mathematical statements might *imply* a large cardinal axiom > > (1), or they might imply the truth of a large cardinal axiom in some > > model of set theory (such as Godel's constriuctible universe L). Often > > statements are also referred to as a large cardinal axiom. For example, > > statement "all subsets of reals have the property of Baire" is known to > > imply that "there is an inaccessible cardinal in L" (thanks to a > > Shelah in 1980). > This is not correct. Shelah proved > (a) Con(ZFC) iff Con(ZF + "All sets have the property of Baire"); > (b) Con(ZF + "Every set of reals is Lebesgue measurable" + DC) > implies Con(ZFC + "There exists an inacessible cardinal"). The converse > direction had been proved some years earlier by me. > I'm not sure whether or not Shelah needed DC in (b). > --Bob Solovay > > Ali Enayat > > Department of Mathematics and Statistics > > American University > > 4400 Massachusetts Ave, NW > > Washington, DC 20016-8050 > > (202) 885-3168 > > > > _______________________________________________ > > FOM mailing list > > FOM at cs.nyu.edu > > http://www.cs.nyu.edu/mailman/listinfo/fom > > Ali Enayat Department of Mathematics and Statistics American University 4400 Massachusetts Ave, NW Washington, DC 20016-8050 (202) 885-3168 More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-April/008065.html","timestamp":"2014-04-19T01:49:48Z","content_type":null,"content_length":"5811","record_id":"<urn:uuid:07387dcf-59d8-404c-9cfb-f39887af3ab9>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Quiz 10 problem 5 5. Linda drives from Ukiah to the East Bay. She averages 60mi/hr on the freeway, but when she gets to the East Bay, the freeway is clogged with rush hour traffic. Fortunately, she knows the East Bay, and knows how to get to her destination on surface streets. Unfortunately, she can only average 30mi/hr on the surface streets. The trip was 150 miles long and it took her 3 hours. How long was she on the freeway and how long was she on the surface streets. How far did she drive on the freeway and how far did she drive on the surface streets? The first step in solving word problems is to define the unknown. If we let the unknown be what they're looking for, in this case there are two unknowns: the time she was on the freeway, and the time she drove on the surface streets. We could set this problem up with two unknowns. Let x = the time on the freeway y = the time on the surface streets. The fact that the entire trip took 3 hours gives us the equation We can solve this equation for y to express y as a function of x. As a result, we do not need to use y. Wherever we need to use the time on the surface streets, we can use the expression 3 - x instead. Your author suggests that we make use of a d = rt table As soon as we get two columns filled in, we can fill in the third column by using the appropriate equation. In this case that is d = rt. Now we can express the fact that the total distance was 150 miles as Remove parentheses Combine the x terms and transpose the 90 to the other side of the equation Combine and divide by 30. She spent 2 hours on the freeway so she must have been on the surface streets for 1 hour. If she spent 2 hours on the freeway at 60 mi/hr, she would have traveled 120 miles on the freeway. If she spent an hour on the surface streets averaging 30 mi/hr, that would have been an extra 30 miles for a total distance of 150 miles.
{"url":"http://www.sonoma.edu/users/w/wilsonst/Courses/Math_40/Q40/Solns40QF97/SolnQ10F97/SQ10F97p5.html","timestamp":"2014-04-20T06:51:37Z","content_type":null,"content_length":"4761","record_id":"<urn:uuid:502529d8-2f3a-40bd-811d-3bdf8a4d50c9>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
Dealing with Outliers How to Evaluate a Single Straggler, Maverick, Aberrant Value Q. How do you determine if a value is truly an outlier and how do you decide whether or not to proceed with the data analysis? A. One of the prickly problems in data analysis is dealing with outliers in a set of data. An outlier is an observation with a value that does not appear to belong with the rest of the values in the data set. Outliers are also known by other names: maverick, flier, straggler or aberrant value. Two questions usually arise: 1) Is the value in question truly an outlier? 2) Can I eliminate the value and proceed with the data analysis? Question 1 is one of outlier identification, and two essential tools are a graphical display of the data and a statistical test. An excellent graphic to look at the distribution of small data sets is the dot plot. For example, consider the data 5.3, 3.1, 4.9, 3.9, 7.8, 4.7 and 4.3, for which the dot plot is shown in Figure 1. Figure 1 β€” Dot plot for data, 5.3, 3.1, 4.9, 3.9, 7.8, 4.7 and 4.3. Here the value 7.8 appears to be an outlier because it falls well to the right of the others on the dot plot. In the plot, we are really looking at the gaps between the data values. Two of the more commonly used statistical tests for a single outlier in a single set of data are the Dixon test and the Grubbs test. The Dixon test uses ratios of data gaps in different ways depending on the number of values in the data set. In the example above, the sample size is 7, and the ratio used is the gap between the outlier (7.8) and its nearest neighbor (5.3) divided by the gap between the largest and smallest values in the set. Thus, the Dixon ratio is: (7.8 – 5.3)/(7.8 – 3.1) = 2.5/4.7 = 0.532 This value is compared with a critical value from a table, and the value is declared an outlier if it exceeds the critical value. The critical value depends on the sample size, n, and a chosen significance level, which is the risk of rejecting a valid observation. The table generally uses low risk significance levels like 1% or 5%. For n = 7 and a 5% risk, the critical value is 0.507. The Dixon ratio 0.532 exceeds this critical value, indicating that the value 7.8 is an outlier. The Grubbs test uses a test statistic, T, that is the absolute difference between the outlier, XO, and the sample average, For n = 7 and a 5% risk, the critical value is 1.938, and T = 1.99 exceeds this critical value, again indicating that the value 7.8 is an outlier. Getting to Question 2, it should be known that statistical tests are used to identify outliers, not to reject them from the data set. Technically, an observation should not be removed unless an investigation finds a probable cause to justify its removal. Some companies have defined procedures for such investigations, including retesting the material associated with the outlying observation, if possible. In some cases, the physical situation may define the problem. For the three observations, 98.7, 90.0 and 99.7, the Dixon ratio is 8.7/9.7 = 0.897 The critical value for n = 3 and 5% risk is 0.941, so the value 90.0 cannot be identified as an outlier! Part of the reason might be the close proximity of the other two values. However, if the values recorded are human body temperatures in degrees Fahrenheit, then an outlier test is certainly not required to conclude that something is amiss. This example also illustrates that it is difficult to identify outliers in small data sets, such as n < 5. ASTM E691, Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method, discourages such outlier tests for small groups of repeated test results within a single laboratory and suggests other methodologies for identifying aberrant data sets. If an investigation does not find a probable cause, then what should be done? One approach would be to conduct the data analysis both with and without the outlier. If the conclusions are different, then the outlier is seen to be influential, and this should be noted in the report. Another option is to use robust estimators for characterizing the data set, such as the sample median rather than the sample average. ASTM E178, Practice for Dealing with Outlying Observations, contains many statistical procedures for outlier testing. Other criteria are given in this standard for single outliers as well as tests for multiple outliers, and the standard also gives guidance on which test to use. A more comprehensive reference for outlier testing is the book, Outliers in Statistical Data, published by Wiley. Another useful and more practical reference is the American Society for Quality β€œBasic Reference in Quality Control, Statistical Techniques, Volume 16: How to Detect and Handle Outliers,” ASQC Quality Press. Other references are listed in ASTM practice E178. When there are multiple outliers in a data set, the investigation becomes more complicated, but test procedures are available for this situation. One problem is that one outlier may mask another outlier in a single outlier test. The Dixon test overcomes this by redefining the gaps to use as the sample size increases. This approach is well covered in E178 and other sources. It is important to note that the first order of business is to look at the data graphically for potentially more than one outlier, either in the same or opposite direction, prior to using the Dixon or Grubbs technique. These techniques are designed to detect a single outlier in a dataset, and hence are not suitable for multiple outlier detection. One robust and comprehensive technique to effectively identify multiple outliers is the generalized extreme studentized deviate many-outlier procedure, described in the ASQ Basic Reference, Volume 16. While multiple outliers are beyond the intended scope of this article, interested readers are referred to the above literature for guidance, or you may choose to consult a statistician. Thomas Murphy, T.D. Murphy Statistical Consulting LLC, is chair of Subcommittee E11.30 on Statistical Quality Control, part of ASTM Committee E11 on Quality and Statistics. Alex T. Lau, Engineering Services Canada, is chair of Subcommittee D02.94 on Quality Assurance and Statistical Methods, part of ASTM Committee D02 on Petroleum Products and Lubricants, and a contributing member of E11. Dean Neubauer is the DataPoints column coordinator and E11.90.03 publications chair. Go to other DataPoints articles.
{"url":"http://www.astm.org/SNEWS/ND_2008/datapoints_nd08.html","timestamp":"2014-04-17T18:57:36Z","content_type":null,"content_length":"25250","record_id":"<urn:uuid:145a12c3-9944-44e6-9d29-6ec2c48fab78>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Helmut Hasse (ˈhasΙ™; 25 August 1898 – 26 December 1979) was a Germany , officially the Federal Republic of Germany , is a federal parliamentary republic in Europe. The country consists of 16 states while the capital and largest city is Berlin. Germany covers an area of 357,021 km2 and has a largely temperate seasonal climate... A mathematician is a person whose primary area of study is the field of mathematics. Mathematicians are concerned with quantity, structure, space, and change.... working in algebraic number theory Algebraic number theory is a major branch of number theory which studies algebraic structures related to algebraic integers. This is generally accomplished by considering a ring of algebraic integers O in an algebraic number field K/Q, and studying their algebraic properties such as factorization,... , known for fundamental contributions to class field theory In mathematics, class field theory is a major branch of algebraic number theory that studies abelian extensions of number fields.Most of the central results in this area were proved in the period between 1900 and 1950... , the application of p-adic number In mathematics, and chiefly number theory, the p-adic number system for any prime number p extends the ordinary arithmetic of the rational numbers in a way different from the extension of the rational number system to the real and complex number systems... s to local classfield theory In mathematics, local class field theory, introduced by , is the study of abelian extensions of local fields. It is the analogue for local fields of global class field theory.... diophantine geometry In mathematics, diophantine geometry is one approach to the theory of Diophantine equations, formulating questions about such equations in terms of algebraic geometry over a ground field K that is not algebraically closed, such as the field of rational numbers or a finite field, or more general... Hasse principle In mathematics, Helmut Hasse's local-global principle, also known as the Hasse principle, is the idea that one can find an integer solution to an equation by using the Chinese remainder theorem to piece together solutions modulo powers of each different prime number... ), and to local zeta functions. He was born in Kassel is a town located on the Fulda River in northern Hesse, Germany. It is the administrative seat of the Kassel Regierungsbezirk and the Kreis of the same name and has approximately 195,000 inhabitants.- History :... , and died in Ahrensburg is a town in Schleswig-Holstein, Germany. It is part of the Hamburg Metropolitan Region, situated in Stormarn. Its population around 31,700 . Its outstanding sight is the Renaissance castle dating from 1595.... After serving in the navy in World War I World War I , which was predominantly called the World War or the Great War from its occurrence until 1939, and the First World War or World War I thereafter, was a major war centred in Europe that began on 28 July 1914 and lasted until 11 November 1918... , he studied at the University of GΓΆttingen, and then at Marburg under Kurt Hensel Kurt Wilhelm Sebastian Hensel was a German mathematician born in KΓΆnigsberg, Prussia.He was the son of the landowner and entrepreneur Sebastian Hensel, brother of the philosopher Paul Hensel, grandson of the composer Fanny Mendelssohn and the painter Wilhelm Hensel, and a descendant of the... , writing a dissertation in 1921 containing the Hasse–Minkowski theorem The Hasse–Minkowski theorem is a fundamental result in number theory which states that two quadratic forms over a number field are equivalent if and only if they are equivalent locally at all places, i.e. equivalent over every completion of the field... , as it is now called, on quadratic form In mathematics, a quadratic form is a homogeneous polynomial of degree two in a number of variables. For example,4x^2 + 2xy - 3y^2\,\!is a quadratic form in the variables x and y.... s over number fields. He then held positions at The University of Kiel is a university in the city of Kiel, Germany. It was founded in 1665 as the Academia Holsatorum Chiloniensis by Christian Albert, Duke of Holstein-Gottorp and has approximately 23,000 students today... The Martin Luther University of Halle-Wittenberg , also referred to as MLU, is a public, research-oriented university in the cities of Halle and Wittenberg within Saxony-Anhalt, Germany... and Marburg. He was Hermann Weyl Hermann Klaus Hugo Weyl was a German mathematician and theoretical physicist. Although much of his working life was spent in ZΓΌrich, Switzerland and then Princeton, he is associated with the University of GΓΆttingen tradition of mathematics, represented by David Hilbert and Hermann Minkowski.His... 's replacement at GΓΆttingen in 1934; politically he was a right-wing nationalist, and applied for membership in the Nazi Party in 1937 but this was denied to him because he had Jewish ancestors. After war work he returned to GΓΆttingen briefly in 1945 but was excluded by the British authorities. After brief appointments in Berlin from 1948 he settled permanently as professor in The University of Hamburg is a university in Hamburg, Germany. It was founded on 28 March 1919 by Wilhelm Stern and others. It grew out of the previous Allgemeines Vorlesungswesen and the Kolonialinstitut as well as the Akademisches Gymnasium. There are around 38,000 students as of the start of... He collaborated with many mathematicians: in particular with Emmy Noether Amalie Emmy Noether was an influential German mathematician known for her groundbreaking contributions to abstract algebra and theoretical physics. Described by David Hilbert, Albert Einstein and others as the most important woman in the history of mathematics, she revolutionized the theories of... Richard Brauer Richard Dagobert Brauer was a leading German and American mathematician. He worked mainly in abstract algebra, but made important contributions to number theory... simple algebra s; and with Harold Davenport Harold Davenport FRS was an English mathematician, known for his extensive work in number theory.-Early life:... Gauss sum In mathematics, a Gauss sum or Gaussian sum is a particular kind of finite sum of roots of unity, typicallyG := G= \sum \chi\cdot \psi... s ( Hasse–Davenport relation The Hasse–Davenport relations, introduced by , are two related identities for Gauss sums, one called the Hasse–Davenport lifting relation, and the other called the Hasse–Davenport product relation. The Hasse–Davenport lifting relation is an equality in number theory relating Gauss sums over... s) and with Cahit Arf Cahit Arf was a Turkish mathematician. He is known for the Arf invariant of a quadratic form in characteristic 2 in topology, the Hasse–Arf theorem in ramification theory, Arf semigroups, and Arf rings.-Biography:Cahit Arf was born on 11 October 1910 in Selanik , which was then... on the Hasse–Arf theorem In mathematics, specifically in local class field theory, the Hasse–Arf theorem is a result concerning jumps of a filtration of the Galois group of a finite Galois extension... See also β€’ Hasse diagram In order theory, a branch of mathematics, a Hasse diagram is a type of mathematical diagram used to represent a finite partially ordered set, in the form of a drawing of its transitive β€’ Hasse invariant of an elliptic curve β€’ Hasse invariant of a quadratic form In mathematics, the Hasse invariant of a quadratic form Q over a field K takes values in the Brauer group Br.The quadratic form Q may be taken as a diagonal form... β€’ Artin–Hasse exponential β€’ Hasse–Weil L-function β€’ Hasse norm theorem In number theory, the Hasse norm theorem states that if L/K is a cyclic extension of number fields, then if a nonzero element of K is a local norm everywhere, then it is a global norm.... β€’ Hasse's algorithm External links
{"url":"http://www.absoluteastronomy.com/topics/Helmut_Hasse","timestamp":"2014-04-17T06:55:44Z","content_type":null,"content_length":"26656","record_id":"<urn:uuid:c156f53e-b662-4b5c-a678-28fa2d72a56a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
Difference equation (economic growth model) January 8th 2010, 07:26 AM Difference equation (economic growth model) $<br /> \Delta A / A = B (aL)^\gamma A^\theta / A<br />$ $\Delta L / L = n$ $0< \theta < 1$ $n, B , \gamma > 0$ I need to solve for the procentual growth in A in terms of the parameters. This should according to my textbook equal $\Delta A / A = \gamma n / (1-\theta)$ I guess it IS NOT a constant but CONVERGES to the previous constant. The only thing is, I can't solve it, work it out properly... Can anybody solve this for me or put me in the right direction ? I hope this was placed in the right subforum. Difference equations aren't differential equations, but I don't know in what subfield of Math difference equations would fit. (I'm no math student) This might also be pre university level math, I don't know. I have never studied difference equations in high school and took a fairly math intensive curriculum. Many thanks in advance! January 10th 2010, 10:32 AM What does the a mean? What does a mean in your equation? January 10th 2010, 11:37 AM Level of technology in the economy! January 10th 2010, 11:47 AM Thank you but I menth if it had a specifik value or could be substituted somehow. But probably not, it is not directly related to any other factor in the equation??? a is then a constant? how was it menth to vanish? January 10th 2010, 11:56 AM This is all the model specifies. "B" is what is called a "shift factor" in the literature and is constant. It augments technological growth. "L" is the population size which grows every year by factor "n". "a" is the fraction of the population who works in the research and development sector. I have found an intuitive solution to this problem though, which I will post here the day after tomorrow since I don't have the time right now to go into it, but I would have liked a clean mathematical approach... But to answer your question: there is no way to substitute any of the other variables or parameters into "A" or vice versa... A is not a constant... the procentual growth of A however, becomes a constant dynamicaly. January 11th 2010, 05:26 AM If $\Delta A / A = B (aL)^\gamma A^\theta / A$, doesn't it follow, then, that $\Delta A = B (aL)^\gamma A^\theta$? That is, that $\Delta A$ is simply a constant times A to a constant power. January 12th 2010, 08:51 AM The solution intuitively: Given the above equations. $<br /> \frac{\Delta (\Delta A / A)}{\Delta A / A} = \Delta(Ln (RHS))<br /> = \Delta Ln(B) + \gamma \Delta Ln(a) + \gamma \Delta Ln(L) - (1-\theta) \Delta Ln(A)<br />$ $<br /> = 0 + 0 + \gamma n - (1-\theta) \Delta A/A<br />$ $\Delta A/A > \frac{\gamma n}{(1-\theta)}$ it's procentual change will be negative, and while it's smaller, it's procentual change will be positive. So it is dynamicaly pushed towards that equilibrium value... That's the intuition.
{"url":"http://mathhelpforum.com/differential-equations/122901-difference-equation-economic-growth-model-print.html","timestamp":"2014-04-16T07:17:03Z","content_type":null,"content_length":"9740","record_id":"<urn:uuid:e34dd32f-c760-4649-ba1a-05bf27b7891a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Article | A Reply to Camilli/Bulkley Critique of β€˜An Evaluation of the Florida A-Plus Accountability and School Choice Program' Civic Report February 2001 An Evaluation of the Florida A-Plus Accountability and School Choice Program A Reply to β€œCritique of β€˜An Evaluation of the Florida A-Plus Accountability and School Choice Program’” by Gregory Camilli and Katrina Bulkley in Education Policy Analysis Archives, Volume 9 Number 7 March 4, 2001, http://epaa.asu.edu/epaa/v9n7/ By Jay P. Greene Senior Fellow, The Manhattan Institute for Policy Research March 5, 2001 The Camilli and Bulkley re-analysis of my evaluation of the Florida A-Plus choice and accountability program is neither factually nor conceptually accurate. They mischaracterize my findings to create a straw man that is easier to knock down. They then compare scores for different samples across time to bias the results downward. They follow this by disaggregating results by grade level to produce smaller samples, making the detection of significant effects more difficult. They then propose a correction for regression to the mean that absorbs much of the effect that could reasonably be attributed to the prospect of vouchers, making the effect sizes even smaller. And finally they measure effects for school level results as a proportion of estimated student level standard deviations to ensure that the positive results they have found (despite their best efforts) are made to seem ridiculously small. The Camilli and Bulkley re-analysis is almost a textbook for how to do a hatchet job on positive results that one wishes to make go away. First, they build the straw man. They attribute to me the claim that the effect of vouchers in Florida was between .80 and 2.23 standard deviations. They say: β€œThese gains for β€˜F’ schools were then translated into effect sizes for reading (.80), mathematics (1.25), and writing (2.23) (Greene, 2001a, endnotes 12-14). No doubt, as computed, these gains are statistically significant. They are also among the highest gains ever recorded for an educational intervention. Results like these, if true, would be nothing short of miraculous, far outpacing the reported achievement gains in Texas and North Carolina.” I obviously did not claim these as the effect sizes for vouchers. These numbers were taken from endnotes that simply described how large the year-to-year changes for F schools were. My claims for voucher effects are clearly described in the text as: β€œThe improvement on the reading FCAT attributable to the prospect of vouchers was a modest 0.12 standard deviations and fell short of being statistically significant. The voucher effect on math scores was a larger 0.30 standard deviations, which was statistically significant. And the prospect of vouchers improved school performance on the writing test by 0.41 standard deviations, an effect that is also statistically significant.” (p. 8) These effects were also clearly displayed in Table 3 and marked as β€œVoucher Effect Measured In Standard Deviations.” Second, they appear to have compared scores for different samples over time in a way that biases certain results downward. They are no more accurate in describing the test score data available on the Florida Department of Education (FDOE) web site, which they used for their own analyses, than they were in describing my findings. They say: β€œAn alternative method of choosing a sample is to use the results for all curriculum groups, and these data are available on the Florida Department of Education web pages.” This is not correct. The FDOE web site (http://www.firn.edu/doe/sas/) contains scores for all curriculum students for 2000 but only standard curriculum students for 1999. Comparing scores for these two different samples, as Camilli and Bulkley appear to have done, biases all gains downward because exceptional education students who were excluded in 1999 were included in 2000. My analysis of scores from standard curriculum students in both years is an apple-to-apple While their apple-orange comparison biases all gains downward, it does not fundamentally distort the gains of F schools relative to other schools. As I observed in endnote 10 of my report: β€œthe web site only has scores for standard curriculum students in 1999 and all students in 2000. This study used scores for standard curriculum students in both years. Earlier analyses on these results from the web site do not produce results that are substantively different from those reported here. This suggests that the inclusion or exclusion of test scores from special needs students has little bearing on the conclusions of this evaluation.” Given how much attention Camilli and Bulkley appear to pay to endnotes, one would have thought that this would have addressed their concern about whether the sample should have included all curriculum students or only standard curriculum students. Third, they make the case for disaggregating the results by grade level, the net effect of which is to produce smaller samples and less stable results. Their argument for disaggregating is based largely on the results in their Table 2, which purport to show that the year-to-year average changes in FCAT scores differ for different grade levels. Remember that their test score analyses are using an apple-orange comparison of standard curriculum students in 1999 to all curriculum students in 2000, so the changes in test scores reported in their Table 2 are incorrect and all biased downward. Also note that disaggregating by grade level produces samples with only a handful of F schools in grades 8 and 10, making any findings about the progress of schools in those grades that faced the prospect of vouchers unstable and insignificant. In addition to relying on incorrect comparisons of 1999 and 2000 test scores by grade level, Camilli and Bulkley make a theoretical argument for disaggregating results. They argue: β€œthe results of a policy implementation may be different at different grades, even if this is not an a priori expectation.” The results of policy implementation may be different in rural and urban areas. Why not disaggregate the results by grade level and urbanicity? The results of policy implementation may also be different in each of the 64 school districts in Florida. We could disaggregate to incredibly small samples if we wished. The obvious argument against disaggregating results, even when plausible differences in policy implementation may exist, is that we do not want to disaggregate results so that samples are too small to be reliable unless there is compelling evidence that requires disaggregation. Disaggregating by grade level gives us only a handful of failing schools to examine in grades 8 and 10; numbers that are too small to yield reliable results. Since Camilli and Bulkley do not provide compelling evidence for disaggregating results into such small units, we ought not to do so. Fourth, Camilli and Bulkley propose an alternative, and biased, way of addressing the possibility that regression to the mean might account for at least some of the improvement at schools that faced the prospect of vouchers. Their proposal is the effect of regression to the mean could be modeled as the slope of the regression line produced by estimating this year’s scores based on last year’s scores. The β€œtrue” gain for F schools would then only be the amount by which F schools improved beyond the improvement predicted by the regression line. That is, in their view the voucher effect can accurately be measured as the error term from the regression model. Among the many problems with the approach they propose for correcting for regression to the mean is that their estimate of the influence of regression to the mean, the slope of the regression line, is actually influenced by the magnitude of the true voucher effect. Let’s say that we found a program with the same amount of regression to the mean as in Florida but the true voucher effect were twice as large. If we then introduced the β€œcorrection” that Camilli and Bulkley propose we would wrongly attribute some of the true voucher effect to their adjustment for regression to the mean and underestimate the voucher effect. This would occur because the slope of the line that predicts one year’s scores from the previous scores would become more steep because the failing schools experienced larger true gains. Remember under our hypothetical the influence of regression to the mean is unchanged yet their β€œcorrection” for regression to the mean would change with a change in the true voucher effect. Obviously a correction for an error that increases when the true effect increases is seriously flawed because it is counting as error effects are that are true effects. Camilli and Bulkley’s correction for regression to the mean is similarly seriously flawed because the line that failing schools must over-perform in their analysis is itself influenced by the size of the true voucher effect among failing schools. A more reasonable correction for regression to the mean would estimate the slope of the line for predicting one year’s scores from the previous year’s scores excluding the failing schools and then judge the extent to which the schools that faced the prospects of vouchers over-performed that expectation. This way the correction for regression to the mean could not change under a scenario in which the true voucher effect was made larger while the true regression to the mean remained constant. I performed precisely this type of analysis in Table 5 to show that the true voucher effect is around 4 points in reading, 8 points in math, and .25 points on writing (which uses a different scale). Fifth, Camilli and Bulkley wish to covert all of these point gains into smaller changes in terms of standard deviations by using an estimated student-level standard deviation instead of the school-level standard deviation that I use. They do not describe how they estimate the standard deviation for individual student test scores, but it is 3.5 times as large as the standard deviation for the results reported as school averages. It is true that variation in the scores will be greater at the student level than at the school level, but it is not at all clear why one should use individual level variation for calculating effect sizes. The unit of analysis in my study is correctly the school level. Schools are assigned grades by the state, not students. Schools face the prospect that their students will be offered vouchers if they do not improve. Schools must develop strategies for improving their efforts so that test scores will rise. The results in my study are appropriately reported as school averages and the effect sizes are appropriately computed using school level variation in scores. The only obvious appeal for using student level variation to compute effect sizes is that it makes those effects three and one half times smaller. By this time it should be clear that making the positive effects from the A-Plus choice and accountability program smaller was probably the point of Camilli and Bulkley writing their piece, not attempting to identify the program’s true effects. At each point they distorted my findings, misrepresented the data, or employed analytical techniques that appear designed to minimize the positive results from the A-Plus program. Let me quickly note the other important finding of my report that they did not challenge: the Florida Comprehensive Assessment Test (FCAT) is a reliable measure of student performance because its results correlate highly with the results of a low-stakes standardized test administered around the same time. They note that the correlation at the school level is higher than it would be at the individual level (as I did in endnote 11), but they do not challenge the claim that Florida’s testing program produces reliable measures of student performance. This implicit concession on their part is important because the finding contradicts the claims of opponents of testing and accountability programs who regularly appear in the web-only Education Policy Analysis Archives.
{"url":"http://www.manhattan-institute.org/html/cr_aplus_greenes_reply.htm","timestamp":"2014-04-19T16:12:11Z","content_type":null,"content_length":"29874","record_id":"<urn:uuid:32bdb712-2be7-48d4-a393-7e7c56b26b83>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
Emperical Labs’ Lil Freq is a parametric equalizer with 8 sections of analog processing power. Claimed to be extremely β€œclean” and a reliable tool for de-essing and getting your desired sound. 24 more words SM1H Parametric Equations of Circles, Ellipses and Hyperbolae. This video is relevant to students undertaking the Year 12 subject of Specialist Mathematics Units 3 and 4 in the State of Victoria, SM1H Parametric Equations Part 2. This video is relevant to students undertaking the Year 12 subject of Specialist Mathematics Units 3 and 4 in the State of Victoria, Australia. SM1H Parametric Equations Part 1. This video is relevant to students undertaking the Year 12 subject of Specialist Mathematics Units 3 and 4 in the State of Victoria, Australia. It is also called independent t test. When do we use it? When there are: a) Differences between conditions b) One variable c) Two conditions… 484 more words It is also called paired samples t test because the test pairs the samples. When do we use it? When there are: a) Differences between conditions… 465 more words
{"url":"http://en.wordpress.com/tag/parametric/","timestamp":"2014-04-20T18:46:27Z","content_type":null,"content_length":"182687","record_id":"<urn:uuid:6f973458-869e-4ebf-8605-5e55432c84f2>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
urgent plz help me in thissss tricky q September 12th 2006, 10:47 PM urgent plz help me in thissss tricky q can anybody help me??? in an election contested by 2 candidates,one secured 37% and lost the election by a margin of 16640 votes. Find the number of votes polled and the votes secured by the defeated September 12th 2006, 11:35 PM Candidate that lost got 37% Candidate that won got 63% Candidate lost by 16640 This is the difference between the two candidates So 16640 votes is 63 - 37 = 26% 16640/26 = 640 votes same as 1% From this can work out 100% for total votes and 37% for votes candidate that lost got September 13th 2006, 01:13 AM I always say that if you know what you are supposed to find in a word problem then you have solved 30 percent of the problem already. The last 70 percent is the solution---which is playing around or concentrating on how to get what you are asked to find. :-) In your word problem here, you are asked to find the total number of votes polled, and the votes secured by the defeated candidate. Easy. The problem is 30 percent solved already! You don't know yet those two so you assign variables or unknowns to them for now. Let V = total votes polled. And X = votes gathered by the defeated candidate. So you have two unknowns. You need two independent equations to solve for these two unknowns. The problem says the loser secured 37 percent of the total votes, so, X = 37% of V X = (0.37)V -----Eq.(1) The problem also says the loser lost by a margin of 16,640 votes. That means the difference between the votes secured by the winner and the votes secured by the loser is 16,640 votes. If the loser got X votes, and the total votes is V, then the winner got (V-X) votes. (V-X) -X = 16,640 V -2X = 16,640 -----Eq.(2) There, you have your two equations. Solved the problem! Substitute the X from Eq.(1) into Eq.(2), V -2(0.37)V = 16,640 V -(0.74)V = 16,640 (0.26)V = 16,640 V = (16,640)/(0.26) = 64,000 votes, total. -------answer. Plug that into Eq.(1), X = (0.37)(64,000) = 23,680 votes, secured by the loser. ----answer. Votes of loser = 23,680 votes. So votes of winner = 64,000 -23,680 = 40,320 votes. 40,320 -23,680 =? 16,640 16,640 =? 16,640 Yes, so, OK.
{"url":"http://mathhelpforum.com/algebra/5470-urgent-plz-help-me-thissss-tricky-q-print.html","timestamp":"2014-04-16T18:07:22Z","content_type":null,"content_length":"6779","record_id":"<urn:uuid:3c67a1ca-3ac5-4f90-a0a4-26224997b38f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Render shapefiles in opengl just like ArcGIS does 09-15-2013, 10:46 AM #1 Newbie Newbie Join Date Sep 2013 Anyone interested in rendering geographic data with opengl! please help me As you know we have two kinds of coordinate systems in GIS : Geographic coordinate systems and projected coordinate systems consider these situations: I have a shapefile that has projected coordinate system, I mean the coordinates of the vertexes are in PCS, so there's no problem I can extract vertices with GDAL/OGR library and then show them in OpenGL with orthographic projection. since the coordinates are in meters and they're projected on a 2D plane, there will be no problem I think. The datasource has a geographic coordinate system and I just want to render it in a projected coordinate system, so I have to do the transformation with PROJ.4 library and then render the transformed coordinates in OpenGL. Again I think there will be no problem, since the coordinates I want to show areb projected on a 2D plane. The datasource has a geographic coordinate system and I want to render them in the the same GCS on an OpenGL window. So I extract coordinates with GDAL/OGR in geographic coordinate system and without any transformations, I'll render them in OpenGL. How can I achieve this? I mean how can I set up an ellipsoidal plane in an OpenGL window just like ArcGIS does renders data in geographic coordinate system? The datasource has a geographic coordinate system and I want to render them in the the same GCS on an OpenGL window. So I extract coordinates with GDAL/OGR in geographic coordinate system and without any transformations, I'll render them in OpenGL. How can I achieve this? I mean how can I set up an ellipsoidal plane in an OpenGL window just like ArcGIS does renders data in geographic coordinate system? You can use PROJ's pj_geodetic_to_geocentric() to convert geodetic (latitude, longitude, altitude) to geocentric (X,Y,Z) coordinates, or you can convert them yourself easily enough (just use the equations for spherical coordinates; a 297/298 ellipsoid is visually indistinguishable from a sphere, and there's not much point in getting it exact unless you know the physical aspect ratio of the monitor's pixels to better than 1/300). You can use PROJ's pj_geodetic_to_geocentric() to convert geodetic (latitude, longitude, altitude) to geocentric (X,Y,Z) coordinates, or you can convert them yourself easily enough (just use the equations for spherical coordinates; a 297/298 ellipsoid is visually indistinguishable from a sphere, and there's not much point in getting it exact unless you know the physical aspect ratio of the monitor's pixels to better than 1/300). but I don't mean I want to convert the coordinates to geocentric surely in that case I can treat coordinates like 3D coordinates and render them in an opengl window which has projection set up with glFrustrum. I mean I want to see coordinates in decimal degrees whenever I move over the window and the coordinates are shown in status bar. Then you'll need to be more specific about what you actually want. Saying "like ArcGIS" doesn't help unless you've actually seen ArcGIS. E.g. you say I want to render them in the the same GCS on an OpenGL window. Taken literally, that would require an ellipsoidal monitor. Displaying them on a flat monitor requires a projection. It doesn't have to be a standardised projection, but it needs to map lat/lon to x/y. Also, to convert screen coordinates back to geographic coordinates for the status bar requires that you can compute the inverse projection (PROJ can do this for some projections, others require root-finding). 09-15-2013, 11:34 AM #2 Member Regular Contributor Join Date Jun 2013 09-15-2013, 12:26 PM #3 Newbie Newbie Join Date Sep 2013 09-15-2013, 01:00 PM #4 Member Regular Contributor Join Date Jun 2013 09-16-2013, 12:49 AM #5 Newbie Newbie Join Date Sep 2013
{"url":"http://www.opengl.org/discussion_boards/showthread.php/182712-Render-shapefiles-in-opengl-just-like-ArcGIS-does?p=1254855","timestamp":"2014-04-20T08:29:03Z","content_type":null,"content_length":"53041","record_id":"<urn:uuid:9631523b-3e49-42ec-a5e1-ae3bbddc45a2>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Voronoi Diagrams Date: 12/12/2000 at 03:46:12 From: Karl Subject: Voronoi diagram On a Voronoi diagram, how do you know which lines you need, and which parts of those lines you need? We have already searched on many sites and in our math book. Date: 12/13/2000 at 11:54:20 From: Doctor Floor Subject: Re: Voronoi diagram Hi, Karl, Thanks for your question. Using the following figure I will try to explain how to make a Voronoi cell, and which lines have to be used: I hope this answers your question. If you have more questions, or if things in my reply are unclear, please write back. Best regards, - Doctor Floor, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/51798.html","timestamp":"2014-04-17T19:00:24Z","content_type":null,"content_length":"7096","record_id":"<urn:uuid:4a23507e-ef0f-47fd-9f12-8e3b65d19ded>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
What is 29 degrees celsius in fahrenheit? You asked: What is 29 degrees celsius in fahrenheit? Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/what_is_29_degrees_celsius_in_fahrenheit","timestamp":"2014-04-18T19:23:42Z","content_type":null,"content_length":"56685","record_id":"<urn:uuid:49d48aa3-139a-4de3-a799-f258a64bf4b8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
linear programs Results 1 - 10 of 12 , 1998 "... SeDuMi is an add-on for MATLAB, that lets you solve optimization problems with linear, quadratic and semidefiniteness constraints. It is possible to have complex valued data and variables in SeDuMi. Moreover, large scale optimization problems are solved efficiently, by exploiting sparsity. This pape ..." Cited by 736 (3 self) Add to MetaCart SeDuMi is an add-on for MATLAB, that lets you solve optimization problems with linear, quadratic and semidefiniteness constraints. It is possible to have complex valued data and variables in SeDuMi. Moreover, large scale optimization problems are solved efficiently, by exploiting sparsity. This paper describes how to work with this toolbox. - Annals of Operations Research , 1996 "... 1 Introduction Consider the linear programming (LP) problem in the standard form: (LP) minimize cT x ..." - MATH. PROGRAMMING , 2004 "... We present polynomial-time interior-point algorithms for solving the Fisher and Arrow-Debreu competitive market equilibrium problems with linear utilities and n players. Both of them have the arithmetic operation complexity bound of O(n 4 log(1/Ι›)) for computing an Ι›-equilibrium solution. If the p ..." Cited by 36 (7 self) Add to MetaCart We present polynomial-time interior-point algorithms for solving the Fisher and Arrow-Debreu competitive market equilibrium problems with linear utilities and n players. Both of them have the arithmetic operation complexity bound of O(n 4 log(1/Ι›)) for computing an Ι›-equilibrium solution. If the problem data are rational numbers and their bit-length is L, then the bound to generate an exact solution is O(n 4 L) which is in line with the best complexity bound for linear programming of the same dimension and size. This is a significant improvement over the previously best bound O(n 8 log(1/Ι›)) for approximating the two problems using other methods. The key ingredient to derive these results is to show that these problems admit convex optimization formulations, efficient barrier functions and fast rounding techniques. We also present a continuous path leading to the set of the Arrow-Debreu equilibrium, similar to the central path developed for linear programming interior-point methods. This path is derived from the weighted logarithmic utility and barrier functions and the Brouwer fixed-point theorem. The defining equations are bilinear and possess some primal-dual structure for the application of the Newton-based path-following method. , 1998 "... We consider an infeasible-interior-point algorithm, endowed with a finite termination scheme, applied to random linear programs generated according to a model of Todd. Such problems have degenerate optimal solutions, and possess no feasible starting point. We use no information regarding an optimal ..." Cited by 11 (3 self) Add to MetaCart We consider an infeasible-interior-point algorithm, endowed with a finite termination scheme, applied to random linear programs generated according to a model of Todd. Such problems have degenerate optimal solutions, and possess no feasible starting point. We use no information regarding an optimal solution in the initialization of the algorithm. Our main result is that the expected number of iterations before termination with an exact optimal solution is O(n ln(n)). Keywords: Linear Programming, Average-Case Behavior, Infeasible-Interior-Point Algorithm. Running Title: Probabilistic Analysis of an LP Algorithm 1 Dept. of Management Sciences, University of Iowa. Supported by an Interdisciplinary Research Grant from the Center for Advanced Studies, University of Iowa. 2 Dept. of Mathematics, Valdosta State University. Supported by an Interdisciplinary Research Grant from the Center for Advanced Studies, University of Iowa. 3 Dept. of Mathematics, University of Iowa. Supported by ... , 2004 "... In this paper we present a restructuring of the computations in Lenstra’s methods for solving mixed integer linear programs. We show that the problem of finding a good branching hyperplane can be formulated on an adjoint lattice of the Kernel lattice of the equality constraints without requiring any ..." Cited by 8 (0 self) Add to MetaCart In this paper we present a restructuring of the computations in Lenstra’s methods for solving mixed integer linear programs. We show that the problem of finding a good branching hyperplane can be formulated on an adjoint lattice of the Kernel lattice of the equality constraints without requiring any dimension reduction. As a consequence the short lattice vector finding algorithms, such as Lenstra, Lenstra, LovΓ‘sz (LLL) [15] or the generalized basis reduction algorithm of LovΓ‘sz and Scarf [18] are described in the space of original variables. Based on these results we give a new natural heuristic way of generating branching hyperplanes, and discuss its relationship with recent reformulation techniques of Aardal and Lenstra [1]. We show that the reduced basis available at the root node has useful information on the branching hyperplanes for the generalized branch-and-bound tree. Based on these results algorithms are also given for solving mixed convex integer programs. , 1997 "... this paper wehave selected the primal-dual logarithmic barrier algorithm to present our ideas, because it and its modified versions are considered, in general, to be the most efficient in practice. The computational results presented in this paper were obtained using implementations of this algorith ..." Cited by 2 (0 self) Add to MetaCart this paper wehave selected the primal-dual logarithmic barrier algorithm to present our ideas, because it and its modified versions are considered, in general, to be the most efficient in practice. The computational results presented in this paper were obtained using implementations of this algorithm. It is to be noted, however, that this choice has notational consequences only. Practically,anyinterior point method, even nonlinear ones can be discussed in a similar linear algebra framework. Let us consider the linear programming problem , 2004 "... We describe Fortran subroutines for network flow optimization using an interior point network flow algorithm, that, together with a Fortran language driver, make up PDNET. The algorithm is described in detail and its implementation is outlined. Usage of the package is described and some computationa ..." Cited by 2 (1 self) Add to MetaCart We describe Fortran subroutines for network flow optimization using an interior point network flow algorithm, that, together with a Fortran language driver, make up PDNET. The algorithm is described in detail and its implementation is outlined. Usage of the package is described and some computational experiments are reported. Source code for the software can be downloaded at "... This paper studies a primal-dual interior/exterior-point path-following approach for linearprogramming that is motivated on using an iterative solver rather than a direct solver for the search direction. We begin with the usual perturbed primal-dual optimality equations Fu(x, y, z) = 0. Under nonde ..." Cited by 2 (1 self) Add to MetaCart This paper studies a primal-dual interior/exterior-point path-following approach for linearprogramming that is motivated on using an iterative solver rather than a direct solver for the search direction. We begin with the usual perturbed primal-dual optimality equations Fu(x, y, z) = 0. Under nondegeneracy assumptions, this nonlinear system is well-posed,i.e. it has a nonsingular Jacobian at optimality and is not necessarily ill-conditioned as the iterates approach optimality. We use a simple preprocessing step to eliminate boththe primal and dual feasibility equations. This results in a single bilinear equation that maintains the well-posedness property. We then apply both a direct solution techniqueas well as a preconditioned conjugate gradient method (PCG), within an inexact Newton framework, directly on the linearized equations. This is done without forming the usualnormal equations, NEQ, or augmented system. Sparsity is maintained. The work of aniteration for the PCG approach consists almost entirely in the (approximate) solution of this well-posed linearized system. Therefore, improvements depend on efficient preconditioning. - SIAM Journal on Optimization "... In this paper we study linear optimization problems with multi-dimensional linear positive second-order stochastic dominance constraints. By using the polyhedral properties of the secondorder linear dominance condition we present a cutting-surface algorithm, and show its finite convergence. The cut ..." Cited by 2 (1 self) Add to MetaCart In this paper we study linear optimization problems with multi-dimensional linear positive second-order stochastic dominance constraints. By using the polyhedral properties of the secondorder linear dominance condition we present a cutting-surface algorithm, and show its finite convergence. The cut generation problem is a difference of convex functions (DC) optimization problem. We exploit the polyhedral structure of this problem to present a novel branch-and-cut algorithm that incorporates concepts from concave minimization and binary integer programming. A linear programming problem is formulated for generating concavity cuts in our case, where the polyhedra is unbounded. We also present duality results for this problem relating the dual multipliers to utility functions, without the need to impose constraint qualifications, which again is possible because of the polyhedral nature of the problem. Numerical examples are presented showing the nature of solutions of our model.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1061979","timestamp":"2014-04-21T13:56:11Z","content_type":null,"content_length":"36197","record_id":"<urn:uuid:a6a20c45-8955-4aa5-96ea-80155fbef778>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Algebra Tutors Santa Cruz, CA 95060 Math Graduate student, UCSC, experience from algebra to calculus. I have been tutoring math since I was an undergraduate. I have experience tutoring precalculus, trigonometry, calculus, linear algebra and differential equations. Currently I work as a teaching assistant in the UCSC math department. I am known as a dedicated TA and... Offering 5 subjects including calculus
{"url":"http://www.wyzant.com/Santa_Cruz_CA_Linear_Algebra_tutors.aspx","timestamp":"2014-04-20T01:34:28Z","content_type":null,"content_length":"59942","record_id":"<urn:uuid:9d601918-523e-4799-ae8b-841e6025b85a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Factor X The factor theorem states that a polynomial f(x) is divisible by (x-a) if f(a)=0. The given polynomial is Since it is divisible by x, it follows that f(x)=0, therefore, the given polynomial is equal to zero, and c=0. True, because, if c is a non-zero constant, the polynomial is not divisible by x. Character is who you are when no one is looking.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=30746","timestamp":"2014-04-17T16:07:28Z","content_type":null,"content_length":"13287","record_id":"<urn:uuid:aa8a0956-9752-43cf-96d0-237d485dbb13>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
the definition of axiom of choice Computing Dictionary Axiom of Choice definition (AC, or "Choice") An set theory If X is a set of sets, and S is the union of all the elements of X, then there exists a function f:X -> S such that for all non-empty x in X, f(x) is an element of x. In other words, we can always choose an element from each set in a set of sets, simultaneously. Function f is a "choice function" for X - for each x in X, it chooses an element of x. Most people's reaction to AC is: "But of course that's true! From each set, just take the element that's biggest, stupidest, closest to the North Pole, or whatever". Indeed, for any set of sets, we can simply consider each set in turn and pick an arbitrary element in some such way. We can also construct a choice function for most simple infinite sets of sets if they are generated in some regular way. However, there are some infinite sets for which the construction or specification of such a choice function would never end because we would have to consider an infinite number of separate cases. For example, if we express the real number line R as the union of many "copies" of the rational numbers , Q, namely Q, Q+a, Q+b, and infinitely (in fact uncountably) many more, where a, b, etc. are irrational numbers no two of which differ by a rational, and Q+a == q+a : q in Q we cannot pick an element of each of these "copies" without AC. An example of the use of AC is the theorem which states that the union of countable sets is countable. I.e. if X is countable and every element of X is countable (including the possibility that they're finite), then the sumset of X is countable. AC is required for this to be true in general. Even if one accepts the axiom, it doesn't tell you how to construct a choice function, only that one exists. Most mathematicians are quite happy to use AC if they need it, but those who are careful will, at least, draw attention to the fact that they have used it. There is something a little odd about Choice, and it has some alarming consequences, so results which actually "need" it are somehow a bit suspicious, e.g. the Banach-Tarski paradox . On the other side, consider Russell's Attic AC is not a Zermelo FrΓ€nkel set theory (ZF). GΓΆdel and Paul Cohen proved that AC is independent of ZF, i.e. if ZF is consistent, then so are ZFC (ZF with AC) and ZF(~C) (ZF with the negation of AC). This means that we cannot use ZF to prove or disprove AC.
{"url":"http://dictionary.reference.com/browse/axiom+of+choice","timestamp":"2014-04-20T20:58:29Z","content_type":null,"content_length":"93307","record_id":"<urn:uuid:281d4d5b-bd15-45ac-a4f0-6b294d1e4bb6>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
August 6, 2008 This Week's Finds in Mathematical Physics (Week 268) John Baez This Week will be all about Frobenius algebras and modular tensor categories. But first, here's a beautiful photo of Io, the volcanic moon of Jupiter that I introduced back in "week266": 1) NASA Photojournal, A new year for Jupiter and Io, http://photojournal.jpl.nasa.gov/catalog/PIA02879 Io looks awfully close to Jupiter here! It's actually 2.5 Jupiter diameters away... but that's close enough to cause the intense tidal heating that leads to sulfur volcanoes. I told you about Frobenius algebras in "week174" and "week224", but I think it's time to talk about them again! In the last few weeks, I've run into them - and their generalizations - in a surprising variety of ways. First of all, Jamie Vicary visited me here in Paris and explained how certain Frobenius algebras can be viewed as classical objects living in a quantum world - governed by quantum logic. Mathematicians in particular are used to thinking of the quantum world as a mathematical structure resting on foundations of classical logic: first comes set theory, then Hilbert spaces on top of that. But what if it's really the other way around? What if classical mathematics is somehow sitting inside quantum theory? The world is quantum, after all. There are a couple of papers so far that discuss this provocative idea: 2) Bob Coecke and Dusko Pavlovic, Quantum measurements without sums, in The Mathematics of Quantum Computation and Technology, eds. Chen, Kauffman and Lomonaco, Chapman and Hall/CRC, New York, pp. 559-596. Also available as arXiv:quant-ph/0608035. 3) Jamie Vicary, Categorical formulation of quantum algebras, available as arXiv:0805.0432. Second, Paul-AndrΓ© MelliΓ¨s, the computer scientist and logician who's my host here, has been telling me how logic can be nicely formulated in certain categories - "*-autonomous categories" - which can be seen as categorified Frobenius algebras. Here the idea goes back to Ross Street: 4) Ross Street, Frobenius monads and pseudomonoids, J. Math. Physics 45 (2004) 3930-3948. Available as http://www.math.mq.edu.au/~street/Frob.pdf Paul-AndrΓ© is teaching a course on this and related topics; you can see the slides for his course here: 5) Paul-AndrΓ© MelliΓ¨s, Groupoides quantiques et logiques tensorielles: une introduction, course notes at http://www.pps.jussieu.fr/~mellies/teaching.html See especially the fourth class. But to get you ready for this material, I should give a quick introduction to the basics! If you're a normal mathematician, the easiest definition of "Frobenius algebra" is something like this. For starters, it's an "algebra": a vector space with an associative product that's linear in each argument, and an identity element 1. But what makes it "Frobenius" is that it's got a nondegenerate bilinear form g satisfying this axiom: g(ab,c) = g(a,bc) I'm calling it "g" to remind geometers of how nondegenerate bilinear forms are used as "metrics", like the metric tensor at a point of a Riemannian or Lorentzian manifold. But beware: we'll often work with complex instead of real vector spaces. And, we won't demand that g(a,b) = g(b,a), though this holds in many examples. Let's see some examples! For starters, we could take the algebra of n Γ— n matrices and define g(a,b) = tr(ab) where "tr" is the usual trace. Or, we could perversely stick any nonzero number in this formula, like g(a,b) = -37 tr(ab) Or, we could take a bunch of examples like this and take their direct sum. This gives us the most general "semisimple" Frobenius algebra. So, semisimple Frobenius algebras are pathetically easy to classify. There's also a vast wilderness of non-semisimple ones, which will never be classified. But for a nice step in this direction, try Prop. 2 in this paper: 6) Steve Sawin, Direct sum decompositions and indecomposable TQFTs, J. Math. Phys. 36 (1995) 6673-6680. Also available as q-alg/9505026. This classifies all commutative Frobenius algebras that are "indecomposable" - not a direct sum of others. Note the mention of topological quantum field theories, or TQFTs. Here's why. Suppose you have an n-dimensional TQFT. This gives vector spaces for (n-1)-dimensional manifolds describing possible choices of "space", and operators for n-dimensional manifolds going between these, which describe possible choices of "spacetime". So, it gives you some vector space for the (n-1)-sphere, say A. And, this vector space is a commutative Frobenius algebra! Let me sketch the proof. I'll use lots of hand-wavy reasoning, which is easy to make rigorous using the precise definition of a TQFT. For starters, there's the spacetime where two spherical universes collide and fuse into one. Here's what it looks like for n = 2: This gives the vector space A a multiplication: m: A βŠ— A β†’ A a βŠ— b |β†’ ab Next there's the spacetime where a spherical universe appears from nothing - a "big bang": This gives A an identity element, which we call 1: i: C β†’ A 1 |β†’ 1 Here C stands for the complex numbers, but mathematicians could use any field. Now we can use topology to show that A is an algebra - namely, that it satisfies the associative law: (ab)c = a(bc) and the left and right unit laws: 1a = a = 1 But why is it a Frobenius algebra? To see this, let's switch the future and past in our previous argument! The spacetime where a spherical universe splits in two gives A a "comultiplication": Ξ”: A β†’ A βŠ— A The spacetime where a spherical universe disappears into nothing - a "big crunch" - gives A a trace, or more precisely a "counit": e: A β†’ C And, a wee bit of topology shows that these make A into a "coalgebra", satisfying the "coassociative law" and the left and right "counit laws": Everything has just been turned upside down! It's easy to see that the multiplication on A is commutative, at least for n > 1: Similarly, the comultiplication is "cocommutative" - just turn the above proof upside down! But why is A a Frobenius algebra? The point is that the algebra and coalgebra structures interact in a nice way. We can use the product and counit to define a bilinear form: g(a,b) = e(ab) This is just what we did in our matrix algebra example, where e was a multiple of the trace. We can also think of g as a linear operator g: A βŠ— A β†’ C But now we see this operator comes from a spacetime where two universes collide and then disappear into nothing: To check the Frobenius axiom, we just use associativity: g(ab,c) = e((ab)c) = e(a(bc)) = g(a,bc) But why is g nondegenerate? I'll just give you a hint. The bilinear form g gives a map from A to the dual vector space A*: a |β†’ g(a,-) Physicists would call this map "lowering indices with the metric g". To show that g is nondegenerate, it's enough to find an inverse for this map, which physicists would call "raising indices". This should be a map going back from A* to A. To build a map going back like this, it's enough to get a map h: C β†’ A βŠ— A and for this we use the linear operator coming from this spacetime: The fact that "raising indices" is the inverse of "lowering indices" then follows from the fact that you can take a zig-zag in a piece of pipe and straighten it out! So, any n-dimensional TQFT gives a Frobenius algebra, and in fact a commutative Frobenius algebra for n > 1. In general there's more to the TQFT than this Frobenius algebra, since there are spacetimes that aren't made of the building blocks I've drawn. But in 2 dimensions, every spacetime can be built from these building blocks: the multiplication and unit, comultiplication and counit. So, with some work, one can show that A 2D TQFT IS THE SAME AS A COMMUTATIVE FROBENIUS ALGEBRA. This idea goes back to Dijkgraaf: 7) Robbert H. Dijkgraaf, A Geometric Approach To Two-Dimensional Conformal Field Theory, PhD thesis, University of Utrecht, 1989. and a formal proof was given by Abrams: 8) Lowell Abrams, Two-dimensional topological quantum field theories and Frobenius algebra, Jour. Knot. Theory and its Ramifications 5 (1996), 569-587. This book is probably the best place to learn the details: 9) Joachim Kock, Frobenius Algebras and 2d Topological Quantum Field Theories, Cambridge U. Press, Cambridge, 2004. but for a goofier explanation, try this: 10) John Baez, Winter 2001 Quantum Gravity Seminar, Track 1, weeks 11-17, http://math.ucr.edu/home/baez/qg-winter2001/ To prove the equivalence of 2d TQFTs and commutative Frobenius algebras, it's handy to use a different definition of Frobenius algebra, equivalent to the one I gave. I said a Frobenius algebra was an algebra with a nondegenerate bilinear form satisfying g(ab,c) = g(a,bc). But this is equivalent to having an algebra that's also a coalgebra, with multiplication and comultiplication linked by the "Frobenius equations": (Ξ” βŠ— 1[A]) (1[A] βŠ— m) = Ξ” m = (m βŠ— 1[A]) (1[A] βŠ— Ξ”) These equations are a lot more charismatic in pictures! We can also interpret them conceptually, as follows. If you have an algebra A, it becomes an (A,A)-bimodule in an obvious way... well, obvious if you know what this jargon means, at least. A βŠ— A also becomes an (A,A)-bimodule, like this: a (b βŠ— c) d = ab βŠ— cd Then, a Frobenius algebra is an algebra that's also a coalgebra, where the comultiplication is an (A,A)-bimodule homomorphism! This scary sentence has the Frobenius equations hidden inside it. The Frobenius equations have a fascinating history, going back to Lawvere, Carboni and Walters, Joyal, and others. Joachim Kock's website includes some nice information about this. Read what Joyal said about Frobenius algebras that made Eilenberg ostentatiously rise and leave the room! 11) Joachim Kock, Remarks on the history of the Frobenius equation, http://mat.uab.es/~kock/TQFT.html#history The people I just mentioned are famous category theorists. They realized that Frobenius algebra can be generalized from the category of vector spaces to any "monoidal category" - that is, any category with tensor products. And if this monoidal category is "symmetric", it has an isomorphism between X βŠ— Y and Y βŠ— X for any objects X and Y, which lets us generalize the notion of a commutative Frobenius object. For a nice intro to these ideas, try the slides of this talk: 12) Ross Street, Frobenius algebras and monoidal category, talk at the annual meeting of the Australian Mathematical Society, September 2004, available at http://www.maths.mq.edu.au/~street/FAMC.pdf These ideas allow for a very slick statement of the slogan I mentioned: A 2D TQFT IS THE SAME AS A COMMUTATIVE FROBENIUS ALGEBRA. For any n, there's a symmetric monoidal category nCob, with: β€’ compact oriented (n-1)-manifolds as objects; β€’ compact oriented n-dimensional cobordisms as morphisms. The objects are choices of "space", and the morphisms are choices of "spacetime". The sphere is a very nice object in nCob; let's call it A. Then all the pictures above show that A is a Frobenius algebra in nCob! It's commutative when n > 1. And when n = 2, that's all there is to say! More precisely: 2Cob IS THE FREE SYMMETRIC MONOIDAL CATEGORY ON A COMMUTATIVE FROBENIUS ALGEBRA. So, to define a 2d TQFT, we just need to pick a commutative Frobenius algebra in Vect (the category of vector spaces). By "freeness", this determines a symmetric monoidal functor Z: 2Cob β†’ Vect and that's precisely what a 2d TQFT is! If you don't know what a symmetric monoidal functor is, don't worry - that's just what I'd secretly been using to translate from pictures of spacetimes to linear operators in my story so far. You can get a precise definition from those seminar notes of mine, or many other places. Now let's talk about some variations on the slogan above. We can think of the 2d spacetimes we've been drawing as the worldsheets of "closed strings" - but ignoring the geometry these worldsheets usually have, and keeping only the topology. So, some people call them "topological closed strings". We can also think about topological open strings, where we replace all our circles by intervals. Just as the circle gave a commutative Frobenius algebra, an interval gives a Frobenius algebra where the multiplication comes from two open strings joining end-to-end to form a single one: This open string Frobenius algebra is typically noncommutative - draw the picture and see! But, it's still "symmetric", meaning: g(a,b) = g(b,a) This is very nice. But physically, open strings like to join together and form closed strings, so it's better to consider closed and open strings together in one big happy family... or category. The idea of doing this for topological strings was developed by Moore and Segal: 13) Greg Moore, Lectures on branes, K-theory and RR charges, Clay Math Institute Lecture Notes (2002), available at http://www.physics.rutgers.edu/~gmoore/clay1/clay1.html Lauda and Pfeiffer developed this idea and proved that this category has a nice description in terms of Frobenius algebras: 14) Aaron Lauda and Hendryk Pfeiffer, Open-closed strings: two-dimensional extended TQFTs and Frobenius algebras, Topology Appl. 155 (2008) 623-666. Also available as arxiv:math.AT/0510664. Here's what they prove, encoded as a mysterious slogan: THE CATEGORY OF OPEN-CLOSED TOPOLOGICAL STRINGS IS THE FREE SYMMETRIC MONOIDAL CATEGORY ON A "KNOWLEDGEABLE" FROBENIUS ALGEBRA. If you like the pictures I've been drawing so far, you'll love this paper - since that's where I got most of these pictures! And, it's just the beginning of a longer story where Lauda and Pfeiffer build 2d TQFTs using state sum models: 15) Aaron Lauda and Hendryk Pfeiffer, State sum construction of two-dimensional open-closed topological quantum field theories, J. Knot Theory and its Ramifications 16 (2007), 1121-1163. Also available as arXiv:math/0602047. This generalizes a construction due to Fukuma, Hosono and Kawai, explained way back in "week16" and also in my seminar notes mentioned above. Then Lauda and Pfeiffer use this machinery to study knot 16) Aaron Lauda and Hendryk Pfeiffer, Open-closed TQFTs extend Khovanov homology from links to tangles, available as math/0606331. Alas, explaining this would be a vast digression. I want to keep talking about basic Frobenius stuff. I guess I should say a bit more about semisimple versus non-semisimple Frobenius algebras. Way back at the beginning of this story, I said you can get a Frobenius algebra by taking the algebra of n Γ— n matrices and defining g(a,b) = k tr(ab) for any nonzero constant k. Direct sums of these give all the semisimple Frobenius algebras. But any algebra acts on itself by left multiplication: L[a]: b |β†’ ab so for any algebra we can try to define g(a,b) = tr(L[a] L[b]) This bilinear form is nondegenerate precisely when our algebra is "strongly separable": 17) Marcelo Aguiar, A note on strongly separable algebras, available at http://www.math.tamu.edu/~maguiar/strongly.ps.gz Over the complex numbers, or any field of characteristic zero, an algebra is strongly separable iff it's finite-dimensional and semisimple. The story is trickier over other fields - see that last paper of Lauda and Pfeiffer if you're interested. Now, for n Γ— n matrices, g(a,b) = tr(L[a] L[b]) is n times the usual tr(ab). But it's better, in a way. The reason is that for any strongly separable algebra, g(a,b) = tr(L[a] L[b]) gives a Frobenius algebra with a cute extra property: if we comultiply and then multiply, we get back where we started! This is easy to see if you write the above formula for g using diagrams. Frobenius algebras with this cute extra property are sometimes called "special". If we use a commutative special Frobenius algebra to get a 2d TQFT, it fails to detect handles! That seems sad. But these papers: 18) Stephen Lack, Composing PROPs, Theory and Applications of Categories 13(2004), 147-163. Available at http://www.tac.mta.ca/tac/volumes/13/9/13-09abs.html 19) R. Rosebrugh, N. Sabadini and R.F.C. Walters, Generic commutative separable algebras and cospans of graphs, Theory and Applications of Categories 15 (Proceedings of CT2004), 164-177. Available at makes that sad fact seem good! Namely: Cospan(FinSet) IS THE FREE SYMMETRIC MONOIDAL CATEGORY ON A COMMUTATIVE SPECIAL FROBENIUS ALGEBRA. Here Cospan(FinSet) is the category of "cospans" of finite sets. The objects are finite sets, and a morphism from X to Y looks like this: X Y \ / F\ /G \ / v v If you remember the "Tale of Groupoidication" starting in "week247", you'll know about spans and how to compose spans using pullback. This is just the same only backwards: we compose cospans using But here's the point. A 2d cobordism is itself a kind of cospan: X Y \ / F\ /G \ / v v with two collections of circles included in the 2d manifold S. If we take connected components, we get a cospan of finite sets. Now we've lost all information about handles! And the circle - which was a commutative Frobenius algebra - becomes a mere one-point set - which is a special commutative Frobenius algebra. Now for a few examples of non-semisimple Frobenius algebras. First, take the exterior algebra Ξ›V over an n-dimensional vector space V, and pick any nonzero element of degree n - what geometers would call a "volume form". There's a unique linear map e: Ξ›V β†’ C which sends the volume form to 1 and kills all elements of degree < n. This is a lot like "integration" - and so is taking a trace. So, you should want to make Ξ›V into a Frobenius algebra using this g(a,b) = e(a ^ b) where ^ is the product in the exterior algebra. It's easy to see this is nondegenerate and satisfies the Frobenius axiom: g(ab,c) = e(a ^ b ^ c) = g(a,bc) So, it works! But, this algebra is far from semisimple. If you know about cohomology, you should want to copy this trick replacing the exterior algebra by the deRham cohomology of a compact oriented manifold, and replacing e by "integration". It still works. So, every compact manifold gives us a Frobenius algebra! If you know about algebraic varieties, you might want to copy this trick replacing the compact manifold by a complex projective variety. I'm no expert on this, but people seem to say that it only works for Calabi-Yau varieties. Then you can do lots of cool stuff: 20) Kevin Costello, Topological conformal field theories and Calabi-Yau categories, available as arxiv:math/0412149. Here a "Calabi-Yau category" is just the "many-object" version of a Frobenius algebra - a Calabi-Yau category with one object is a Frobenius algebra. There's much more to say about this wonderful paper, but I'm afraid for now you'll have to read it... I'm getting worn out, and I want to get to the new stuff I just learned! But before I do, I can't resist rounding off one corner I cut. I said that Frobenius algebras show up naturally by taking string theory and watering it down: ignoring the geometrical structure on our string worldsheets and remembering only their topology. A bit more precisely, 2d TQFTs assign linear operators to 2d cobordisms, but conformal field theories assign operators to 2d cobordisms equipped with conformal structures. Can we describe conformal field theories using Frobenius algebras? 21) Ingo Runkel, Jens Fjelstad, Jurgen Fuchs, Christoph Schweigert, Topological and conformal field theory as Frobenius algebras, available as arXiv:math/0512076. But, you need to use Frobenius algebras inside a modular tensor category! I wish I had more time to study modular tensor categories, and tell you all about them. They are very nice braided monoidal categories that are not symmetric. You can use them to build 3d topological quantum field theories, and they're also connected to other branches of math. For example, you can modular tensor categories consisting of nice representations of quantum groups. You can also can get them from rational conformal field theories - which is what the above paper by Runkel, Fjelstad, Fuchs and Schweigert is cleverly turning around. You can also get them from von Neumann algebras! If you want to learn the basics, this book is great - there's a slightly unpolished version free online: 22) B. Bakalov and A. Kirillov, Jr., Lectures on Tensor Categories and Modular Functors, American Mathematical Society, Providence, Rhode Island, 2001. Preliminary version available at http:// But if a book is too much for you, here's a nice quick intro. It doesn't say much about topological or conformal field theory, but it gives a great overview of recent work on the algebraic aspects of tensor categories: 23) Michael MΓΌger, Tensor categories: a selective guided tour, available as arXiv:0804.3587. Here's a quite different introduction to recent developments, at least up to 2004: 24) Damien Calaque and Pavel Etingof, Lectures on tensor categories, available as arXiv:math/0401246. Still more recently, Hendryk Pfeiffer has written what promises to be a fundamental paper describing how to think of any modular tensor category as the category of representations of an algebraic gadget - a "weak Hopf algebra": 25) Hendryk Pfeiffer, Tannaka-Krein reconstruction and a characterization of modular tensor categories, available as arXiv:0711.1402. And here's a paper that illustrates the wealth of examples: 26) Seung-moon Hong, Eric Rowell, Zhenghan Wang, On exotic modular tensor categories, available as arXiv:07108.5761. The abstract of this makes me realize that people have bigger hopes of understanding all modular tensor categories than I'd imagined: It has been conjectured that every (2+1)-dimensional TQFT is a Chern-Simons-Witten (CSW) theory labelled by a pair (G,k), where G is a compact Lie group, and k in H^4(BG,Z) is a cohomology class. We study two TQFTs constructed from Jones' subfactor theory which are believed to be counterexamples to this conjecture: one is the quantum double of the even sectors of the E[6] subfactor, and the other is the quantum double of the even sectors of the Haagerup subfactor. We cannot prove mathematically that the two TQFTs are indeed counterexamples because CSW TQFTs, while physically defined, are not yet mathematically constructed for every pair (G,k). The cases that are constructed mathematically include: β–‘ G is a finite group - the Dijkgraaf-Witten TQFTs; β–‘ G is a torus T^n; β–‘ G is a connected semisimple Lie group - the Reshetikhin-Turaev TQFTs. We prove that the two TQFTs are not among those mathematically constructed TQFTs or their direct products. Both TQFTs are of the Turaev-Viro type: quantum doubles of spherical tensor categories. We further prove that neither TQFT is a quantum double of a braided fusion category, and give evidence that neither is an orbifold or coset of TQFTs above. Moreover, the representation of the braid groups from the half E[6] TQFT can be used to build universal topological quantum computers, and the same is expected for the Haagerup case. Anyway, now let me say what Vicary and MelliΓ¨s have been explaining to me. I'll give it in a highly simplified form... and all mistakes are my own. First, from what I've said already, every commutative special Frobenius algebra over the complex numbers looks like C βŠ• C βŠ• ... C βŠ• C It's a direct sum of finitely many copies of C, equipped with its god-given bilinear form g(a,b) = tr(L[a] L[b]) So, this sort of Frobenius algebra is just an algebra of complex functions on a finite set. A map between finite sets gives an algebra homomorphism going back the other way. And the algebra homomorphisms between two Frobenius algebras of this sort all come from maps between finite sets. So, the category with: β€’ commutative special complex Frobenius algebras as objects; β€’ algebra homomorphisms as morphisms is equivalent to FinSet^op. This means we can find the category of finite sets - or at least its opposite, which is just as good - lurking inside the world of Frobenius algebras! Coecke, Pavlovic and Vicary explore the ramifications of this result for quantum mechanics, using Frobenius algebras that are Hilbert spaces instead of mere vector spaces. This lets them define a "†-Frobenius algebra" to be one where the comultiplication and counit are adjoint to the the multiplication and unit. They show that making a finite-dimensional Hilbert space into a commutative special †-Frobenius algebra is the same as equipping it with an orthonormal basis. There's no general way to duplicate quantum states - "you can't clone a quantum" - but if you only want to duplicate states lying in a chosen orthonormal basis you can do it. So, you can think of commutative special †-Frobenius algebras as "classical data types", which let you duplicate information. That's what the comultiplication does: duplicate! Any commutative special †-Frobenius algebra has a finite set attached to it: namely, the set of basis elements. So, we now see how to describe finite sets starting from Hilbert spaces and introducing a notion of "classical data type" formulated purely in terms of quantum concepts. The papers by Coecke, Pavlovic and Vicary go a lot further than my summary here. Jamie Vicary even studies how to categorify everything I've just mentioned! A subtlety: it's a fun puzzle to show that in any monoidal category, morphisms between Frobenius algebras that preserve all the Frobenius structure are automatically isomorphisms. See the slides of Street's talk if you get stuck: he shows how to construct the inverse, but you still get the fun of proving it works. So, the category with: β€’ commutative special complex Frobenius algebras as objects; β€’ Frobenius homomorphisms as morphisms is equivalent to the groupoid of finite sets. We get FinSet^op if we take algebra homomorphisms, and I guess we get FinSet if we take coalgebra homomorphisms. Finally, a bit about categorified Frobenius algebras and logic! I'm getting a bit tired, so I hope you believe that the concept of Frobenius algebra can be categorified. As I already mentioned, Frobenius algebras make sense in any monoidal category - and then they're sometimes called "Frobenius monoids". Similarly, categorified Frobenius algebras make sense in any monoidal bicategory, and then they're sometimes called "Frobenius pseudomonoids". These were introduced in Street's paper "Frobenius monads and pseudomonoids", cited above - but if you like pictures, you may also enjoy learning about them here: 27) Aaron Lauda, Frobenius algebras and ambidextrous adjunctions, Theory and Applications of Categories 16 (2006), 84-122, available at http://tac.mta.ca/tac/volumes/16/4/16-04abs.html Also available as arXiv:math/0502550. I explained some of the basics behind this paper in "week174". But now, I want to give a definition of *-autonomous categories, which simultaneously makes it clear that they're natural structures in logic, and that they're categorified Frobenius algebras! Suppose A is any category. We'll call its objects "propositions" and its morphisms "proofs". So, a morphism f: a β†’ b is a proof that a implies b. Next, suppose A is a symmetric monoidal category and call the tensor product "or". So, for example, given proofs f: a β†’ b, f': a' β†’ b' we get a proof f or f': a or a' β†’ b or b' Next, suppose we make the opposite category A^op into a symmetric monoidal category, but with a completely different tensor product, that we'll call "and". And suppose we have a monoidal functor: not: A β†’ A^op So, for example, we have not(a or b) = not(a) and not(b) or at least they're isomorphic, so there are proofs going both ways. Now we can apply "op" and get another functor I'll also call "not": not: A^op β†’ A Using the same name for this new functor could be confusing, but it shouldn't be. It does the same thing to objects and morphisms; we're just thinking about the morphisms as going backwards. Next, let's demand that this new functor be monoidal! This too is quite reasonable; for example it implies that not(a and b) = not(a) or not(b) or at least they're isomorphic. Next, let's demand that this pair of functors: A A^op be a monoidal adjoint equivalence. So, for example, there's a one-to-one correspondence between proofs not(a) β†’ b and proofs not(b) β†’ a Now for the really fun part. Let's define a kind of "bilinear form": g: A Γ— A β†’ Set where g(a,b) is the set of proofs not(a) β†’ b And let's demand that g satisfy the Frobenius axiom! In other words, let's suppose there's a natural isomorphism: g(a or b, c) β‰… g(a, b or c) Then A is a "*-autonomous category"! And this is a sensible notion, since it amounts to requiring a natural one-to-one correspondence between proofs not(a or b) β†’ c and proofs not(a) β†’ b or c So, categorified Frobenius algebras are a nice framework for propositional logic! In case it slipped by too fast, let me repeat the definition of *-autonomous category I just gave. It's a symmetric monoidal category A with a monoidal adjoint equivalence called "not" from A (with one tensor product, called "or") to A^op (with another, called "and"), such that the functor g: A Γ— A β†’ Set (a,b) |β†’ hom(not(a),b) is equipped with a natural isomorphism g(a or b, c) β‰… g(a, b or c) I hope I didn't screw up. I want this definition to be equivalent to the usual one, which was invented by Michael Barr quite a while ago: 28) Michael Barr, *-Autonomous Categories, Lecture Notes in Mathematics 752, Springer, Berlin, 1979. By now *-autonomous categories become quite popular among those working at the interface of category theory and logic. And, there are many ways to define them. Brady and Trimble found a nice one: 29) Gerry Brady and Todd Trimble, A categorical interpretation of C. S. Peirce's System Alpha, Jour. Pure Appl. Alg. 149 (2000), 213-239. Namely, they show a *-autonomous category is the same as a symmetric monoidal category A equipped with a contravariant adjoint equivalence not: A β†’ A which is equipped with a "strength", and where the unit and counit of the adjunction respect this strength. Later, in his paper "Frobenius monads and pseudomonoids", Street showed that *-autonomous categories really do give Frobenius pseudomonoids in a certain monoidal bicategory with: β€’ categories as objects; β€’ profunctors (also known as distributors) as morphisms; β€’ natural transformations as 2-morphisms. Alas, I'm too tired to explain this now! It's a slicker way of saying what I already said. But the cool part is that this bicategory is like a categorified version of Vect, with the category of finite sets replacing the complex numbers. That's why in logic, the "nondegenerate bilinear form" looks like g: A Γ— A β†’ Set So: Frobenius algebras are lurking all over in physics, logic and quantum logic, in many tightly interconnected ways. There should be some unified explanation of what's going on! Do you have any Finally, here are two books on math and music that I should read someday. The first seems more elementary, the second more advanced: 30) Trudi Hammel Garland and Charity Vaughan Kahn, Math and Music - Harmonious Connections, Dale Seymour Publications, 1995. Review by Elodie Lauten on her blog Music Underground, http:// 31) Serge Donval, Histoire de l'Acoustique Musicale (History of Musical Acoustics), Editions Fuzaeau, Bressuire, France, 2006. Review at Music Theory Online, http://mto.societymusictheory.org/ Addenda: I thank Bob Coecke, Robin Houston, Steve Lack, Paul-AndrΓ© MelliΓ¨s, Todd Trimble, Jamie Vicary, and a mysterious fellow named Stuart for some very helpful corrections. You can't really appreciate the pictorial approach to Frobenius algebras until you use it to prove some things. Try proving that every homomorphism of Frobenius algebras is an isomorphism! Or for something easier, but still fun, start by assuming that a Frobenius algebra is an algebra and coalgebra satisfying the Frobenius equations and use this to prove the following facts: For more discussion, visit the n-Category CafΓ©. In particular, you'll see there's a real morass of conflicting terminology concerning what I'm calling "special" Frobenius algebras and "strongly separable" algebras. But if we define them as I do above, they're very nicely related. More precisely: an algebra is strongly separable iff it can be given a comultiplication and counit making it into a special Frobenius algebra. If we can do this, we can do it in a unique way. Conversely, the underlying algebra of a special Frobenius algebra is strongly separable. For more details, see: 32) nLab, Frobenius algebra, http://ncatlab.org/nlab/show/Frobenius+algebra 33) nLab, Separable algebra, http://ncatlab.org/nlab/show/separable+algebra 'Interesting Truths' referred to a kind of theorem which captured subtle unifying insights between broad classes of mathematical structures. In between strict isomorphism - where the same structure recurred exactly in different guises - and the loosest of poetic analogies, Interesting Truths gathered together a panoply of apparently disparate systems by showing them all to be reflections of each other, albeit in a suitably warped mirror. - Greg Egan, Incandescence Β© 2008 John Baez
{"url":"http://math.ucr.edu/home/baez/week268.html","timestamp":"2014-04-18T08:02:32Z","content_type":null,"content_length":"41962","record_id":"<urn:uuid:2836dba8-04b3-4378-a498-534d822609fa>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: November 1995 [00075] [Date Index] [Thread Index] [Author Index] Re: Q: Help processing large arrays? β€’ To: mathgroup at smc.vnet.net β€’ Subject: [mg2571] Re: Q: Help processing large arrays? β€’ From: wagner at bullwinkle.cs.Colorado.EDU (Dave Wagner) β€’ Date: Tue, 21 Nov 1995 09:25:20 -0500 β€’ Organization: University of Colorado, Boulder In article <DIBxnD.D4G at wri.com>, robert fuentes <robert at mps.lfwc.lockheed.com> wrote: > In this case, I have a 381 by 289 image that I need to run a >Sobel operator over (i.e., convolve with a 3 by 3 Sobel operator). This >involves taking a 3 by 3 array and moving it over the image pixel by >pixel, applying the multipliers of the Sobel operator to the >corresponding 3 by 3 area of the image and replacing the value of the >image with the sum of the operation. Here is my code that I use, it seems >to be very slow (680+ seconds on a P90 with 32Mb ram): >RPConvolve[image_,rows_,cols_,kernel_,ki_,kj_] := > Module[ > {iadjust,jadjust,mistart,mistop,mjstart,mjstop, > divider,nuimage,i,j,maskarea,istart,jstart,m,n}, >iadjust = (ki-1)/2; >jadjust = (kj-1)/2; >mistart = iadjust+1; >mistop = rows - iadjust; >mjstop = cols - jadjust; >nuimage = image; > For[j=mjstart,j<=mjstop,j++, > maskarea = Table[image[[m,n]], > {m,i-iadjust,i+iadjust}, > {n,j-jadjust,j+jadjust}]; > nuimage[[i,j]] = Apply[Plus,Flatten[maskarea*kernel]]; > ]; > ]; >thanks for any help, >Robert Fuentes >robert at mps.lfwc.lockheed.com To start with, a simple change you can make that will speed things up by a small constant factor is to replace the pair of For loops with a single Do loop: Do[ (* body of code *), {i, mistart, mistop}, {j, mjstart, mjstop}] This isn't changing your programming paradigm but Do's are more efficient than For's. Now if you want to make a radical paradigm shift, you can try this technique, which is due to Richard Gaylord. It's sort of like "bringing the mountain to Mohammed" rather than vice-versa: Instead of moving the convolution kernel along each pixel of the image, move the entire image over the convolution kernel! (You will need gobs of memory to do this.) The basic idea is, suppose your kernel is a b c d e f g h i What you want to do first is multiply the entire image by e, which can be done in Mathematica using the simple statement e*image. Next, you want to rotate the entire image down one row and multiply that by b. Since the image is a list of rows, you rotate the entire image down using a simple RotateRight[image, 1]. Then, rotate up one row (RotateLeft[image, 1]) and multiply by h. Rotate left one column (Map[RotateLeft[#,1]&, image]) and multiply by d. Similarly for the other elements of the kernel. When you're done, you'll have 9 partially convolved images (hence the need for memory). Add them up, and you have your entire convolved image. Of course, you might want to do something a bit more sophisticated than simple rotation in order to account for the edge effects. By the way, note that there are no explicit loops, and nowhere does the size of the image appear in the code. Functional programming at its finest! (The size of the convolution kernel, however, is "hard-wired" in.) Richard uses this technique to simulate physical systems. See his book on the subject. Dave Wagner Principia Consulting (303) 786-8371 dbwagner at princon.com
{"url":"http://forums.wolfram.com/mathgroup/archive/1995/Nov/msg00075.html","timestamp":"2014-04-17T00:53:07Z","content_type":null,"content_length":"37397","record_id":"<urn:uuid:e7f8a1f8-6d77-490b-abda-aef05b8b857b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
The "Stata Logs" collect the transcripts of six sessions using the statistical package Stata. Each session reproduces the results of (practically) all the analyses in one of the Chapters of my lecture notes on Generalized Linear Models. The material is organized by Chapters and Sections using exactly the same numbering system as the notes, so section 2.8 of the logs deals with the analysis of covariance models described in section 2.8 of the notes. The transcripts are formatted versions of actual Stata logs run using version 11. The text boxes set in a typewriter font contain commands or instructions to Stata, followed by the resulting output. You can tell the commands apart because they appear on lines beginning with a dot, or on continuation lines beginning with a greater than sign. The rest of the text set in the standard font represents comments or annotations, except for references to Stata commands which are also set in a typewriter-style font. The overall format is similar to that used in the Stata manuals themselves. The best way to use these transcripts is sitting by a computer, trying the different commands as you read along, probably with a printed copy of the notes by the side. I also recommend that you try to answer the few questions and exercises posed along the way. If you follow this procedure you will notice that sometimes I use the continuation comment /// to indicate that a command continues on another line. If you are using Stata interactively, just keep typing on the same line. While interactive use is probably good for learning, for more serious work I recommend that you prepare your commands in a "do file" and then ask Stata to run it. If nothing else, this will help document your work and ensure that you can reproduce your results. These logs were all produced using do files. Stata 8 introduced a graphical interface that lets you use menus and dialogs to specify your analyses. This feature can help beginners learn the commands, but I recommend that you get used to typing your commands from the outset, so you make an easy transition to do files. On the same vein, Stata 10 introduced a Graphics Editor that lets you modify a graph using a point-and-click interface. While this is convenient, once you have edited a graph interactively you can't easily reproduce it. Stata 11 moved further along these lines by introducing a Variables Manager that lets you modify variable and value labels and other properties of your variables using a dialog. For serious research, however I recommend that you do all work using commands stored in a do file. The purpose of these notes is to illustrate the use of Stata in statistical analysis, not to provide a primer or tutorial. I have, however, written a short tutorial that you can find at http:// data.princeton.edu/stata. Please consult the Stata online help and manuals for more details. Revision History The "Stata Logs" were first published in January 1993 and targeted version 3. Revisions were completed to target newer releases roughly every couple of years. The current version targets version 11. Continue with 2. Linear Models
{"url":"http://data.princeton.edu/wws509/stata/","timestamp":"2014-04-17T15:39:13Z","content_type":null,"content_length":"9599","record_id":"<urn:uuid:53bed459-da5a-4c88-88c7-4ba1133f6ca1>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
UNIX FORTRAN Timing Two functions ETIME and DTIME. let FORTRAN programs measure time. This is useful for perfomance tuning. They are used as follows: REAL TIMEARRAY (2) ETIME returns a real number which is the total CPU time used for this process it began executing. DTIME returns a real number which is the running time for this process since the last call to DTIME. Both procedures use TIMEARRAY to return the user time and system time separately; TIMEARRAY(1) reports the user time, and TIMEARRAY(2) supports the system time. For example, the statements REAL DELAPSE TIMEARRAY(2), X DO 10 I=1, 100000 10 CONTINUE set DELAPSE to the time required to compute the loop DO 10 1=1,100000.
{"url":"http://www.phys.ufl.edu/~coldwell/Progdet/UNIX%20FORTRAN%20Timing.htm","timestamp":"2014-04-19T09:24:40Z","content_type":null,"content_length":"8548","record_id":"<urn:uuid:c2f688fa-ab6e-42ba-aa35-216bd07ea78b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
Middletown, DE Math Tutor Find a Middletown, DE Math Tutor ...Not being able to find the right material or the time that you need to complete an assignment leads to greater frustration. Guiding students to become more focused on their work helps them a great deal, but often this is just a start. Students may not know how to study for tests because they ne... 30 Subjects: including calculus, writing, statistics, linear algebra ...I also tutor elementary and middle school students in the areas of language arts, science and math. I am available to tutor your child in the afternoons or evenings. I have been teaching for the past 24 years in grade k-12. 24 Subjects: including algebra 1, prealgebra, reading, writing ...I tutor freshman students with math difficulties. I believe and learned that motivation is the key to inspire anyone to succeed. I also do not do the tutees' homework because it would hurt them in the long run. 11 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...I am good with rendering, texturing, modeling, rigging, IK-FK, and lighting. I also can help with camera animation within Maya, can help a student understand the subtle ins and outs of Maya's node-based system, and can help install Maya on both PC and MAC platforms. I have experience troubleshooting Maya multiple site-license network issues as well. 42 Subjects: including geometry, precalculus, statistics, ACT Math I have more years as a teacher/tutor than can be included in a simple resume. My first job, at the age of 13, was a math tutor for my school district. I am currently doing environmental chemistry research full time, so I do not often get the chance to share my knowledge and passion for science and math with young students. 14 Subjects: including geometry, precalculus, trigonometry, SAT math Related Middletown, DE Tutors Middletown, DE Accounting Tutors Middletown, DE ACT Tutors Middletown, DE Algebra Tutors Middletown, DE Algebra 2 Tutors Middletown, DE Calculus Tutors Middletown, DE Geometry Tutors Middletown, DE Math Tutors Middletown, DE Prealgebra Tutors Middletown, DE Precalculus Tutors Middletown, DE SAT Tutors Middletown, DE SAT Math Tutors Middletown, DE Science Tutors Middletown, DE Statistics Tutors Middletown, DE Trigonometry Tutors
{"url":"http://www.purplemath.com/Middletown_DE_Math_tutors.php","timestamp":"2014-04-18T18:49:06Z","content_type":null,"content_length":"23841","record_id":"<urn:uuid:2c2f0a6a-5e43-4841-91be-e89f723b459b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
Closure Property Sub Closure property states that when we add or multiply numbers of a given set, resulting number should also be from that set only. If the result obtained after multiplication or the addition Topics doesn't belong to the set then it means that the set does not follow closure property. It is the closed set satisfying the closure property. If a set has the closure property under a given operation, then we say that the set is closed under the operation. Closure Property demonstrates whether the system of numbers is closed or not under different mathematical operations. It is easier to understand a property by looking at examples than it is by simply talking about it in an abstract way. So given below are few examples. 1. 5 $\times$ 4 = 20 is closed under multiplication as 5, 4, 20 are real numbers. 2. The set of odd natural numbers, 1, 3, 5, 7,….., is closed by the addition because the sum of any two of them is another odd number. 3. 10 - 20 = -10. Here 10 and 20 are whole numbers and -10 is not. Difference of two numbers yields a result that is not a whole number. The given set of whole numbers does not possess the closure property for subtraction. Closure property of real number addition states that the sum of any two real numbers equals another real number. Let x, y be two integers. If x + y is also an integer, then the set of integers is said to be closed under the operation of addition. For any two elements x, y $\epsilon$ Z , x + y $\epsilon $ Z is the closure property of addition. The set of integers is closed under the operation of addition because the sum of any two integers is always another integer and is therefore in the set of integers. Given below are some examples for closer property under addition 5 + 4 = 9 as 5 + 4 = 9 $\epsilon$ Z -1 + (-2) = -3 $\epsilon$ Z 3 + (-5) = -2 $\epsilon$ Z 8 + 4 = 12 $\epsilon$ Z Real numbers holds closure property of subtraction like closure property of addition. But it is not true for the whole numbers. In closure property of subtraction the difference of any two whole numbers will not always be a whole number. That is, system of whole numbers is not closed under subtraction. 5 - 2 = 3 9 - 9 = 0 5 - 15 = - 10 5 - 9 = -4 From the above first and second example we see that the difference of two numbers is always a whole number. But from examples, 3 and 4, we notice that the the difference of whole numbers is not a whole number, the result obtained from 3 and 4 examples is an integer. Closure property of real number multiplication states that the product of any two real numbers equals another real number. The product of two numbers doesn't change even if the order of the numbers is changed. Let x, y be two integers. If x $\times$ y is also an integer, then the set of integers is said to be closed under the operation of multiplication. Given below are some examples for closer property under multiplication. For any two elements x, y $\epsilon$ Z , x $\times$ y $\epsilon $ Z is the closure property of multiplication. 4 $\times$ 3 = 12 12 $\times$ 6 = 72 9 $\times$ 12 = 108 When two elements in the set are combined the result is also included in the set. In the above example observe that the factors and products are also real numbers. The closure property of real number multiplication states that when we multiply real numbers with other real numbers the result is also real. The closure property of division same as subtraction under closer property, system of whole numbers is not closed under division. That is, division of any two whole numbers is not always a whole = 0.5 = 0.33 = 0.25 = 3 From the above first three examples we see that the division of two numbers is not always a whole number. The result obtained is a fraction. Last result obtained is a whole number. We can say that division of two whole numbers is not always a whole number.
{"url":"http://math.tutorcircle.com/number-sense/closure-property.html","timestamp":"2014-04-17T18:25:26Z","content_type":null,"content_length":"25504","record_id":"<urn:uuid:a059d173-91f5-4903-a468-6d8b416e3049>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
Vectors Linear Dependent? November 21st 2008, 03:54 AM #1 Junior Member Sep 2008 Freiburg i. Brgs. Germany Vectors Linear Dependent? I have to check as this vectors are linear dependent? My calculation says no, they are linear independent because every coefficient in the linear combination is zero but I don't know as I did my work well. Could you say yer or no to my answer? The Vectors are in $\mathbb{R}^{4}$ $\begin{pmatrix}1 \\ 3 \\ 2 \\0 \end{pmatrix}, \ \begin{pmatrix}-2 \\ -1 \\ 0 \\3 \end{pmatrix}, \ \begin{pmatrix}5 \\ -2 \\ 4 \\5 \end{pmatrix}, \ \begin{pmatrix}0 \\ 1 \\ 3 \\5 \end{pmatrix}$ My calculations do not agree with yours. Try the scalars $\left\langle {\frac{{35}}<br /> {{38}},\frac{{45}}{{38}},\frac{{11}}{{38}}, - 1} \right\rangle$ in that order. What do you get? okay I tried it and got zero. They are linear dependent. Now I have to search my mistake. Cause I understand what the exercise is, I guess the mistake could be only in the way I solved my linear equation system. The mistake must be in my matrix? Is this correct? Where is the mistake in the system of linear equations? Here is my solution of the linear system of equations. Could you tell me where I did the mistake? I didn't find the problem but it must be wrong because Platos post is true. Here are my solutions: Thank you November 21st 2008, 05:12 AM #2 November 21st 2008, 06:50 AM #3 Junior Member Sep 2008 Freiburg i. Brgs. Germany November 23rd 2008, 03:20 AM #4 Junior Member Sep 2008 Freiburg i. Brgs. Germany
{"url":"http://mathhelpforum.com/advanced-algebra/60817-vectors-linear-dependent.html","timestamp":"2014-04-21T08:06:03Z","content_type":null,"content_length":"38808","record_id":"<urn:uuid:5c2f089c-f019-47e2-8d29-257a440ab772>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Many-Worlds Theory Alas, it has completely fallen out of my brain. No problem, Hurkyl. Some 1000 threads back for both of us, you were explaining MWT and decoherence. The notion of a toy observer came upβ€”I think for the first timeβ€”as a possible means to model decoherence in quantum mechanical observers. Maybe I should start over. To make sense of MWT we need to understand an observer as a quantum mechanical system. To understand this, we could construct a toy observer. To do this we need to model the nature of human or machine information processing. To do this we need: 1) An underlying principle of classical information processing. We need this because it will tell us that classical information processing that information is discarded. The so-called reversible classical gates are reversible only in principle. Even a not gate discards information. This will invoke decoherence as a sufficient, though not necessary, element of classical information processing. (more on this later.) and 2) A schema for building classical logic gates or neurons out of multiple quantum gates. Now, if we are to model gates, or neurons, or the neurons of neuronetworks out of quantum gates, we had better darned well understand quantum gates. I cannot believe that two quantum bits can interact, where qbit C changes qbit T, without qbit T changing qbit C. Say we have a c-not and C=1. What has happened to the information of the fomer state of T? If we are questioning the validity in the operation of a c-not gate, it’s not enough to say the value of T can be reacquired by acting a second c-not gate on T'. I think something has been left out of the popular description of a c-not gate. I will have to go out on a limb in the following, because the fact of the matter is, I don’t know if the following is true or not. Let me know. a) A quantum gate is reversible. b)There is a relative phase between two qbits. c) If the phase information is not preserved, the gate cannot be reversed. For simplicity, assume the inputs to a c-not are both pure states; either |1> or |0>. Just as in boolean logic, there are 4 possible outcomes. If not, reversibility is violated; quantum mechanics would not obey time reverse symmetry. d) I'm going to make a wild stab at the truth table of a c-not as follows. [tex](c,t) \rightarrow (c',t')[/tex] [tex](0,0) e^{\delta} \rightarrow (0,0) e^{\delta}[/tex] [tex](0,1) e^{\delta} \rightarrow (0,1) e^{\delta}[/tex] [tex](1,0) e^{\delta} \rightarrow (1,1) e^{\delta + \phi}[/tex] [tex](1,1) e^{\delta} \rightarrow (1,0) e^{\delta + \phi}[/tex] [itex]\delta[/itex] is the relative phase of two qbits. [itex]\delta[/itex] is an unphysical gauge, that we could just as well set to zero. [itex]\phi[/itex] is the change in the relative phase of c and t. We should be free to attach the phase to either output bit, as long as we are consistant. A second c-not gate acting on the primed qubits would have the truth table: [tex](c',t') \rightarrow (c'',t'')[/tex] [tex](0,0) e^{\delta} \rightarrow (0,0) e^{\delta}[/tex] [tex](0,1) e^{\delta} \rightarrow (0,1) e^{\delta}[/tex] [tex](1,0) e^{\delta + \phi} \rightarrow (1,1) e^{\delta + 2\phi} [/tex] [tex](1,1) e^{\delta + \phi} \rightarrow (1,0) e^{\delta + 2\phi} [/tex] If [itex]\phi = i \pi[/itex], two c-not gates in series will restore the primed states to their original unprimed states. So, have I told any fibs yet? The above contention is testable with two electrons (C,T), entangled in a c-not, sent on separate paths encompasing a solenoid, then reentangled with a second c-not. Varying the strength of the solenoid field should cause the resultant spin states (C'',T'') to vary. Sending both electrons around the same side of the solenoid should obtain (C'',T'')=(C,T), independent of the strength of the solenoid field.
{"url":"http://www.physicsforums.com/showthread.php?t=291481&page=5","timestamp":"2014-04-19T09:37:16Z","content_type":null,"content_length":"89417","record_id":"<urn:uuid:d52a2ec0-6c35-457d-98bf-fb1245c3ff0d>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
compounded interest December 15th 2007, 08:38 AM #1 Oct 2007 compounded interest Suppose you have a choice of keeping $500 for five years in a savings account with a 2% interest rate, or in a five year certificate of deposit with and interest rate of 4.5%. Calculate how much interest you would earn with each option over five years time with continuous compounding. for the first part, would it be A(5)=500*e(.02*5/100) that doesnt seem right to me, but im not sure what else to do. Suppose you have a choice of keeping $500 for five years in a savings account with a 2% interest rate, or in a five year certificate of deposit with and interest rate of 4.5%. Calculate how much interest you would earn with each option over five years time with continuous compounding. for the first part, would it be A(5)=500*e(.02*5/100) that doesnt seem right to me, but im not sure what else to do. Check what you are doing and your arithmetic. The formula is wrong, and if that is what you evaluated you did it wrong, nor is it the result of evaluating the correct formula. what formula should i be using? December 15th 2007, 08:45 AM #2 May 2006 December 15th 2007, 08:49 AM #3 Oct 2007 December 15th 2007, 08:53 AM #4 May 2006
{"url":"http://mathhelpforum.com/business-math/24909-compounded-interest.html","timestamp":"2014-04-19T08:55:12Z","content_type":null,"content_length":"36826","record_id":"<urn:uuid:ea911692-56b4-43dd-bc00-f27c2b5b5149>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
Numerical Phase Velocity Next: Finite Difference Schemes for Up: Von Neumann Analysis of Previous: Vector Schemes Numerical Phase Velocity For a given amplification factor , the numerical phase velocity at frequency is defined by where is the Euclidean norm of the vector . This expression gives the speed of propagation for a plane wave of wavenumber , according to the numerical scheme for which is an amplification factor. For the wave equation model problem, the speed of any plane wave solution will simply be , but the numerical phase velocity will in general be different, and in particular, wave speeds will be directionally-dependent to a certain degree, depending on the type of scheme used. For all these schemes, the numerical phase velocity for at least one of the amplification factors will approach the correct physical velocity near the spatial DC frequency, by consistency of the numerical scheme with the wave equation^. Stefan Bilbao 2002-01-22
{"url":"https://ccrma.stanford.edu/~bilbao/master/node204.html","timestamp":"2014-04-20T18:32:09Z","content_type":null,"content_length":"5215","record_id":"<urn:uuid:eb2a231e-f1c5-49ca-9716-5d16e1520af4>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Model Regression involves the study of equations. First we talk about some simple equations or linear models. The simplest mathematical model or equation is the equation of straight line. Example: Suppose a shop keeper is selling pencils. He sells one pencil for 2 cents. Table as shown gives the number of pencils sold and the sale price of the pencils. β”‚Number of pencils sold β”‚β”‚β”‚β”‚β”‚β”‚β”‚ β”‚Sales Prices (Cents) β”‚β”‚β”‚β”‚β”‚β”‚β”‚ Let us examine the two variables given in table. For the sake of our convenience, we can give some names to the variables given in the table. Let The information written above can be presented in some other forms as well. For example we can write an equation describing the above relation between It is called mathematical equation or mathematical model in which It is read as β€œ The main features of the graph in the figure are: 1. The graph lies in the first quadrant because all the values of 2. It is an exact straight line. But all graphs are not in the form of a straight line. It could be some curve also. 3. All the points (pair of 4. The line passes through the origin. 5. Take any point It is called the slope of the line and in general it is denoted by β€œ Example: Suppose a carpenter wants to make some wooden toys for the small children. He has purchased some wood and some other material for $ β”‚ Number of Toys β”‚β”‚β”‚β”‚β”‚β”‚β”‚ β”‚ Cost of Toys β”‚β”‚β”‚β”‚β”‚β”‚β”‚ It is called equation of straight line. It is also mathematical model of deterministic nature. Let us make the graph of the data in given table. Figure as shown is the graph of the data in table. Let us note some important features of the graph obtained in figure. 1. The line 2. Take any point This ratio is denoted by β€œLinear Model”, we shall not mean a mathematical model as described above.
{"url":"http://www.emathzone.com/tutorials/basic-statistics/linear-model.html","timestamp":"2014-04-18T03:22:31Z","content_type":null,"content_length":"41023","record_id":"<urn:uuid:209a2150-5254-4f41-aa8e-c5eb32ca62f5>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
β€’ (>>>) :: Category cat => cat a b -> cat b c -> cat a c β€’ (<<<) :: Category cat => cat b c -> cat a b -> cat a c β€’ first :: Arrow a => forall b c d. a b c -> a (b, d) (c, d) β€’ second :: Arrow a => forall b c d. a b c -> a (d, b) (d, c) β€’ (***) :: Arrow a => forall b c b' c'. a b c -> a b' c' -> a (b, b') (c, c') β€’ (&&&) :: Arrow a => forall b c c'. a b c -> a b c' -> a b (c, c') β€’ loop :: ArrowLoop a => forall b d c. a (b, d) (c, d) -> a b c β€’ class Category a => Arrow a where β€’ class Arrow a => ArrowLoop a where β–‘ loop :: a (b, d) (c, d) -> a b c β€’ class (Arrow a, ArrowLoop a) => ArrowInit a where β€’ arr :: ExpQ -> ExpQ β€’ init :: ExpQ -> ExpQ β€’ constant :: (ArrowInit a, Lift b) => b -> a () b β€’ norm :: ASyn t t1 -> ExpQ β€’ normOpt :: ASyn t t1 -> ExpQ (>>>) :: Category cat => cat a b -> cat b c -> cat a c Left-to-right composition (<<<) :: Category cat => cat b c -> cat a b -> cat a c Right-to-left composition first :: Arrow a => forall b c d. a b c -> a (b, d) (c, d) Send the first component of the input through the argument arrow, and copy the rest unchanged to the output. second :: Arrow a => forall b c d. a b c -> a (d, b) (d, c) A mirror image of first. The default definition may be overridden with a more efficient version if desired. (***) :: Arrow a => forall b c b' c'. a b c -> a b' c' -> a (b, b') (c, c') Split the input between the two argument arrows and combine their output. Note that this is in general not a functor. The default definition may be overridden with a more efficient version if desired. (&&&) :: Arrow a => forall b c c'. a b c -> a b c' -> a b (c, c') Fanout: send the input to both argument arrows and combine their output. The default definition may be overridden with a more efficient version if desired. class Category a => Arrow a where The basic arrow class. Minimal complete definition: arr and first, satisfying the laws assoc ((a,b),c) = (a,(b,c)) The other combinators have sensible default definitions, which may be overridden for efficiency. first :: a b c -> a (b, d) (c, d) Send the first component of the input through the argument arrow, and copy the rest unchanged to the output. second :: a b c -> a (d, b) (d, c) A mirror image of first. The default definition may be overridden with a more efficient version if desired. (***) :: a b c -> a b' c' -> a (b, b') (c, c') Split the input between the two argument arrows and combine their output. Note that this is in general not a functor. The default definition may be overridden with a more efficient version if desired. (&&&) :: a b c -> a b c' -> a b (c, c') Fanout: send the input to both argument arrows and combine their output. The default definition may be overridden with a more efficient version if desired. Arrow (->) Arrow ASyn Monad m => Arrow (Kleisli m) class Arrow a => ArrowLoop a where The loop operator expresses computations in which an output value is fed back as input, although the computation occurs only once. It underlies the rec value recursion construct in arrow notation. loop should satisfy the following laws: left tightening right tightening assoc ((a,b),c) = (a,(b,c)) unassoc (a,(b,c)) = ((a,b),c) loop :: a (b, d) (c, d) -> a b c ArrowLoop (->) ArrowLoop ASyn MonadFix m => ArrowLoop (Kleisli m) Beware that for many monads (those for which the >>= operation is strict) this instance will not satisfy the right-tightening law required by the ArrowLoop class.
{"url":"http://hackage.haskell.org/package/CCA-0.1.4/docs/Control-CCA.html","timestamp":"2014-04-18T05:44:40Z","content_type":null,"content_length":"20528","record_id":"<urn:uuid:d29e96ac-290b-4872-952e-1e4ca09e85b8>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
A rudimentary NFL season simulation A rudimentary NFL season simulation Following a post by Tango a couple of weeks ago on the playoff systems of the various sports, I thought I'd try writing a simulation. This is an update of that work in progress. Actually, I've only started on the NFL, and I haven't even done playoffs yet, just the regular season. But I thought I'd at least share what I've got so far. In the simulation, each of the 32 teams was assigned a "true talent," from a normal distribution with mean .500 and standard deviation .143. No team was allowed to have talent higher than .900 or lower than .100; if they did, they were moved to .900 or .100. Then, all 32 teams were moved the same amount (arithmetically) in the same direction to get the overall talent to average exactly .500. (I think this method actually reduces the expected SD below .143, but I didn't bother fixing that.) The 16-game schedule is random, instead of unbalanced (with the restriction that a team can't face any another team more than twice). There are no tie games. There is no home field advantage (although that would be easy to add in). The chance of winning each game is determined by the log5 method. There are no ties in games. Ties in the standings (division or wild card) are broken As I said, I stopped there for now; haven't done playoffs yet. That's the next step, along with home field advantage. Anyway, here are some results. Each result is out of 100,000 seasons. Every result came from a different run of the simulation. Results varied a fair bit per run, but I think everything is reasonably I checked for all teams out of 3,200,000 (32 teams, 100,000 seasons) that finished more than 8 games above or below their talent. That's hard to do, obviously. Also, the worse or better you are, the harder it is. It's (relatively) easier for an 8-8 team to go 16-0 than for a 3-13 team to go 11-5. Amplifying that is the fact that there are a lot more 8-8 teams than 3-13 teams. However, offsetting that, a little bit, is the fact that the 3-13 team can also go 12-4 or 13-3 or better. In any case ... there were 43 cases where a team differed from its talent by 8 games or more. Of those, 26 were teams that outperformed, and 17 were teams that underperformed. The biggest differential was in season 98,534, where the Broncos a team that had talent of 4.56 wins (out of 16), but went 14-2, for a differential of 9.44 games. That was the only team with a differential of 9 or more. Part of the reason it did so well was that it faced inferior opponents. You'd expect any given team's opponents to average 8.00 games of talent. But in that season, the Broncos' opponents' talent was only 7.45 games. Not a huge difference, but still. Actually, when it comes to extreme events, a small difference in opponents makes a big difference in probability. Of the 43 teams in the sample, 38 of them had records that went in the direction "aided" by the opposition (in the sense that the underperforming teams played better-than-expected opponents, and vice versa). That's 38-5 in favor. The worst team in the sample was the season 63,924 Jets, a 3.03 team that went 12-4 (playing 7.11-win opponents). The best team in the sample was a Bucs team that was expected to win 12.04 games, but instead went 4-12 (playing 8.58-win opponents). I also took a look at teams that went 0-16. Those results probably aren't as realistic, because they're heavily dependent on the shape of the tail of the talent distribution ... and we really don't know what that is. Recall that we chose a normal distribution that gets truncated at .100 (1.6 wins). Both those choices -- normal, and truncated -- are arbitrary and probably not close enough to real life. (Also, teams could drop below .100 in talent from the adjustment that sets all league-years to .500.) In addition, the other shortcuts in the simulation probably skew the results too. The mainstream results are probably right, but the extremes are extremely sensitive to some of the assumptions. With those caveats: there were 5,663 of those 0-16 teams out of 3.2 million, and their average talent was .181, which is just under 3-13. I suspect the talent of actual flesh-and-blood 0-16 teams is higher than that, but I really don't know. 16-0 should be exactly symmetrical, so I won't show that separately. I checked for four-way ties where every team has the same record. That happened 878 times out of 800,000, or about once every century. There were five seasons out of 100,000 where two divisions had a four-way tie. Actually, that might be a little high ... the test runs had only 1 or 2 such seasons. Anyway, before I start on the playoffs, and repeating this for other sports leagues, I'm looking for feedback on what I've got so far. Any suggestions? And, if you want me to run the sim and check for something in particular, let me know in the comments. It's real easy to add a couple of lines of code to check for something specific. Labels: distribution of talent, football, NFL, simulation 16 Comments: Links to this post:
{"url":"http://blog.philbirnbaum.com/2011/10/rudimentary-nfl-season-simulation.html","timestamp":"2014-04-21T14:39:35Z","content_type":null,"content_length":"51249","record_id":"<urn:uuid:8b510f26-1151-4639-9d37-fc07d4ae2510>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Tags: quantum mechanics Note: Results do not include pending, unpublished, and some private items. Quantum mechanics (QM), also known as quantum physics or quantum theory, is a branch of physics providing a mathematical description of much of the dual particle-like and wave-like behavior and interactions of energy and matter. It departs from classical mechanics primarily at the atomic and subatomic scales, the so-called quantum realm. In advanced topics of QM, some of these behaviors are macroscopic and only emerge at very low or very high energies or temperatures. Learn more about quantum dots from the many resources on this site, listed below. More information on Quantum mechanics can be found here.
{"url":"http://nanohub.org/tags/quantummechanics/resources?sort=title","timestamp":"2014-04-18T18:14:06Z","content_type":null,"content_length":"56663","record_id":"<urn:uuid:060cf750-4d3f-46c8-8d01-043a11a6b9d2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. β€’ across MIT Grad Student Online now β€’ laura* Helped 1,000 students Online now β€’ Hero College Math Guru Online now Here's the question you clicked on: Write the product in standard form. β€’ one year ago β€’ one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... β€’ Teamwork 19 Teammate β€’ Problem Solving 19 Hero β€’ Engagement 19 Mad Hatter β€’ You have blocked this person. β€’ βœ” You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50536d6fe4b02986d3703f25","timestamp":"2014-04-21T16:12:45Z","content_type":null,"content_length":"98722","record_id":"<urn:uuid:ce43bec8-b376-46ad-aeb3-c8c457592b45>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Santa Western, CA Algebra 2 Tutor Find a Santa Western, CA Algebra 2 Tutor ...What's more, I passed the exam without taking a standard prep course (e.g., BarBri, Kaplan, etc.), but utilizing certain BarBri prep materials. I devised and fastidiously followed a rigorous study plan that canvassed all material and contoured my mastery of the subject matter to the precise dime... 58 Subjects: including algebra 2, English, reading, writing ...I know my stuff and you can, too! I'm available both online and in-person. I have zillions of references, ranging from students and parents to high school teachers, college counselors, and 26 Subjects: including algebra 2, reading, English, writing ...Excellent references and resume available. TESTIMONIAL β€œGreg is a gifted teacher with an exceptional ability to connect with people and particularly, kids. He has that unusual combination of absolute fluency with the subject matter, ability to customize the teaching approach to the individual s... 24 Subjects: including algebra 2, chemistry, writing, geometry ...I have extensive experience tutoring students of all ages (elementary through college), and my past experience includes teaching at Kumon and being a private Dean's tutor at Caltech. I am currently a physics major at Caltech, and I have learned to clearly and logically explain concepts to studen... 15 Subjects: including algebra 2, reading, physics, calculus ...I've been tutoring privately on and off for about ten years. I tutored math, Spanish, sociology, and some science courses for six years at a community college and I have also tutored middle and high school students either privately or through after-school programs. I've passed the CBEST as well as the Spanish language proficiency test for the Culver City Unified School District. 10 Subjects: including algebra 2, chemistry, Spanish, algebra 1 Related Santa Western, CA Tutors Santa Western, CA Accounting Tutors Santa Western, CA ACT Tutors Santa Western, CA Algebra Tutors Santa Western, CA Algebra 2 Tutors Santa Western, CA Calculus Tutors Santa Western, CA Geometry Tutors Santa Western, CA Math Tutors Santa Western, CA Prealgebra Tutors Santa Western, CA Precalculus Tutors Santa Western, CA SAT Tutors Santa Western, CA SAT Math Tutors Santa Western, CA Science Tutors Santa Western, CA Statistics Tutors Santa Western, CA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Century City, CA algebra 2 Tutors Cimarron, CA algebra 2 Tutors Dowtown Carrier Annex, CA algebra 2 Tutors Highland Park, LA algebra 2 Tutors La Canada, CA algebra 2 Tutors La Tuna Canyon, CA algebra 2 Tutors Magnolia Park, CA algebra 2 Tutors Playa, CA algebra 2 Tutors Rancho La Tuna Canyon, CA algebra 2 Tutors Sherman Village, CA algebra 2 Tutors Toluca Terrace, CA algebra 2 Tutors Vermont, CA algebra 2 Tutors West Toluca Lake, CA algebra 2 Tutors Westwood, LA algebra 2 Tutors Wilcox, CA algebra 2 Tutors
{"url":"http://www.purplemath.com/Santa_Western_CA_algebra_2_tutors.php","timestamp":"2014-04-21T07:41:23Z","content_type":null,"content_length":"24593","record_id":"<urn:uuid:35f0d38d-806a-44d7-a40f-08659ef540e8>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
South Richmond Hill Algebra 2 Tutor Find a South Richmond Hill Algebra 2 Tutor ...The position required the ability to explain a variety of math subjects to high school students, which ranged from basic math to advance topics like Pre-Calculus and AP Calculus. During my undergraduate career at UIC, I taught general chemistry and classical physics as a supplemental instructor ... 13 Subjects: including algebra 2, chemistry, calculus, physics ...I am very patient and adapt to the student personality to help him/her achieve their goal. I am a highly qualified biology tutor who helped students very successfully achieve a high grade. I am a PhD graduate from Weill Cornell Medical School. 18 Subjects: including algebra 2, chemistry, physics, calculus ...My current students have and are doing exceptionally well and most have been moved to advanced programs. Students leave my home only when they have gained a thorough understanding and confidence in the topic!! I train students for school tests, state tests, regents, SAT. I assure good scores!!I am an M.S (Telecommunications Engineer). Math is my primary subject. 9 Subjects: including algebra 2, calculus, SAT math, algebra 1 ...I received my BA in Latin and Comparative Literature. My academic career has centered on Latin and analytical writing, which I developed in high school (attending BHSEC in Manhattan) and honed in college. I am confident in my command of style, grammar, and critical thinking, and in my ability to teach these skills. 11 Subjects: including algebra 2, English, writing, algebra 1 ...Therefore, I actively teach and encourage my students to express their answers algebraically, graphically or by drawing pictures and in word sentences, as needed. My patient, polite and easy-going manner coupled with my ability to model various methods for understanding and β€œseeing” things, acce... 16 Subjects: including algebra 2, chemistry, calculus, geometry
{"url":"http://www.purplemath.com/South_Richmond_Hill_Algebra_2_tutors.php","timestamp":"2014-04-17T21:25:46Z","content_type":null,"content_length":"24596","record_id":"<urn:uuid:1dc8475a-ecc6-459e-a12b-b3d486e29dcc>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
April 14th 2009, 04:15 PM #1 Junior Member Nov 2008 True or False: If p(x) is a polynomial function, then one of the antiderivatives of p has a graph that passes through the origin. I believe this is true, but I'm not positive... Let $P(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_nx^n$ be some arbitrary polynomial function. what would its antiderivative look like? does it pass through the origin? April 14th 2009, 04:22 PM #2
{"url":"http://mathhelpforum.com/calculus/83759-antiderivatives.html","timestamp":"2014-04-19T21:08:57Z","content_type":null,"content_length":"33099","record_id":"<urn:uuid:48f91773-60b9-404b-9f43-25e419dcedac>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
Dual Contouring is a method for meshing implicit surfaces, or surfaces which are defined as the level-set of scalar function. Conceptually it is similar to Marching Cubes , except that meshing is performed on a dual mesh, and requires that the scalar function be able to provide gradients or surface normals in addition to the function value. The primary advantage of dual contouring is that it is able to reproduce sharp features, which is something that most other implicit surface meshing algorithms are unable to do. I also consider Dual Contouring to be simpler to implement from scratch than Marching Cubes, since you don't need to form large tables of stencils. The overall algorithm is very simple. First the region to be meshed is divided into convex non-overlapping cells (like uniform cubes or tetrahedra). The scalar function that is being meshed is then evaluated at the vertices of those cells, and each vertex is labeled as being either inside or outside of the surface to be meshed. Those cells that have a mix of inside and outside vertices must then contain a portion of the surface. It is this point where Dual Contouring begins to differ from Marching Cubes: instead of applying a stencil based on which vertices are inside or outside, Dual Contouring generates a single 'dual' vertex per cell straddling the surface. It then connects every dual vertex with it's neighboring dual vertices to form the final mesh. The only difficultly arises in choosing the location of the dual vertices to best represent the underlying surface being meshed. To choose where to place vertices, dual contouring uses not only the intersection of the surface with the edges of the mesh, but also the surface normals at those locations. It then solves for the dual vertex location which minimizes the error in satisfying planes that pass through the edge intersections with the same normals as at the intersections. This corresponds to minimizing the following error: $E(\mathbf{d}) = \sum_{i=1}^{n}((\mathbf{d}-\mathbf{p_i})\cdot \mathbf{N_i})^2$ is the dual vertex position, is the location of the i'th (of ) edge intersection and is the corresponding normal for the i'th intersection. This is just a least squares system $\left[\begin{array}{ccc}N_{1_x} & N_{1_y} & N_{1_z} \\N_{2_x} & N_{2_y} & N_{2_z} \\ & \vdots & \\N_{n_x} & N_{n_y} & N_{n_z}\end{array}\right] \mathbf{d} = \left[ \begin{array}{c}N_1 \cdot p_1 \\ N_2 \cdot p_2 \\ \vdots \\N_n \cdot p_n \end{array} \right]$ which can be solved using a QR decomposition, or by forming the normal equations. One tricky aspect of this is that although there are always at least as many intersected edges as unknowns, it may not be the case that the row contributed by each edge is linearly independent from the other rows. In flat regions of the surface the normals will be nearly if not exactly the same, and the set of equations will be (nearly) singular. One way to handle this is to add special-case code to check that the system is not singular, and if it is, solve a different system in 1D for the offset of the dual vertex in the normal direction that minimizes the fitting error. It is also possible to add a regularization term with low weight that prevents the system from being singular, but does not affect the results too badly in the event the system is well conditioned. I have done this by adding a (small) multiple of identity to the normal equations coefficient matrix, and the same small multiple of the original dual vertex position (i.e. the cell centroid) to the right hand side. The image above shows an example of this in action. I generated an isosurface representation of the fandisk model, then ran it through dual contouring to produce the result you see above. Note that the method faithfully reproduces the sharp edges of this CAD model. I appear to have some minor bugs in the implementation however, as there are some artifacts on concave sharp edges. The regularization approach seems to work well and eliminates the need for special case code. It also provides a knob to trade off mesh quality for approximation error: if the regularization term is small, then the method will try to reproduce the underlying surface faithfully. On the other hand, if the regularization term is large, dual vertices will be placed close to the centroid of the cells that generate them, producing a high quality mesh, but with staircasing artifacts where approximation error has been introduced.
{"url":"http://jamesgregson.blogspot.com/2011/04/dual-contouring.html?m=1","timestamp":"2014-04-17T09:34:54Z","content_type":null,"content_length":"63994","record_id":"<urn:uuid:30c4a6ba-10b2-4759-8824-18536b3fd01b>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
Santa Western, CA Algebra 2 Tutor Find a Santa Western, CA Algebra 2 Tutor ...What's more, I passed the exam without taking a standard prep course (e.g., BarBri, Kaplan, etc.), but utilizing certain BarBri prep materials. I devised and fastidiously followed a rigorous study plan that canvassed all material and contoured my mastery of the subject matter to the precise dime... 58 Subjects: including algebra 2, English, reading, writing ...I know my stuff and you can, too! I'm available both online and in-person. I have zillions of references, ranging from students and parents to high school teachers, college counselors, and 26 Subjects: including algebra 2, reading, English, writing ...Excellent references and resume available. TESTIMONIAL β€œGreg is a gifted teacher with an exceptional ability to connect with people and particularly, kids. He has that unusual combination of absolute fluency with the subject matter, ability to customize the teaching approach to the individual s... 24 Subjects: including algebra 2, chemistry, writing, geometry ...I have extensive experience tutoring students of all ages (elementary through college), and my past experience includes teaching at Kumon and being a private Dean's tutor at Caltech. I am currently a physics major at Caltech, and I have learned to clearly and logically explain concepts to studen... 15 Subjects: including algebra 2, reading, physics, calculus ...I've been tutoring privately on and off for about ten years. I tutored math, Spanish, sociology, and some science courses for six years at a community college and I have also tutored middle and high school students either privately or through after-school programs. I've passed the CBEST as well as the Spanish language proficiency test for the Culver City Unified School District. 10 Subjects: including algebra 2, chemistry, Spanish, algebra 1 Related Santa Western, CA Tutors Santa Western, CA Accounting Tutors Santa Western, CA ACT Tutors Santa Western, CA Algebra Tutors Santa Western, CA Algebra 2 Tutors Santa Western, CA Calculus Tutors Santa Western, CA Geometry Tutors Santa Western, CA Math Tutors Santa Western, CA Prealgebra Tutors Santa Western, CA Precalculus Tutors Santa Western, CA SAT Tutors Santa Western, CA SAT Math Tutors Santa Western, CA Science Tutors Santa Western, CA Statistics Tutors Santa Western, CA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Century City, CA algebra 2 Tutors Cimarron, CA algebra 2 Tutors Dowtown Carrier Annex, CA algebra 2 Tutors Highland Park, LA algebra 2 Tutors La Canada, CA algebra 2 Tutors La Tuna Canyon, CA algebra 2 Tutors Magnolia Park, CA algebra 2 Tutors Playa, CA algebra 2 Tutors Rancho La Tuna Canyon, CA algebra 2 Tutors Sherman Village, CA algebra 2 Tutors Toluca Terrace, CA algebra 2 Tutors Vermont, CA algebra 2 Tutors West Toluca Lake, CA algebra 2 Tutors Westwood, LA algebra 2 Tutors Wilcox, CA algebra 2 Tutors
{"url":"http://www.purplemath.com/Santa_Western_CA_algebra_2_tutors.php","timestamp":"2014-04-21T07:41:23Z","content_type":null,"content_length":"24593","record_id":"<urn:uuid:35f0d38d-806a-44d7-a40f-08659ef540e8>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
Haskell/Truth values Equality and other comparisonsEdit So far we have seen how to use the equals sign to define variables and functions in Haskell. Writing in a source file will cause occurrences of r to be replaced by 5 in all places where it makes sense to do so according to the scope of the definition. Similarly, causes occurrences of f followed by a number (which is taken as f's argument) to be replaced by that number plus three. In Mathematics, however, the equals sign is also used in a subtly different and equally important way. For instance, consider this simple problem: Example: Solve the following equation: When we look at a problem like this one, our immediate concern is not the ability to represent the value $5$ as $x+3$, or vice-versa. Instead, we read the $x+3=5$ equation as a proposition, which says that some number $x$ gives 5 as result when added to 3. Solving the equation means finding which, if any, values of $x$ make that proposition true. In this case, using elementary algebra we can convert the equation into $x=5-3$ and finally to $x=2$, which is the solution we were looking for. The fact that it makes the equation true can be verified by replacing $x$ with 2 in the original equation, leading us to $2+3=5$, which is of course true. The ability of comparing values to see if they are equal turns out to be extremely useful in programming. Haskell allows us to write such tests in a very natural way that looks just like an equation. The main difference is that, since the equals sign is already used for defining things, we use a double equals sign, ==. To see it at work, you can start GHCi and enter the proposition we wrote above Prelude> 2 + 3 == 5 GHCi returns "True" lending further confirmation of $2 + 3$ being equal to 5. As 2 is the only value that satisfies the equation, we would expect to obtain different results with other numbers. Prelude> 7 + 3 == 5 Nice and coherent. Another thing to point out is that nothing stops us from using our own functions in these tests. Let us try it with the function f we mentioned at the start of the module: Prelude> let f x = x + 3 Prelude> f 2 == 5 Just as expected, since f 2 is just 2 + 3. In addition to tests for equality, we can just as easily compare two numerical values to see which one is larger. Haskell provides a number of tests including: < (less than), > (greater than), <= (less than or equal to) and >= (greater than or equal to), which work just like == (equal to). For a simple application, we could use < alongside the area function from the previous module to see whether a circle of a certain radius would have an area smaller than some value. Prelude> let area r = pi * r^2 Prelude> area 5 < 50 Boolean valuesEdit At this point, GHCi might look like some kind of oracle (or not) which can tell you if propositions are true or false. That's all fine and dandy, but how could that help us to write programs? And what is actually going on when GHCi "answers" such "questions"? To understand that, we will start from a different but related question. If we enter an arithmetical expression in GHCi the expression gets evaluated, and the resulting numerical value is displayed on the screen: Prelude> 2 + 2 If we replace the arithmetical expression with an equality comparison, something similar seems to happen: Prelude> 2 == 2 But what is that "True" that gets displayed? It certainly does not look like a number. We can think of it as something that tells us about the veracity of the proposition 2 == 2. From that point of view, it makes sense to regard it as a value – except that instead of representing some kind of count, quantity, etc. it stands for the truth of a proposition. Such values are called truth values, or boolean values^[1]. Naturally, there are only two possible boolean values – True and False. An introduction to typesEdit When we say True and False are values, we are not just making an analogy. Boolean values have the same status as numerical values in Haskell, and indeed you can manipulate them just as well. One trivial example would be equality tests on truth values: Prelude> True == True Prelude> True == False True is indeed equal to True, and True is not equal to False. Now, quickly: can you answer whether 2 is equal to True? Prelude> 2 == True No instance for (Num Bool) arising from the literal `2' at <interactive>:1:0 Possible fix: add an instance declaration for (Num Bool) In the first argument of `(==)', namely `2' In the expression: 2 == True In the definition of `it': it = 2 == True The correct answer is you can't, because the question just does not make sense. It is impossible to compare a number with something that is not a number, or a boolean with something that is not a boolean. Haskell incorporates that notion, and the ugly error message we got is, in essence, stating exactly that. Ignoring all of the obfuscating clutter (which we will get to understand eventually) what the message tells us is that, since there was a number (Num) on the left side of the ==, some kind of number was expected on the right side. But a boolean value (Bool) is not a number, and so the equality test exploded into flames. The general concept, therefore, is that values have types, and these types define what we can or cannot do with the values. In this case, for instance, True is a value of type Bool, just like False (as for the 2, while there is a well-defined concept of number in Haskell the situation is slightly more complicated, so we will defer the explanation for a little while). Types are a very powerful tool because they provide a way to regulate the behaviour of values with rules which make sense, making it easier to write programs that work correctly. We will come back to the topic of types many times as they are very important to Haskell, starting with the very next module of this book. Infix operatorsEdit What we have seen so far leads us to the conclusion that an equality test like 2 == 2 is an expression just like 2 + 2, and that it also evaluates to a value in pretty much the same way. That fact is actually given a passing mention on the ugly error message we got on the previous example: In the expression: 2 == True Therefore, when we type 2 == 2 in the prompt and GHCi "answers" True it is just evaluating an expression. But there is a deeper truth involved in this process. A hint is provided by the very same error message: In the first argument of `(==)', namely `2' GHCi called 2 the first argument of (==). In the previous module we used the term argument to describe the values we feed a function with so that it evaluates to a result. It turns out that == is just a function, which takes two arguments, namely the left side and the right side of the equality test. The only special thing about it is the syntax: Haskell allows two-argument functions with names composed only of non-alphanumeric characters to be used as infix operators, that is, placed between their arguments. The only caveat is that if you wish to use such a function in the "standard" way (writing the function name before the arguments, as a prefix operator) the function name must be enclosed in parentheses. So the following expressions are completely equivalent: Prelude> 4 + 9 == 13 Prelude> (==) (4 + 9) 13 Writing the expression in this alternative style further drives the point that (==) is a function with two arguments just like areaRect in the previous module was. What's more, the same considerations apply to the other relational operators we mentioned (<, >, <=, >=) and to the arithmetical operators (+, *, etc.) – all of them are just functions. This generality is an illustration of one of the strengths of Haskell – there are few "special cases", and that helps to keep things simple. In general, we could say that all tangible things in Haskell are either values, variables or Boolean operationsEdit One nice and useful way of seeing both truth values and infix operators in action are the boolean operations, which allows us to manipulate truth values as in logic propositions. Haskell provides us three basic functions for that purpose: β€’ (&&) performs the and operation. Given two boolean values, it evaluates to True if both the first and the second are True, and to False otherwise. Prelude> (3 < 8) && (False == False) Prelude> (&&) (6 <= 5) (1 == 1) β€’ (||) performs the or operation. Given two boolean values, it evaluates to True if either the first or the second are True (or if both are true), and to False otherwise. Prelude> (2 + 2 == 5) || (2 > 0) Prelude> (||) (18 == 17) (9 >= 11) β€’ not performs the negation of a boolean value; that is, it converts True to False and vice-versa. Prelude> not (5 * 2 == 10) One relational operator we didn't mention so far in our discussions about comparison of values is the not equal to operator. It is also provided by Haskell as the (/=) function, but if we had to implement it a very natural way of doing so would be: Note that it is perfectly legal syntax to write the operators infix, even when defining them. Another detail to note is that completely new operators can be created out of ASCII symbols (basically, those that are found on the keyboard). Earlier on in this module we proposed two questions about the operations involving truth values: what was actually going on when we used them and how they could help us in the task of writing programs. While we now have a sound initial answer for the first question, the second one could well look a bit nebulous to you at this point, as we did little more than testing one-line expressions here. We will tackle this issue by introducing a feature that relies on boolean values and operations and allows us to write more interesting and useful functions: guards. To show how guards work, we are going to implement the absolute value function. The absolute value of a number is the number with its sign discarded^[3]; so if the number is negative (that is, smaller than zero) the sign is inverted; otherwise it remains unchanged. We could write the definition as: $|x| = \begin{cases} x, & \mbox{if } x \ge 0 \\ -x, & \mbox{if } x < 0. \end{cases}$ The key feature of the definition is that the actual expression to be used for calculating $|x|$ depends on a set of propositions made about $x$. If $x \ge 0$ we use the first expression, but if $x < 0$ we use the second one instead. If we are going to implement the absolute value function in Haskell we need a way to express this decision process. That is exactly what guards help us to do. Using them, the implementation could look like this:^[4] Example: The abs function. abs x | x < 0 = 0 - x | otherwise = x Remarkably, the above code is almost as readable as the corresponding mathematical definition. In order to see how the guard syntax fits with the rest of the Haskell constructs, let us dissect the components of the definition: β€’ We start just like in a normal function definition, providing a name for the function, abs, and saying it will take a single parameter, which we will name x. β€’ Instead of just following with the = and the right-hand side of the definition, we entered a line break, and, following it, the two alternatives, placed in separate lines.^[5] These alternatives are the guards proper. An important observation is that the whitespace is not there just for aesthetic reasons, but it is necessary for the code to be parsed correctly. β€’ Each of the guards begins with a pipe character, |. After the pipe, we put an expression which evaluates to a boolean (also called a boolean condition or a predicate), which is followed by the rest of the definition – the equals sign and the right-hand side which should be used if the predicate evaluates to True. β€’ The otherwise deserves some additional explanation. If none of the preceding predicates evaluate to True, the otherwise guard will be deployed by default. In this case, if x is not smaller than zero, it must be greater than or equal to zero, so the final predicate could have just as easily been x >= 0; otherwise is used here for the sake of convenience and readability. There is no syntactical magic behind otherwise. It is defined alongside the default variables and functions of Haskell as simply This definition makes for a catch-all guard since evaluation of the guard predicates is sequential, and so the always true otherwise predicate will only be reached if none of the other ones evaluates to True (that is, assuming you place it as the last guard!). In general it is a good idea to always provide an otherwise guard, as if none of the predicates is true for some input a rather ugly runtime error will be produced. You might be wondering why we wrote 0 - x and not simply -x to denote the sign inversion. Truth is, we could have written the first guard as and it would have worked just as well. The only issue is that this way of expressing sign inversion is actually one of the few "special cases" in Haskell, in that this - is not a function that takes one argument and evaluates to 0 - x, but just a syntactical abbreviation. While very handy, this shortcut occasionally conflicts with the usage of (-) as an actual function (the subtraction operator), which is a potential source of annoyance (for one of several possible issues, try writing three minus minus four without using any parentheses for grouping). In any case, the only reason we wrote 0 - x explicitly on the example was so that we could have an opportunity to make this point clear in this brief digression. where and GuardsEdit where clauses are particularly handy when used with guards. For instance, consider this function, which computes the number of (real) solutions for a quadratic equation, $ax^2 + bx + c = 0$: numOfSolutions a b c | disc > 0 = 2 | disc == 0 = 1 | otherwise = 0 disc = b^2 - 4*a*c The where definition is within the scope of all of the guards, sparing us from repeating the expression for disc. Last modified on 26 May 2012, at 04:41
{"url":"http://en.m.wikibooks.org/wiki/Haskell/Truth_values","timestamp":"2014-04-20T03:11:08Z","content_type":null,"content_length":"43300","record_id":"<urn:uuid:ded2fdc1-f393-4673-b932-083fd6e2a63f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
Quick summary of Graduate Student Enrollment survey results As the table shows, most of the trends are negative. The largest decreases are in the numbers of applications and this trend is present for schools of all sizes. Admittances fluctuate the most, probably as a result of attempts to counter the declining numbers of applications. The final outcome in the numbers of students entering also shows large differences among the four groups of schools, with the middle groups loosing significantly and the largest and small schools showing some gains. The combined sample indicates a 3.2 % decrease, which is equivalent to 12 fewer students entering the discipline each year. If only those schools are considered which have reported all of the three categories of student numbers and for all five years, the changes are smaller. The trends are –5.0, +0.8 and –0.9 % per year, with correlation coefficients of -0.97, 0.29 and –0.42, respectively, for the numbers of application, admittances and entries. These trends are free from perturbations introduced by variations in the numbers of schools reporting, but reduce the sample size to 22 schools. Figure 2 shows the ratios of admitted-to-applied, entering-to-applied and entering-to-admitted for the combined data set, and for individual schools within each quartile. The first two ratios show increases, reflecting decreases in numbers of applications, while the entering-to-admitted ratio decreased from 49% to 47%. There was sharp drop in this ratio in 1998/1999 as a result of a simultaneous drop in the average number of entering students and an increase in the numbers admitted. 6. The GRE scores are shown in Figs. 3a, 3b and 3c. No significant time trends are apparent in these data. The verbal scores decreased by 3.2 points per year but the correlation coefficient for this is only 0.34. Perhaps more important than the time trends is that there are impressive differences among the groups of schools. As shown in Figs. 3b and 3c the largest and medium schools have students with higher GRE scores than the other two groups. These differences are quite evident when the 5 years are averaged, as shown in Table 3:
{"url":"http://www.ucar.edu/governance/meetings/jun00/summary.html","timestamp":"2014-04-16T13:34:39Z","content_type":null,"content_length":"31772","record_id":"<urn:uuid:ec33a9d5-eb0b-4371-9ca6-f1657ff0beef>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
NASA: Practical Uses of Math And Science (PUMAS) PUMAS (poo' β€’ mas) -- is a collection of brief examples showing how math and science topics taught in K-12 classes can be used in interesting settings, including every day life. The examples are written primarily by scientists, engineers, and other content experts having practical experience with the material. They are aimed mainly at classroom teachers, and are available to all interested parties via the PUMAS web site. Our goal is to capture, for the benefit of pre-college education, the flavor of the vast experience that working scientists have with interesting and practical uses of math and science. - Ralph Kahn Pumas Editor and Founder Featured PUMAS Example When a Ruler Is Too Short by Stephen J. Edberg Surveyors are often seen in the middle of the street making careful measurements of angles with their transits, and distances with their steel tapes. For points than can be easily reached, such a survey is convenient. But when the target is inaccessible – a mountain summit or a distant star – known distances can be combined with measured angles to determine a distance or altitude. The method relies on parallax, the way an object appears to move, relative to a more distant background, when viewed from different angles. In 1838, Friedrich Wilhelm Bessel became the first to successfully apply this method to a star, measuring an angle of <0.5 second of arc for the summer star 61 Cygni. (One second of arc is the angle you get when you divide one degree into 3600 equal parts. For comparison, the Moon’s diameter as seen from Earth is about 0.5 degree, or 1800 arcsec.) A new NASA mission, SIM PlanetQuest, applying the same technique to determine stellar distances, will measure angles to an accuracy of one microsecond (one millionth of a second) of arc! (view this example) View the Examples There are currently 84 examples in the PUMAS Collection. View the full listing, organized by Example title. We are always looking for neat examples of Practical Uses of Math And Science. Please contribute!
{"url":"http://pumas.gsfc.nasa.gov/","timestamp":"2014-04-16T12:02:55Z","content_type":null,"content_length":"9469","record_id":"<urn:uuid:74c5dfad-50e3-434a-a80b-da7094e25f43>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Tree-average distances on certain phylogenetic networks have their weights uniquely determined A phylogenetic network N has vertices corresponding to species and arcs corresponding to direct genetic inheritance from the species at the tail to the species at the head. Measurements of DNA are often made on species in the leaf set, and one seeks to infer properties of the network, possibly including the graph itself. In the case of phylogenetic trees, distances between extant species are frequently used to infer the phylogenetic trees by methods such as neighbor-joining. This paper proposes a tree-average distance for networks more general than trees. The notion requires a weight on each arc measuring the genetic change along the arc. For each displayed tree the distance between two leaves is the sum of the weights along the path joining them. At a hybrid vertex, each character is inherited from one of its parents. We will assume that for each hybrid there is a probability that the inheritance of a character is from a specified parent. Assume that the inheritance events at different hybrids are independent. Then for each displayed tree there will be a probability that the inheritance of a given character follows the tree; this probability may be interpreted as the probability of the tree. The tree-average distance between the leaves is defined to be the expected value of their distance in the displayed trees. For a class of rooted networks that includes rooted trees, it is shown that the weights and the probabilities at each hybrid vertex can be calculated given the network and the tree-average distances between the leaves. Hence these weights and probabilities are uniquely determined. The hypotheses on the networks include that hybrid vertices have indegree exactly 2 and that vertices that are not leaves have a tree-child. Keywords: digraph, distance, metric, hybrid, network, tree-child, normal network, phylogeny
{"url":"http://pubmedcentralcanada.ca/pmcc/articles/PMC3395585/?report=abstract","timestamp":"2014-04-18T14:57:53Z","content_type":null,"content_length":"22776","record_id":"<urn:uuid:66802afa-53db-47a9-bede-b16e02d62544>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Calendar weeks Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: Calendar weeks From Nick Cox <njcoxstata@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: Calendar weeks Date Wed, 15 Feb 2012 09:37:59 +0000 0. I hadn't noticed -epiweek- (SSC) -- please remember to explain _where_ user-written programs you refer to come from -- but you could just change the code in a cloned version. 1. See for a general review Nicholas J. Cox Stata tip 68: Week assumptions The Stata Journal Volume 10 Number 4: pp. 682-685 2. Suppose you have daily dates and a year variable then week numbering within a year for weeks starting on Mondays is given by bysort year (dailydate) : gen weekno = sum(dow(dailydate) == 1) With this rule any days before the first Monday in a given year are labelled 0. If you want week numbering always to start at 1 the fix will be something like replace weekno = weekno + 1 if dow(mdy(1,1,year)) != 1 3. You don't say what you expect to happen for weeks that begin in one calendar year and finish in the next. The start of the week for the current day is just gen monday = dailydate - cond(dow(dailydate) == 0, 6, dow(dailydate) - 1) In many ways the simplest system is to label weeks by the Mondays that start them; that cuts out the overlap issue. On Wed, Feb 15, 2012 at 9:04 AM, Charles Vellutini <charles.vellutini@ecopa.com> wrote: > Sorry if this has been asked before (could not find it though): is there a function to compute the calendar week number (that is: the week starting on Monday and ending on Sunday) for any given date, or alternatively to compute the Monday of the week that date belongs to -- as opposed to the standard Stata approach to weeks (where the first week of the year is made of the first 7 days of the years, regardless of the day of the week of the first day of the year, and so on, if I am not mistaken). > I am aware of user-written -epiweek- but that defines a week as starting on Sunday and ending on Saturday, so that won't work for me. * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2012-02/msg00704.html","timestamp":"2014-04-18T10:52:46Z","content_type":null,"content_length":"9170","record_id":"<urn:uuid:1bd92a57-260e-4578-9e1c-8ffc98e30a69>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. β€’ across MIT Grad Student Online now β€’ laura* Helped 1,000 students Online now β€’ Hero College Math Guru Online now Here's the question you clicked on: how do u know when to use sin and cos? β€’ one year ago β€’ one year ago Best Response You've already chosen the best response. Best Response You've already chosen the best response. sin for opposite over hypotenus cos for adjacent over hypotenus Best Response You've already chosen the best response. I know but like in a problem...i dont get which one i should use Best Response You've already chosen the best response. Use a sine or cosine ration to find the value of each variable. Round decimals to the nearest tenth. Best Response You've already chosen the best response. Best Response You've already chosen the best response. These three please Best Response You've already chosen the best response. i dont get them Best Response You've already chosen the best response. use that. sorry. i ahve to go soon. Best Response You've already chosen the best response. it doesnt help please solve these for me Best Response You've already chosen the best response. |dw:1358230939592:dw| Refer to the reference angle. Sin => opposite side (to the reference angle) / hypotenuse In this diagram, can you determine what sine theta is? Best Response You've already chosen the best response. ? no Best Response You've already chosen the best response. what is the opposite side to the reference angle theta? Best Response You've already chosen the best response. Best Response You've already chosen the best response. and what is the hypotenuse? Best Response You've already chosen the best response. 2 btw what is theta? Best Response You've already chosen the best response. Sorry :| that was supposed to be z :| If you know x and z, you can find theta. Because sin (theta) = x/z Got it? Best Response You've already chosen the best response. Ammara, so in an equation, Callisto is saying is that in the figure above, \[\sin(\theta) = \frac{ x }{ z }\] So, in your figure, the first problem: you know the hypotenuse (18) and the angle opposite x (32 degrees). So you know that, \[\sin(32) = \frac{ x }{ 18 }\] Then solve for x :) Best Response You've already chosen the best response. And that's the sine of 32 degrees of course, not radians. Best Response You've already chosen the best response. what do i do for why then? Best Response You've already chosen the best response. Best Response You've already chosen the best response. Well, you know that the angle opposite x is 32 degrees, and that the angle opposite the hypotentuse is 90 degrees. You also know the total sum of all interior angles of a triangle - so from that you can figure out the remaining angle you don't know, and do the same thing. Best Response You've already chosen the best response. i dont get it....how to i solve for y? Best Response You've already chosen the best response. and does x=9.5? Best Response You've already chosen the best response. Or alternatively, use the cosine which is the adjacent angle over the hypotenuse: \[\cos(\theta) = \frac{ y }{ z }\] Substitute for the adjacent angle (32 degrees again) and for the hypotenuse (18), and solve for y the same way you solved for x above. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Ok how about the next triangle... Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. so in the left triangle were solving for a. Best Response You've already chosen the best response. how would u solve it? please solve it for me....... Best Response You've already chosen the best response. Correct. And you know that the cosine of the angle (48 degrees) is the same as 10 divided by a - careful here, because a is the hypotenuse: \[\cos(\Theta) = \frac{ 10 }{ a }\] So, \[\cos(48) = \ frac{ 10 }{ a } \] Best Response You've already chosen the best response. how would i solve be now...i got 14.9 for a Best Response You've already chosen the best response. That's correct. Now solve the last figure? Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. You got it. Can you do the last triangle? Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... β€’ Teamwork 19 Teammate β€’ Problem Solving 19 Hero β€’ Engagement 19 Mad Hatter β€’ You have blocked this person. β€’ βœ” You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50f4f07ee4b0246f1fe36891","timestamp":"2014-04-18T21:29:52Z","content_type":null,"content_length":"243951","record_id":"<urn:uuid:2b4078c9-52bf-4009-8f13-7a229c5caadd>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Trick to Multiply by N-1 in Base N In base 10, multiplying digits by 9 produces a nice pattern. This Demonstration extends the trick, showing a simple way to multiply by in base . [1] B. A. Kordemsky, The Moscow Puzzles , London: Penguin, 1990, p. 91.
{"url":"http://demonstrations.wolfram.com/TrickToMultiplyByN1InBaseN/","timestamp":"2014-04-16T04:36:05Z","content_type":null,"content_length":"41731","record_id":"<urn:uuid:6aa2a17b-e38d-4c3a-8fd4-3dec16b61de8>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Transfinite Theory as an Extension of the Natural Numbers It is definition (although well motivated). The usual notions of counting tend to break down when speaking of infinite sets. Two sets have the same cardinality iff there exists a 1-1 correspondence between the sets. So, for example, the set of all positive integers and the set of all even positive integers. One is a proper subset of the other yet they have the same cardinality. Map 1 to 2, 2 to 4, 3 to 6, etc. Asking a question about how many elements each set contains doesn't really make sense anymore in the way we traditionally think of counting a finite number of things.
{"url":"http://www.physicsforums.com/showpost.php?p=3785319&postcount=2","timestamp":"2014-04-18T18:23:33Z","content_type":null,"content_length":"7262","record_id":"<urn:uuid:998d80e0-b951-4161-9cc8-d186d8fd72ed>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Matlab function for joint probability for more than 3 random variables. Maybe up to 10 rvs? Replies: 1 Last Post: Oct 17, 2013 9:32 AM Messages: [ Previous | Next ] Matlab function for joint probability for more than 3 random variables. Maybe up to 10 rvs? Posted: Oct 17, 2013 8:49 AM Hi matlab users, I have a question on trying to calculate the probability density with 3 or more random variables. A function that does something like what "histc" does for 2 random variables, but except it can be done for as many as the user want? and returns the number of count (occurence) for the various combinations of the random variable (i.e. rv1, rv2, rv3.....). where for example for illustrative purposes each rv has 2 states. and if we use 3 random variables that gives us a total of 2*2*2=8 states; where the function would return the count for each 8 states. Except that there are more than 2 states for each random variable and there are 3 or more random variables. Is already a built in function to do it? or an easier way to do it? Thanks a bunch.:) Date Subject Author 10/17/13 Matlab function for joint probability for more than 3 random variables. Maybe up to 10 rvs? tsan toso 10/17/13 Re: Matlab function for joint probability for more than 3 random variables. Maybe up to 10 rvs? Steven Lord
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2602380","timestamp":"2014-04-16T19:06:51Z","content_type":null,"content_length":"17936","record_id":"<urn:uuid:60db78ea-8a36-47c9-beee-41f5a7d82b3f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
Another Revolution in Physics Crackpottery: Electromagnetic Gravity It’s that time again – yes, we have yet another wacko reinvention of physics that pretends to have math on its side. This time, it’s β€œThe Electro-Magnetic Radiation Pressure Gravity Theory”, by β€œEngineer Xavier Borg”. (Yes, he signs all of his papers that way – it’s always with the title β€œEngineer”.) This one is as wacky as Neal Adams and his PMPs, except that the author seems to be less At first I wondered if this were a hoax – I mean, β€œEngineer Borg”? It seems like a deliberately goofy name for someone with a crackpot theory of physics… But on reading through his web-pages, the quantity and depth of his writing has me leaning towards believing that this stuff is legit. It’s hard to decide how to take this apart, because there’s just so much of it, and it’s all so silly! What Engineer Borg is on about is his revolution of physics. The central idea of his theory is that relativity is wrong – sort of. That is, on the one hand, he frequently cites relativistic effects as being valid and correct; but on the other hand, the fundamental idea of his theory is that all motion in the universe consists of orbits within orbits within orbits, all eventually centered on a fixed, unmoving body at the exact center of the universe. This is, of course, fundamentally gibberish… Relativity is fundamentally based, mathematically, on a particular kind of symmetry – and what that symmetry means is there is no preferred frame of reference. Take that away, and relativity falls apart. But Engineer Borg doesn’t let that concern him. After all, he’s got a whole new version of physics, and so he probably has his own version of relativity too. After all, he’s reinvented just about everything else. He rejects the idea of particles of matter – the particle/wave duality is, to Engineer Borg, utter nonsense. Everything is electromagnetic waves. What we see as β€œparticles” are really just electromagnetic β€œstanding waves”; so, you see particles don’t really exist. They’re just a coincidence – a wave pattern that happens to be persistent because of resonance, or interference – or, well, anything that produces a standing wave. Nothing can actually move; what appears to be particles is just waves, and if the β€œstanding wave” pattern is slightly unstable, you’ll get a moving wave – aka a moving particle. Does this make sense? No… The kinds of wave interference that he’s talking about just don’t work. He’s trying to create a basic source of all of these waves, and then claiming that they form perfectly stable interference and resonance patterns, even as things move around and interact. According to Engineer Borg, every possible interaction between these wonderful wave things always remains stable. After all, they have to, because otherwise, the theory wouldn’t work. Is there any math to support it? No. He waves lots of equations around at pointless times, but can’t be bothered to show how the math works for the actual hard stuff. So, what creates gravity? After all, that’s the part of his theory that we started out with, right? Well, he’s actually got two different explanation of that. But hey, consistency is a just a crutch for small minds, right? First, his introduction: This paper aims at providing a satisfying theory for the yet unkown mechanism for gravity. High frequency electromagnetic waves sourced by the fixed energetic core of the universe, referred to as Kolob, sometimes also referred to as zero point energy, is predicted from a steady state universe in oscillatory motion and pervades all space. Radiation pressure (Poynting vector) imbalance of such highly penetrating extragalactic incoming radiation, acting through all matter is held responsible for pushing matter together. It comes back to his β€œuniversal” frame of reference gibberish. He believes that there’s a fixed point which is the exact center of the universe, and that there’s this thing called Kolob at that point, which is radiating waves that create everything. One of his gravity theories is similar to Einsteinean gravity, but rewritten to be a part of his standing wave nonsense: To visualise the effect of non-linear electromagnetic element volume (space-time) at a centre of gravity, imagine the surface of a rubber sheet with a uniform grid drawn on it, and visualise the grid when the rubber is pulled down at a point below its surface. Such bending of space-time is a result of this non-linearity of the parameters present in the dielectric volume. One method of generating a non-linear dielectric volume is to expose the whole dielectric volume under concern to a non -linear electric field, with the β€˜centre of gravity’ being the centre of highest electric field flux density. An example of this is our planet, which has a non-linear electric field gradient with its highest gradient near the surface. Linear gravity does not exist, gravitational force is always non-linear (an-isotropic) pointing towards its centre. That is earth’s g=9.8 at ground level, but decreases at higher altitudes. Linear gravity results in a linear space-time and is the same as zero gravity. Similarly, an electromagnetic element exposed to a linear force field will reconstruct the objects in it at zero energy transfer. However, when exposed to a non-linear force field, an object moving within it will experience a force imbalance in the direction of the highest force flux density. So the attraction of matter to centres of gravity is not a result of matter itself, but of the spacetime β€˜stretching’ and β€˜compression’ infront and behind the moving object. A massless dielectric, that is space itself, would still be β€˜accelerated’ towards the point of easier reconstruction. The mass movement is just an indication of movement of its electromagnetic constituents. You see, the particles don’t really exist, because they’re just waves. But still, the non-existent particles continue to warp spacetime – just like relativity says they do – because of a β€œnon-linear electric field gradient”. Does this create gravity? Not really. It doesn’t work. But if you claim that gravity actually isn’t a fixed force, but varies, and ignore the stability of things like orbits, then you can wave your hands, throw around a lot of jargon, and pretend that it works. Then there’s his other theory of gravity – this is ignores that whole dielectric field thing, and turns it into a direct pushing force from those waves radiated by Kolob: This paper aims at providing a satisfying theory for the yet unkown mechanism for gravity. High frequency electromagnetic waves sourced by the fixed energetic core of the universe, referred to as Kolob, sometimes also referred to as zero point energy, is predicted from a steady state universe in oscillatory motion and pervades all space. Radiation pressure (Poynting vector) imbalance of such highly penetrating extragalactic incoming radiation, acting through all matter is held responsible for pushing matter together. So, the β€œzero point energy”, which he elsewhere says is the same thing as the cosmological constant – the force that is causing the universe to expand – is really creating a kind of pressure, which pushes matter together. Does he have any math for how this works? Well, sort of. It’s actually really funny math. You see, the main reason that we know that electromagnetic waves must be the actual force behind gravity is… They both follow inverse-square relationships: Despite the precise predictions of the equations of gravity when compared to experimental measurements, no one yet understands its connections with any other of the known forces. We also know that the equations for gravitational forces between two masses are VERY similar to those for electrical forces between charges, but we wonder why. The equations governing the three different force fields are: β–‘ Electrostatic Force F = KQ[1]Q[2]/R^2 …. K=1/4Ο€eo, Q= charge, R = distance β–‘ Gravitational Force F = GM[1]M[2]/R^2 …. G= gravitational constant, M= mass, R = distance β–‘ Magnetic Force F = UM[1]M[2]/R^2 …. U=1/u, M= magnetic monopoles strength, R = distance We learn that electrostatic forces are generated by charges, gravitational forces are generated by masses, and magnetic fields are generated by magnetic poles. But can this be really true? How could three mechanisms be so similar yet so different. Yeah… That’s pretty much it. They’re all basic inverse square relationships, therefore they must ultimately be the same thing. It all makes sense because he’s also reinvented the entire system of units – replacing SI with his own system called ST, which has only two units, S (space/distance) and T (time). All energy has unit T/S; all forces are in units (T/S^2). The three equations end up being exactly the same in Borg’s system, because he’s redefined the units so that charge, magnetic field, and mass are all the same – so the only difference between the equations are the constants G, U, and K. Why does that make sense? Well, because according to Engineer Borg, units analysis is fundamental to figuring out how things work. Any two things with the same unit are the same thing. So, since in Borg physics, all forces are T/S^2, that means that all forces are the same thing: Analysing the three force field equations, one immediately observes that each one has got its own constant of proportionality, but otherwise, seem to be analogous to one another. Looking at the SI units of force that is kg*m/s2 doesn’t help much, but here is where the new ST system of units comes to rescue. The similarity between them can be best explained by analyzing the space time dimensions of force itself. The dimensions of ANY force field in ST units are T/S2…. T=time, S=distance. So, we see that the inverse square law (S-2) is not something directly related to magnetism, electric fields or gravity, but is contained in the definition of force itself. The spacetime diagram shows how one can β€˜pinch’ space in the time direction in the presence of a force field. The geometric relation between space and time, or the relation between time and disk surface area is the same relation between energy (T/S) and distance (S). This is also confirmed by the mechanical law Force = Energy/ distance. This means that all forces can be accounted for by electromagnetic energy, in other words the effect of ANY force field must be electromagnetic in nature. It is therefore logically evident that the gravitation mechanism is also electromagnetic as for all other forces. Yup, that’s it, it must be electromagnetic, because everything is electromagnetic, because the units match. And since it’s electromagnetic, and everything electromagnetic is ultimately created by β€œzero point energy” radiated by Kolob, that means that it’s all part of the grand revolving universe centered around Kolob. And don’t forget, because Engineer Borg can’t stress this enough: the math all works, because the units match. 1. #1 Paul King February 20, 2007 The reference to Kolob probably means that he is a Mormon (or ex-Mormon). It’s the planet where God is supposed to live according to LDS doctrine. 2. #2 s. zeilenga February 20, 2007 Can I just say, I love gravity? 3. #3 Blake Stacey February 20, 2007 A thousand years is as a day in the Lord’s sight, because years and days both have units of T. It’s all so clear to me now! 4. #4 DieFundie February 20, 2007 I take umbrage at your abject dismissal of the idea of particles being standing waves. Whatever other hokum this person slings vis-a-vis β€œthe center of the universe”, this idea is spot-on. See Electron Holography by Akira Tonomura. Assuming that no particle exists makes Quantum Electrodynamics function. Indeed the particle does not β€œtake all possible paths”, leading to such ridiculous assumptions as the many-worlds theory. However, all the possible paths do contribute to the observed path. Interference is the key. 5. #5 Michael Zappe February 20, 2007 I do have to say, he actually stops short of most GR researchers. It’s nice to just reduce everything to centimeters… 6. #6 MarkP February 20, 2007 I’ve always found it damning of these alternative theories that not only do they disagree with mainstream science, they disagree with each other as well. Adams disagrees with Borg, and neither has anything to say in support of chiropractic, and none of the β€œknowledge” of those three has any positive implications for astrology. Oh, the alternative theorists will often vocalize support for each other, but the fact remains that their theories are like little islands of alternate unique realities, forever seperated. Contrast that to mainstream science, which paints a vast continuum where one can slide easily from one discipline to another, and show the consistency in each case, in any direction. This meta-argument has the advantage of not requiring specialized information and terminology often tossed about by cranks in order to bamboozle and confuse. 7. #7 SLC February 20, 2007 I know this is probably completely irrelevant to Mr. Borg but how does he explain what keeps the atomic nucleus together when all the charged particles therein have the same charge? If only electromagnetic forces exist, the atomic nucleus could not exist. 8. #8 Blake Stacey February 20, 2007 So, over at Ed Brayton’s corner of ScienceBlogs, they’re discussing β€œConservapedia”, a wiki which is meant to counter the β€œconsistent anti-American and anti-Christian bias in Wikipedia entries”. On their list of grievances, they say the following: Wikipedia has many entries on mathematical concepts, but lacks any entry on the basic concept of an elementary proof. Elementary proofs require a rigor lacking in many mathematical claims promoted on Wikipedia. They define an β€œelementary proof” thusly: he term β€œelementary proof” or β€œelementary techniques” in mathematics means use of only real numbers rather than complex numbers, which relies on manipulation of the imaginary square root of (-1). Elementary proofs are preferred because they are do not require additional assumptions inherent in complex analysis, such as that there is a unique square root of (-1) that will yield consistent results. Mathematicians also consider elementary techniques to include objects, operations, and relations. Sets, sequences and geometry are not included. The prime number theorem has long been proven using complex analysis (Riemann’s zeta function), but in 1949 and 1950 an elementary proof by Paul Erdos and Atle Selberg earned Selberg the highest prize in math, the Fields medal. Sets are not elementary? Of course, the complaint in Conservapedia’s list is groundless, since a Wikipedia article entitled β€œElementary proof” does exist. It’s a stub, which begins like this: In mathematics a proof is said to be elementary if uses only ideas from within its field and closely related issues. The term is most commonly used in number theory to refer to proofs that make no use of complex analysis. My mind is boggling, so I’ll leave this alone now. 9. #9 Mark C. Chu-Carroll February 20, 2007 Borg claims that there are no particles. According to him, an atom is not made up of a nucleus of protons and neutrons, surrounded by shells of electrons. He says that the entire atom is a single electromagnetic standing wave. 10. #10 a little night musing February 20, 2007 Blake: wait, wait, wait: Wikipedia has many entries on mathematical concepts, but lacks any entry on the basic concept of an elementary proof. Elementary proofs require a rigor lacking in many mathematical claims promoted on Wikipedia. Non-elementary proofs have a well-known liberal bias, is it? Ohhh, my head hurts. Back to the Borg theory (I’m loving thinking of it that way): Mark, I’m just a poor geometer-playing-number-theorist and ex-physics-student, but is there any content at all to this part: Linear gravity does not exist, gravitational force is always non-linear (an-isotropic) pointing towards its centre. That is earth’s g=9.8 at ground level, but decreases at higher altitudes. Linear gravity results in a linear space-time and is the same as zero gravity. Similarly, an electromagnetic element exposed to a linear force field will reconstruct the objects in it at zero energy transfer. However, when exposed to a non-linear force field, an object moving within it will experience a force imbalance in the direction of the highest force flux density. Specifically, what does he mean by β€œnon’linear” (= β€œanisotropic”???) here? Clearly not what I mean when I say non-linear, unless I’ve missed something. Or, I guess what I’m asking (and keeping in mind how very long it’s been since I had any sleep) is, is this as garbled a passage of mathy-sounding blather as it seems? Or is there some (however small) germ of an idea here that might make it worth trying to figure out what the guy is saying? Oh my, I seem to have just asked you to write more on this subject. I apologize for that. A β€œyes” or β€œno” or β€œperhaps” will suffice, thanks. And you can count on my vote, FWIW, for the Koufax awards. Shall we have a big β€œget out the vote” drive? 11. #11 Mark C. Chu-Carroll February 20, 2007 I think that it’s all just blather. I don’t think that there’s any way of making that actually make sense the way he thinks it does. 12. #12 a little night musing February 20, 2007 Oh, just one more comment on the Conservapedia/Wikipedia thing and then I’ll shut up: Taking this out with someone else, I mentioned the elementary proof of the prime number theorem (due to ErdΓΆs and Selberg) which is β€œelementary” in the sense of not using the Riemann zeta function, but hardly β€œelementary” in the sense of β€œsimple”. And then I thought, well, if by β€œmore rigorous” you mean β€œmore demanding, wearing the reader out more throughly,”* then yes, the elementary proof is indeed β€œmore rigorous.” Somehow I doubt that’s what they had in mind. * Speaking strictly for myself, of course. YMMV. 13. #13 Blake Stacey February 20, 2007 Linear is one of those Humpty-Dumpty words which obfuscatory people take to mean whatever they want it to mean. Furthermore, because it has a set of valid uses only loosely related through etymology, one can make an β€œargument” by taking a statement which is valid under definition 1 and criticize it using definition 3. Consider, for example, the word science, which in common usage can take (at least) four fairly distinct meanings: 1. The scientific method β€” hypothesis, experiment, etc. β€” by which we discover truths about the world and grant them tentative acceptance based on evidence; 2. The body of facts which we have uncovered using this method; 3. The community of people engaged in employing this method; 4. The application of the facts (sense 2), which we can distinguish by the term technology. A typical non-argument might then go, β€œScience is the tool of the military-industrial complex [true under #4], and therefore [changing slyly to #1] empirical investigation is only the metanarrative of the patriarchal warfare culture.” One finds the same confusion, intentional or not, when β€œlinear equation” is bundled up with β€œlinear thought”. Glib assertions of social constructivists notwithstanding, one can deploy intuition, guesswork and other less-than-strictly β€œlinear” thought techniques in solving a problem which uses only linear equations. I’d argue that a big part of teaching physics involves stimulating students to do exactly this. There should be a word, a sonorous Lewis Carroll-style word, for this particular kind of mumble-ty-mumble. 14. #14 SLC February 20, 2007 Re ChuCarroll If there are no particles, how does he explain the periodic table? 15. #15 David Harmon February 20, 2007 Meems like there’s another of these… every time you turn around. 16. #16 Blake Stacey February 20, 2007 If there are no particles, how does he explain PYGMIES + DWARFS?! 17. #17 Chris' Wills February 21, 2007 Just a guess, as I couldn’t find an answer on the Borg site, but perhaps Xavier is a German/Austrian. This would explain the use of Engineer as a title akin to Doctor. In Germany, people with the requisite qualifications are refered to as Herr Engineer (Mr Engineer) just as medical doctors are addressed as Herr Doctor. In the anglo-saxon countries this seems odd as we use the term engineer for many different levels/types of jobs from natural daylight enhancement engineer (otherwise known as a window cleaner) upto Nuclear engineer. So we use it for those with no academic training at all upto those with PHds in Engineering. In Germany only a suitably qualified person is allowed (by law) to use the title engineer, France has the same type of rules, and it is legally enforceable. If Mr Borg is in fact a suitably qualified person from Germany or a country with a similair history then using Engineer as a title is not strange for his culture, just from our perspective. P.S. How do you stay calm when you come across such lunacy? 18. #18 Matthias Schinacher February 21, 2007 Just to expand on the comment by Chris’ Wills. The use of β€œDr.” as part of the name is common in Germany/Austria and people with a Ph.D do use this on a regular basis even outside any professional setting. This used to be the same for β€œEngineer” in the past in Germany, nowadays an engineer would use his/her academic title usually only if the qualification is relevant (like in a job interview). But in Austria, to this day, people are madly fond of any title or degree, and people will often refer to others or themselfs by way of job-title or degree. So my guess: the guy should be from Austria. 19. #19 Chris' Wills February 21, 2007 Thanks Matthias, It has been a long time since I was in Germany so the update is appreciated. I did notice that in his text he prefixed Ing to his name (Ingeneer?) so perhaps he is French or from Quebec? Though it isn’t common, in my experience, for French engineers to use it as a title anymore. 20. #20 Davis February 21, 2007 There should be a word, a sonorous Lewis Carroll-style word, for this particular kind of mumble-ty-mumble. I wouldn’t say it’s especially sonorous, but typically this is called β€œequivocation.” 21. #21 Davis February 21, 2007 Specifically, what does he mean by β€œnon’linear” (= β€œanisotropic”???) here? Clearly not what I mean when I say non-linear, unless I’ve missed something. What little meaning there is in that blather is completely trivial. Gravity is nonlinear (in the sense in which we math-folk think of the term), in that it’s proportional to 1/r^2, where r=the distance between two masses. It sounds like that bit is trying to dress up a simple fact in complicated-sounding language. A clear sign of crankiness. 22. #22 Mark C. Chu-Carroll February 21, 2007 I think that there’s a little bit more to his crankiness about non-linearity than just terminology. He seems to have this idea that units analysis has deep meaning – that things with the same unit must be the same thing, and that exponents in units must have geometric (or at least topological) meanings. So to him, the fact that gravity is an inverse square force between two bodies rather that a linear means that the interaction between the two bodies must mean that there is something geometrically non-linear about the relationship between the bodies. According to him, if gravity were really just an attraction between two bodies, where the forces on the bodies were simple linear forces towards their mutual center of mass, then gravity would have to be expressed in linear units – but since it’s quadratic, that means that there’s something wrong with the force model of gravity. It’s all nonsense, but I think that he’s doing more that just blindly flinging jargon; I think that he thinks that there is a lot of depth in the meaning of the non-linearity of gravity. 23. #23 TorbjΓΆrn Larsson February 21, 2007 How could three mechanisms be so similar yet so different. Yes, isn’t it an impressive that longrange simple forces are constrained to have the same relationship to distance? It doesn’t look like a coincidence. Unless, of course, you consider other forces, like shortrange ones. 24. #24 TorbjΓΆrn Larsson February 21, 2007 You seem to draw conclusions from meager support here. I take umbrage at your abject dismissal of the idea of particles being standing waves. How does holography lead you to identify particles with standing waves? Indeed the particle does not β€œtake all possible paths”, leading to such ridiculous assumptions as the many-worlds theory. How does path integrals lead anyone to MW theory? MW theory is an interpretation among others. It seems an interpretation of entanglement was Everett’s original motivation for his proposal. 25. #25 Norm Breyfogle February 22, 2007 Mark CC wrote: β€œThere should be a word, a sonorous Lewis Carroll-style word, for this particular kind of mumble-ty-mumble.” I came across this word on a # of websites the other day; it seems to fit here: SYLLABICATION: paΒ·taΒ·physΒ·ics PRONUNCIATION: pΓ€t-fzks NOUN: (used with a sing. verb) The French absurdist concept of a philosophy or science dedicated to studying what lies beyond the realm of metaphysics, intended as a parody of the methods and theories of modern science and often expressed in nonsensical language. ETYMOLOGY: French pataphysique, alteration of Greek ta epi ta metaphusika, the (works) after the metaphysics (pseudo-title of a work by Aristotle modeled on ta meta ta phusika, the (works) after the Physics, Aristotle’s Metaphysics) : epi, after; see epi- + metaphusika, metaphysics; see metaphysics. OTHER FORMS: pataΒ·physiΒ·cal (–kl) –ADJECTIVE I’ve also seen the words β€˜patapsychology and β€˜pataphor, carrying meanings you can guess. In fact, for its appropriateness, I’ve changed my essay β€œMetacosmology” (which I posted on Mark CC’s blog titled β€œMath is bad because it isn’t Christian”) to β€œβ€˜Patacosmology.” 26. #26 Norm Breyfogle February 22, 2007 Here’s my attempt to insert a link to the Wikipedia article on this and related words: 27. #27 Norm Breyfogle February 22, 2007 Interestingly (and in my opinion, perhaps appropriately) I’ve seen string theory and even quantum theory referred to as a kind of mathematical β€˜pataphore (or as β€˜pataphysics). Of course, this may be interesting to me merely because I have difficulty understanding quantum dynamics and I’m absolutely hopeless with string theory … 28. #28 Jonathan Vos Post February 22, 2007 Maxwell’s Silver Hammer Joan was quizzical, studied pataphysical Science in the home Late nights all alone with a test tube Maxwell Edison majoring in medicine Calls her on the phone β€œCan I take you out to the pictures But as she’s getting ready to go A knock comes on the door… Bang, bang, Maxwell’s silver hammer Came down upon her head Clang, clang, Maxwell’s silver hammer Made sure that she was dead… [Maxwell's Silver Hammer by The Beatles Album: Abbey Road Date: 1969] I always wonder if Maxwell Edison’s making an oblique reference to James Clerk Maxwell… Relates to Thomas Edison, who in an ad hoc way was applying Maxwell-Faraday results… Or am I being too nonlinear? 29. #29 Chris' Wills February 22, 2007 On the importance he places on the units, this may come from his engineering training. A long long time ago, when I was reading engineering the lecturers repeatedly told us to check the dimensions on both sides of the equation; by this they meant check that the units on both sides of the equation are the same (a good check as it happens, as was doing rough magnitude checks (no calculators allowed in my day)). This could explain the importance that he places on the units and the relationship he makes between the units and geometry. Different uses of the word dimensions. Oh, he also appears to claim to have created a perpetual motion machine (more out than goes in), though as he only claims a COP of 1.75 it is doubtful that he has. 30. #30 Norm Breyfogle February 22, 2007 Jonathan Vos Post: I’ll be … I didn’t know the word pataphysical was in that Beatles tune, though I well recognize the song. Thanks, Jonathan! 31. #31 Norm Breyfogle February 22, 2007 Heh. Now I see the ref to that Beatles song right there on the Wiki article page I linked. 32. #32 Norm Breyfogle February 22, 2007 Wow, talk about nonlinearity. The Wiki article about that Beatles song indicates concidences between its lyrics and the Manson murders and trials, which occurred a month after the song was written. I suppose there’s a β€˜pataphysical explanation for how McCartney exhibited the power of prophecy (lol). Also, since in the song Maxwell kills a β€˜pataphysicist, perhaps the silver hammer represents an alternate tool of Occam? Perhaps Maxwell Edison’s middle name is Occam? 33. #33 TorbjΓΆrn Larsson February 22, 2007 by this they meant check that the units on both sides of the equation are the same (a good check as it happens Interestingly, units contains more information than that. This permits predicting relationships between quantities and scaling laws in phenomena, even when they are too complex for an explicit Examples of such information is knowing that each dimension form a group, scalar arguments are dimensionless, vector components are orthogonally distinct, dimensionless constants are naturally of order unity, and of course Buckingham’s theorem on the number of independent parameters. http://en.wikipedia.org/wiki/Dimensional_analysis has a rather thorough description. (But it could be a subject for a basics physics article too, I guess.) So yes, what’s not to like about 34. #34 Victor February 23, 2007 well i am an artist and a very creative person i know i can explain anything about this theory, yesh i will make it up for you it is very simple, you can explain the periodic table by saying that it is just our interpretation of the amptitud and Frequency of the standing wave, also i have found the center of the univerce or kolob as we the creative people now call it i have the mathematical equations that prove it but this thing is bad for that tipe of notation, but now think of it we are made of just energy, like richard bach said in jonathan savior seagul just think, by electro magnetic refraction we could be cloned or a projection of that magnetism β€œastral projection” can be used for remotely studing the universe, you see when we die we come one with kolob and therefore are recycled into the evrything, yesh evryone knows that is what is keeping my feet stuck to the ground. so if any of you have any doubt with this theory i will iluminate you, just aslong as you can forgive my aufull speling in inglish. much love to all you haters. and remember if you do not know evrything about the universe anything is posible to be the truth. 35. #35 W. Kevin Vicklund February 23, 2007 yesh i will make it up for you The only relevant portion of Victor’s post. 36. #36 TorbjΓΆrn Larsson February 23, 2007 The only relevant portion of Victor’s post. Obviously, since he having β€œmy feet stuck to the ground” is refuted by the rest of the gibberish. 37. #37 Andrew Wade February 23, 2007 Linear gravity results in a linear space-time and is the same as zero gravity. Basically if you have a gravity force, F, that is independent of position, you can remove it from your Newtonian equations of dynamics through through the appropriate coordinate transformation. I don’t believe this is true of Special relativity though (at least not without making a complete hash of your equations), and in General relativity the premise isn’t well-defined to start with. All this is assuming that the author has no clue what β€œlinear” or β€œisotropic” means, as the force I’m describing above is decidedly an-isotropic. As for β€œlinear”, it’s not really clear what that would mean in this context, β€œconstant” would be more usual term, or perhaps β€œhomogeneous”. 38. #38 John Owens February 23, 2007 You see, the main reason that we know that electromagnetic waves must be the actual force behind gravity is… They both follow inverse-square relationships: Wow, so electromagnetism and gravity are both really just our interpretation of effects naturally caused by sound waves! Who knew? 39. #39 Norm Breyfogle February 23, 2007 Victor: Try slipping on a banana peel or starting a pie fight instead. 40. #40 Andrew Wade February 23, 2007 I must admit I just skimmed the source and probably gave the author more credit than he deserves. After reading more I’m starting to think I’m just projecting sense onto a Rorschach inkblot of 41. #41 Norm Breyfogle February 23, 2007 Wooblots are fun and can even be profoundly meaningful(though often they’re just dumb), but they aren’t science. 42. #42 attotheobscure February 24, 2007 When I Hie to Kolob Joseph Smith invented the idea of Kolob. He claimed it is a planet or star nearest to the throne of God. Whatever that means. But hey, if your a 19th farmer-prophet trying to wow other farmers into unquestioning obedience before you marry their wives and daughters, you got to pull out all the bells and whistles to appear like the real deal. The Mormon doctrine of Kolob is one of the red flags that those investing the LDS religion usually find waving in their faces that helps them to a conclusion that Joseph Smith was a crackpot just pulling strange ideas out of his derriere. Their is even a painfully old-school hymn lauding Kolob. If you could hie to Kolob In the twinkling of an eye, And then continue onward With that same speed to fly, Do you think that you could ever, Through all eternity, Find out the generation Where Gods began to be? Or see the grand beginning, Where space did not extend? Or view the last creation, Where Gods and matter end? Methinks the Spirit whispers, β€œNo man has found β€˜pure space,’ Nor seen the outside curtains, Where nothing has a place.” The works of God continue, And worlds and lives abound; Improvement and progression Have one eternal round. There is no end to matter; There is no end to space; There is no end to spirit; There is no end to race. There is no end to virtue; There is no end to might; There is no end to wisdom; There is no end to light. There is no end to union; There is no end to youth; There is no end to priesthood; There is no end to truth. There is no end to glory; There is no end to love; There is no end to being; There is no death above. There is no end to glory; There is no end to love; There is no end to being; There is no death above. Those last three verses are particularly creative and must have taken years to write. For more on Kolob and to listen to the Mormon hymn about it visit this site. 43. #43 Jonathan Vos Post February 24, 2007 Fascinating song. I’ve read quite a number of papers on Medieval Jewish Philosophy. The point is, quite a bit of this is hairsplitting argument on whether Space is infinite, and whether Time is infinite. The centuries-long conversation between Islam, Christianity, and Judaism is fascinating, albeit pre-Science. The key person in all this? Aristotle. Once his works were translated from Greek to Arabic, and from that to Hebrew and European vernacular, everyone had to choose sides: Pro-Aristotle, anti-Aristotle, pro-Arabic-interpretation-of-Aristotle, anti-Arabic-interpretation-of-Aristotle, pro-Hebrew-commentary on-Arabic- interpretation-of-Aristotle, anti-Hebrew-commentary on-Arabic- interpretation-of-Aristotle,… and so on, ad infinitum. Mormon Theology might be taken in the context of this pre-scientific speculation, and as modernized by the Fantasy and Science Fiction of the former #1 Mormon playwright (now displaced by Neil Bute): Orson Scott Card. 44. #44 Victor February 24, 2007 I am sorry i am a layman! i never ment any disrespect to the inteligent people posting here, the advance of science is the most noble persuit for a human being and as a painter i have taken as my topic glorifing this search. But even in my ignorance just going over this stuff this ficcion of mr. engenier borg i can tell it is a bunch of well toght out b.s. again MarkCC i never ment dissrespect to you or you’r noble calling. and as far as I can see with my minds eye science will replace religion one day soon, evry one will realise that what all religions intend is to explain the universe and that they do with out following a method. all of you that here post in use of reason, i bow to you for you are as priest to my humble sigth. 45. #45 TorbjΓΆrn Larsson February 24, 2007 Wooblots are fun At least they admit a fun term. Thank you Norm for introducing such a beautiful word to me! I will immediately go and whack some wooblots. I think I saw some in the web over there… Pro-Aristotle, anti-Aristotle, pro-Arabic-interpretation-of-Aristotle, I heard that song before. How does it go now; β€œ99 Aristotles of beer”? Or perhaps β€œAristotle drove me to the bottle”? 46. #46 Chris' Wills February 25, 2007 So yes, what’s not to like about dimensions? Posted by: TorbjΓΆrn Larsson>> The other simple thing we did with dimensions/units in engineering was, of course, to help us work out the model to describe the experimental results. You initially balanced the dimensions for the inputs against the outputs. Thanks for the wikipedia article link, I’ll have to read that a few times I suspect but it is fascinating. 47. #47 Norm Breyfogle February 26, 2007 Don’t know if I’m the first to do so, but I made up β€œwooblot” on the spot. Maybe the Lewis Carroll-style word (or phrase) Mark CC is looking for is β€œβ€˜pataphysical wooblot”? TorbjΓΆrn wrote, β€œI will immediately go and whack some wooblots. I think I saw some in the web over there…” Don’t forget to bring Occam’s hammer! 48. #48 tom February 26, 2007 anyone familar with tom bearden? you can check his credentials. in fact you can go knock on his door down in huntsville, ala and he’ll tell you things that will blow your mind while he tells you the formula for free energy and the second half of the receipe that tesla found over 100 years ago but nobody wants to acknowledge as valid. i don’t know about the blaze stuff but it fits into this pix somewhere. and bearden will be happy to show you how orthodox science has done to tesla exactly what you accuse blaze of doing. but watching this vid and a little elementary checking into his credentials, claims, and the history lesson will save you the trip to huntsville and maybe save us all from the quakesville we are currently serving… http://novakeo.com/?p=806 here’s his website… http://www.cheniere.org/ here’s some bio i picked up off the web on the retired army lt. colonel and grad of guided missle staff officer’s course… http://homepages.ihug.co.nz/~sai/Beard_bio.htm quote: He and his wife Doris live in Huntsville, Alabama where Tom is retired from aerospace, continues private research, and serves as a special consultant to industry on scalar electromagnetics there is other stuff elsewhere from other visionaries/crackpots. like buckie fuller. like michael tsarion. and others. connect these dots and then start figuring out how to develope instead of how to dis-credit. how to prove it instead of how to dis-prove it. maybe the way to do that is to develope it and let it prove or dis-prove itself. you might as well get on board cuz its already being done. you wouldn’t want to be left behind would you? then, if we can do something about the ultimate challenge, the human condition, and not blow ourselves up by using this stuff for sinister agendas we WILL be able to become fossil fuel independent, we WON’T have to hijack soverign nations for their resources, and we CAN get on with cleaning up this planet. maybe you’d rather be part of the opposite. part of the problem instead of part of the solution. the choice is ours. 49. #49 Mark C. Chu-Carroll February 26, 2007 I haven’t heard of Bearden, but I will take a look. But you’re playing a classic stupid rhetorical game. Borg, the subject of this post, is wrong. It doesn’t matter what kind of politics you espouse; it doesn’t matter whether you’re an investor in oil wells or a scientist working on cheap clean energy: facts are facts. Borg is wrong. His stuff makes no sense – it is utter quackery. As it happens, I’m very liberal. I hate the idiotic war we got ourselves into. I’m an environmentalist who gives part of my income to a variety of organizations that try to either preserve untouched lands, or clean up polluted land. I hate the way that various business interests sabotage attempts to find methods of conserving energy and/or producing clean energy. But none of that has anything to do with whether crackpottery like Borg’s gibberish has any shred of validity to it. Wanting something like a perpetual motion machine to generate power doesn’t change the fact that creating a perpetual motion machine is impossible. Facts are facts; math is math: and the facts and the math show that Borg’s theory is a crock of nonsensical gibberish. To decide that any opposition to crackpot nonsense is equivalent to actively supporting everything bad about the status quo is just dumb. It pretty much guarantees that you’re going to do nothing but waste your time. In terms of energy, we could spend our time working to reduce energy consumption, and find clean ways of producing it that minimize environmental damage. Or we could spend our time shouting about how wonderful the free power provided by Borg’s electromagnetic fields would be – and then waste time, money, and resources building something that we could have known wouldn’t work, because it’s based on a nonsensical theory that bears no resemblance to reality. Finally, in response to your list of validated crackpots, I’ll just bring back my classic response. They laughed at Einstein. They laughed at Edison. They laughed at laughed at Tesla. And they laughed at Bozo the clown. 50. #50 tom February 26, 2007 thanks for your attention. i’ll respond to each of your 8 paragraphs… 1) good 2) perhaps after bearden you’ll have a different opinion. frankly i don’t care about your opinion of me. what i care about is your reaction to what bearden has to say and what you then do next. you run a very well respected site here. it is appreciated. 3) you’re quakery conclusion may well be right. my physics is limited to 1 semester in high school over 40 years ago. i never said borg is right. bearden may not be either. but to a layman, he sure sounds like he’s on to something that we could have been on to many years ago. and, he’s transparent and, unlike borg, is right here in front of our noses. you can go shake his hand. i have some friends who are discussing both blaze and bearden and are drawing parallels. just how i don’t know. but my understanding is it has something to do with the 2nd half of the receipe. the shadow. the anti. the thing. the mirror. thats the same thing tesla was accused of quakery over isn’t it? 4) me too. 5) i don’t think what bearden is talking about has anything to do with a perpetual motion machine. when i think of the word machine i see hardware. that will never happen. one might be inclined to call how the forces of the universe work a machine though. 6) you miss my point my friend. perhaps it was dumb of me to attempt to make it. you did much better than i did in your para 8. this was my point. 7) agreed. 8) agreed. 51. #51 Andrew Wade February 27, 2007 here’s his website… http://www.cheniere.org/ Ooomph. I’m reading his explanation. It’s been a while since I’ve done this physics, but I don’t think he’s correct. Discussion 1: Potentials are real and force fields are derived. The old notion that potentials were merely mathematical conveniences has long been falsified, particularly by the Aharonov-Bohm effect {2}, extended to the Berry phase {10}, and further extended to the geometric phase {11}. There are some 20,000 physics papers on geometric phase, Berry phase, and Aharonov-Bohm effect. In quantum electrodynamics, potentials are primary and force fields are derived. Physicists nowadays don’t much worry about what is β€œderived” and what is β€œreal”; it’s a distraction that has no bearing on what theories actually predict. The Aharonov-Bohm effect is but one of a large number of non-local effects in quantum mechanics; and while very strange is indeed well-established physics. The force fields only exist in mass, and are the effects of the interaction of the β€œforce-free fields” in space that exist as curvatures of spacetime. This is just plain wrong. Leaving aside the question of what it means for a force field to β€œexist in mass”, electromagnetic fields/potentials/whatever do not exist as curvatures in spacetime. These fields can exist in perfectly flat spacetime, and in fact the relevant equations are only known for flat spacetimes. Gravity is an effect of curved spacetime, but the (β€œgravity”) metric of general relativity is a very different beastie than the E.M. potentials being discussed here. There are no force fields in space; there are only gradients of potentials. Spacetime itself is an intense potential. Quoting Feynman {12}: I suppose the metric of GR is vaguely analogous to a potential… The Feynman quote is unrelated. The distinction between E-field and B-field is blurred. As Jackson {13} points out: … This is fine. In other words, one can have a magnetic component and at least partially turn it into an electric component, or vice versa. This is important to the MEG’s operation. And this is not. One can partially β€œturn” a magnetic component into an electric component through a coordinate transformation. This is not a physical process, what is occurring is that there there is there are many ways to mathematically describe the same situation, and some of those descriptions may have, say, magnetic fields only, and some may have both magnetic and electric fields. This is what happens with electromagnetism in special relativity. The author appears to be confusing these coordinate transformations with the guage transformations below. Guage transformations are also transformations between equivalent mathematical descriptions of the same situation, but they’re not the same transformations as the coordinate transformations here. Skipping ahead… It is stressed that, in the AB effect, a regauging has taken place. This is bogus. If you don’t get the same outcome with the AB effect for all guages, then your theory is inconsistent. The potential outside the localization zone has been freely changed, with an extra spacetime curvature and extra energy transferred there by gauge freedom, at no cost to the operator. In the AB effect, the potential outside the localization zone has indeed been changed (for whatever guage you wish to use), but there is no extra energy and no spacetime curvature there. At least not in standard physics, and the author does not explain how he calculates the energy density of an E.M. potential. The author goes increasingly off the rails from here on out, but there’s only a couple more points I wish to address: The special nanocrystalline core material used in the MEG has a very special characteristic: The material itself freely localizes an inserted B-field (from the input coil, or from a separate permanent magnet, or both) within the core material itself. His β€œvery special characteristic” is shared by all soft magnetic materials. It may be special in how well it does this; in transformers even small losses to hysteresis can be significant. By inputting nearly rectangular pulses to the input coil, the rise time and decay time of each pulse edge produces a resulting sharp change in the external A-potential, producing an E-field by the equation E = A/t. This is a well known phenomenon called induction, and has bugger-all to do with the Aharonov-Bohm effect. Crucially, induction must work both ways; if you try and tap this E-field, you will induce a field in the source coil, and the source will need to do work to overcome it. There is no free lunch to be had here. In conclusion, there is a reason this person is hawking his wares on a website to people who don’t know the physics rather than publishing papers in peer-reviewed journals with peers who do know the physics. 52. #52 Caradoc February 28, 2007 At least Neal Adams draws a wicked Batman. 53. #53 Prof.Debono March 12, 2007 To Mark and all, I’ve been a close collegue to Xavier for some years, and I am really disappointed about the way you are treating this respectful person here. Xavier is considered a typical genious in our country, which by the way is far from the US. For those not used to Borg as a surname, you will find it very common in Finland, Sweden and some European countries. The title engineer is required to be stated by law and virtually becomes part of the qualified persons’ name. He is also well respected by other professors across europe. You can see this from the fact that he has been granted the use of some european university labs to conduct his research. See for example: Let me remind you, he preceeded NASA in their same experiment, and was the first one to publish the results regarding the vacuum tests. He has setup various research teams in his own university where I still teach, he is very creative with experiments and as far as I know, most of his research makes a lot of sense…yes, including the EMRP gravity theory. As he says, this topic has been given serious thought by many scientists like Lorentz, H.Poincare, F.Brush, Secchi, Leray, V.Thomson, Schramm, Tait, Isenkrahe, Preston, Jarolimek, Waachy, Rynsanek, Darwin, Majorana, J.K.Harms, Sulaiman .. of whom we have other of their own theories established in our present books. We also have the proof that Newton himself derived his equation for gravity using the same principle. Also, you do not have to scandal yourself when he states that matter is electromagnetic standing waves, that’s the biggest achievement De Broglie is known for. Regarding his Space time system of units, he seems to be the first person on earth who has numerically derived a multitude of experimentally found constants using his simple and yet genious unified units table. He shows all his maths, and all his results, and no body here could find any cheat or inconsistency: For the lazy ones, he also worked out a java converter which works perfectly: and any one understanding java, can look at the java source and see that the results are being honestly worked out. As to relativity, it seems that he is more against GR than SR, and he has good reasons for that. As I see it, GR can be explained in terms of EMRP for the condition c=constant, which we know is Regarding references to Kolob, he is definitely not an ex-mormon, in fact he clearly stated his source of information, and said that Kolob just fits perfectly into his scientific model, so he kept the same name. All astromoners know that our galaxies rotate around some other point in space which is possibly rotating as well around some other point. We have no proof that the heirarchy given by him is not the correct one. I see his theories agree with Mach’s principle more than Einstein’s GR does. I’ve been teaching mainstream topics all these years, and I can tell you, that when one analyses the fundumental laws of our established laws of physics, he finds them to be very vague and giberish. So, think twice before discrediting him, as his theories might as well be in tomorrows books. 54. #54 Mark C. Chu-Carroll March 12, 2007 For all your complaints about my post on Borg, you don’t actually address any of my complaints in anything approximating a real way. The math of Borg’s theory is completely invalid – as I He can handwave away relativity all he wants – but parts of his math are absolutely dependent on the math of relativity – in particular, the group theoretic symmetry of relativity. You don’t get to pick and choose when to apply the theory without justification – but that’s exactly what he does. When the math of relativity is convenient, he uses it. When it’s not, he drops it. That’s just indefensibly wrong. 55. #55 TorbjΓΆrn Larsson March 12, 2007 For those not used to Borg as a surname, you will find it very common in Finland, Sweden and some European countries. The jokes is on the Borg’s of engineer occupation, since rumor has it BjΓΆrn Borg’s singleminded concentration on the playing field was the in-joke behind naming the android collective opponent of Star Trek Next Generation as β€œthe Borg”. Charles Darwin was mainly a biologist, and there is no reference that he worked on this. References, please. One can say a lot about the misconceptions of physics in the comment, so I will address just one which shows the problems: matter is electromagnetic standing waves, that’s the biggest achievement De Broglie is known for. De Broglie’s hypothesis, the one he got the Nobel prize for, was that any moving particle or object has an associated wave. But that is a quantum mechanical description, not an EM one. It led both to SchrΓΆdinger’s QM formulation where the absolute square of the wave function describes a probability density for finding a particle, or the (falsified) de Broglie-Bohm QM formulation where the wave is a pilot wave. The electron waves of de Broglie was a predecessor designed specifically to explain the quantization of light in emission or adsorption by atoms. 56. #56 Prof.Debono March 12, 2007 The Darwin I was referring to is another Darwin, who is known for studying Le Sage/ push gravity concept- reference G. H. Darwin. Proc. Roy. Soc. London, 76, 1905 I must say you have quite an original way to put complaints. You have mixed up 2 different parts from his website, one part taken from his EMRP theory, and another taken from a section where he showed that gravity is not generated by earth’s electric field gradient. Then you are shifting the theory of relativity status from that of a theory into that of an undeniable truth, and conclude that the author is totally wrong because he does not agree with the whole thruth. The only references I see which clearly make use of the relativity theory is where it comes to time dilation and length contraction effects, which strictly speaking is Lorentz work not Einstein’s. You also do not differentiate between GR and SR. I do not see Xavier supporting any part of GR, and the part he does support, which is mainly the Lorentz part within SR, can be easily derived by assuming matter to be a standing electromagnetic wave form. I see you complain about the existence of a central core, but you simply have no proof of the opposite. Personally, I think a central core within a stable universe makes more sense than a big bang originating from nowhere. His model is analogous to an atom, which we have evidence of existence. As for the big bang, it’s just another theory, which not even most scientists agree on. And yes, he explicitly talks about a fixed frame of reference, which would break down SR, but still consistent with Mach’s principle, something that Einstein was not able to keep within his final GR version. Again, we are not talking about religion, this is science, and Einstein could be right on some things and wrong on others. For example I have documentation of Newton’s work on perpetual machines, and yet, we do not learn about them because they form part of Newton’s wrong ideas. As to the maths of how the push gravity creates a force equal to GMM/r^2, you have to contact him. I recall he was recently working on some project in Russia but I am sure he will send you all maths you request, assuming you change your approach. As to Chris Wills, note that nowhere does this engineer claim anything about any perpetual or overunity machine. We worked together on the Aquafuel project, and our main aim was to discredit the overunity claim done at that time by a crazy French free-energy guy. Note a COP>1 is far from claiming a perpetual machine. In fact Borg’s findings on aquafuel show that you always need more energy input than you can obtain from burning the generated gas. He had also successfully converted a Mazda RX7 rotary engine to run on compressed aquafuel gas, accumulated during the week by a solar powered gas generator, as one of our university projects, all at his own expenses. 57. #57 Mark C. Chu-Carroll March 12, 2007 I have no intention of β€œchanging my approach”. I read Borg’s writings as he presents them, and they are pure, utter, bunk. One of the beauties of fields like math and science is that politeness *does not matter*, fame *does not matter*, status *does not matter*. If the math is wrong, the math is wrong – and whether I point out its problems in a mocking way or a kind and respectful way, the fact remains that *the math is wrong*. And even as you defend him, you’re basically admitting that you don’t have a clue about how he allegedly did his math. You’ll argue that his theory makes β€œmore sense” than relativity, while simultaneously admitting that you don’t understand his explanation of one of the most fundamental things that he claims to explain. Tell me, exactly how is it that you know that his theory β€œmakes more sense” than relativity if you don’t even understand how his fixed-reference-point theory can explain the observed behavior of gravity? 58. #58 Blake Stacey March 12, 2007 I just discovered this, and I need to get the poison out of my system. What better place to rant than a thread devoted to physics crackpottery? This example comes from, of all places, The American Scholar β€” yes, the quarterly publication of the Phi Beta Kappa society. Robert Lanza, a cell biologist, gives us β€œA New Theory of the Universeβ€œ. What was so terribly wrong with the old? Well, you see, we need a new idea, β€œbiocentrism”, which β€œbuilds on quantum physics by putting life into the equation”. Our science fails to recognize those special properties of life that make it fundamental to material reality. This view of the world–biocentrism–revolves around the way a subjective experience, which we call consciousness, relates to a physical process. It is a vast mystery and one that I have pursued my entire life. The conclusions I have drawn place biology above the other sciences in the attempt to solve one of nature’s biggest puzzles, the theory of everything that other disciplines have been pursuing for the last century. Such a theory would unite all known phenomena under one umbrella, furnishing science with an all-encompassing explanation of nature or reality. We need a revolution in our understanding of science and of the world. Living in an age dominated by science, we have come more and more to believe in an objective, empirical reality and in the goal of reaching a complete understanding of that reality. Part of the thrill that came with the announcement that the human genome had been mapped or with the idea that we are close to understanding the big bang rests in our desire for completeness. But we’re fooling ourselves. Lanza is at least candid about admitting that biology is not a likely place to look for answers to the fundamental Cosmic mysteries. Nevertheless, he perseveres: But at a time when biologists believe they have discovered the β€œuniversal cell” in the form of embryonic stem cells, and when cosmologists like Stephen Hawking predict that a unifying theory of the universe may be discovered in the next two decades, shouldn’t biology seek to unify existing theories of the physical world and the living world? What other discipline can approach it? Biology should be the first and last study of science. It is our own nature that is unlocked by means of the humanly created natural sciences used to understand the universe. Ever since the remotest of times philosophers have acknowledged the primacy of consciousnessβ€”that all truths and principles of being must begin with the individual mind and self. He says that the additional dimensions of string theory have not been observed (true), and then lumps Einstein’s spacetime into this same category (WTF?), along with the luminiferous ether (this guy’s problems are now in the WTF-complete class). Moments later, he’s delving into the Anthropic Principle: β€œModern science cannot explain why the laws of physics are exactly balanced for animal life to exist.” Quick, point this man to Victor Stenger. Or Sean Carroll. Or Skatje Myers, for crying out loud. Before we do that, we should also recall the wise words of Carl Sagan: There is something stunningly narrow about how the Anthropic Principle is phrased. Yes, only certain laws and constants of nature are consistent with our kind of life. But essentially the same laws and constants are required to make a rock. So why not talk about a Universe designed so rocks could one day come to be, and strong and weak Lithic Principles? If stones could philosophize, I imagine Lithic Principles would be at the intellectual frontiers. This comes from his book Pale Blue Dot (1994). Sagan’s chapter on the Anthropic Principle demolished the sort of pseudo-arguments that Lanza puts forth, yet thirteen years later, we still see the same inanities bubbling up like methane from the marsh of sophomoric vanity. A few paragraphs later, he’s bungling a description of quantum entanglement: Another aspect of modern physics, in addition to quantum uncertainty, also strikes at the core of Einstein’s concept of discrete entities and spacetime. Einstein held that the speed of light is constant and that events in one place cannot influence events in another place simultaneously. In the relativity theory, the speed of light has to be taken into account for information to travel from one particle to another. However, experiment after experiment has shown that this is not the case. In 1965, Irish physicist John Bell created an experiment that showed that separate particles can influence each other instantaneously over great distances. The experiment has been performed numerous times and confirms that the properties of polarized light are correlated, or linked, no matter how far apart the particles are. There is some kind of instantaneousβ€”faster than lightβ€”communication between them. All of this implies that Einstein’s concept of spacetime, neatly divided into separate regions by light velocity, is untenable. Instead, the entities we observe are floating in a field of mind that is not limited by an external Actually, it’s pretty well established that information doesn’t travel faster than light in Bell-type experiments. Only a scientifically illiterate buffoon can turn a phrase like β€œspacetime turns out to be incompatible with the world discovered by quantum physics”, when in fact quantum theory and spacetime mesh together like chicken and curry, in quantum field theory. As the villain once said in an old DangerMouse episode, β€œCurses curses squared!” Until now, I’d had a favorable impression of The American Scholar, thanks to Brian Boyd’s illuminating smackdown of β€œcultural critique” and Literary Theory (Autumn 2006 issue). Now, this unmitigated dreck has put me in a foul mood. Maybe I can erase it from existence by suitably altering the quantum vibrational frequencies of my perceptions. β€œWithout perception, there is in effect no reality.” If I don’t see it, then it can’t exist! 59. #59 TorbjΓΆrn Larsson March 13, 2007 The Darwin I was referring to is another Darwin, who is known for studying Le Sage/ push gravity concept- reference G. H. Darwin. Proc. Roy. Soc. London, 76, 1905 Ah, the physicist George Darwin. I could quibble with your implication of the more famous Charles Darwin, but that is besides the point. The point is that the kinetic theory of gravitation doesn’t involve EM at all, so you are listing irrelevant authorities, besides the irrelevancy of using authority at all. I also note that you don’t explain the factual points of your faulty recount of de Broglie’s results. This is in anyone’s eyes an admittance of error. As for the big bang, it’s just another theory, which not even most scientists agree on. β€œ[j]ust another theory”, the usual admittance of scientific illiteracy. And yet the concept and use of theories and their verification are so basic and simple things… Big bang theory is verified, in fact the details are know beginning to be clarified ( http://en.wikipedia.org/wiki/Lambda_CDM_model ). And again an irrelevant appeal to authority. The use of commenting in support of crackpot theories when it is readily apparent that the commenter not properly understand neither the crackpot theory nor the simple basics of science eludes 60. #60 John March 17, 2007 I don’t think its very wise to judge Mr. Borg by what Mr. Chu-Carrol has to say about him. Go to Mr. Borg’s site and see for yourself what he has to say about hard particle theory. http:// 61. #61 Mark C. Chu-Carroll March 17, 2007 You know, of one the things that’s fascinating to me is how just about every time I write about one of these bizzare crackpot theories, someone like you always comes along, and either implies or directly states that I’ve unfairly represented the theory. But somehow, those people – you included – never say anything to address any of my critiques. Borg’s theory is a pile of rubbish. And my post explains exactly why the mathematical part of it is rubbish. So if you want to claim that there’s something about Borg’s theory that isn’t rubbish, how about you try looking at my criticism, and telling me where it’s wrong? If you *can’t* do that, then you clearly also can’t understand the math that underlies Borg’s theory. And if you don’t understand it’s math, then you have absolutely no way to judge whether it’s an accurate description of reality. All of which reduces to: put up, or shut up. 62. #62 Vigilant February 11, 2008 Β«A non empirical derivation for all magic numbers has been shown in the work published by Xavier Borg [4], where all magic numbers, including the theorized magic 184, are derived systematically from a hyper geometrical model based on two simplex stacked structures within the nucleus. A highly simplified version of this is the shell model with a deformed harmonic oscillator potential and spin-orbit interaction.Β» 63. #63 Jonathan Vos Post February 11, 2008 Yes, yes, all true. And yet, I hasten to point out, under General Relativity, there is a non-crackpot intertpretation of electrogravitic and magnetogravitic effects. They are significant, for example, near charged, spinning, black holes. As wikipedia stubs (two subs, actually): (1) β€œIn general relativity, the tidal tensor or electrogravitic tensor is one of the pieces in the Bel decomposition of the Riemann tensor. It is physically interpreted as giving the tidal stresses on small bits of a material object (which may also be acted upon by other physical forces), or the tidal accelerations of a small cloud of test particles in a vacuum solution or electrovacuum solution.” (2) β€œIn general relativity, the magnetogravitic tensor is one of the three pieces appearing in the Bel decomposition of the Riemann tensor. The magnetogravitic tensor can be interpreted physically as a specifying possible spin-spin forces on spinning bits of matter, such as spinning test particles.” Not that lunatics usually demonstrate any computational ability with Riemannian or Pseudoriemannian spaces. Even though there are such nice things to say about the latter since John Forbes Nash, Jr., published definitively in the field. As Eric W. Weisstein clarifies: Suppose for every point x in a manifold M, an inner product < Β·,Β·>_x is defined on a tangent space T_xM of M at x. Then the collection of all these inner products is called the Riemannian metric. In 1870, Christoffel and Lipschitz showed how to decide when two Riemannian metrics differ by only a coordinate transformation. SEE ALSO: Compact Manifold, Line Element, Metric Tensor, Minkowski Metric, Riemannian Geometry, Riemannian Manifold. Besson, G.; Lohkamp, J.; Pansu, P.; and Petersen, P. Riemannian Geometry. Providence, RI: Amer. Math. Soc., 1996. Buser, P. Geometry and Spectra of Compact Riemann Surfaces. Boston, MA: BirkhΓ£user, 1992. Chavel, I. Eigenvalues in Riemannian Geometry. New York: Academic Press, 1984. Chavel, I. Riemannian Geometry: A Modern Introduction. New York: Cambridge University Press, 1994. Chern, S.-S. β€œFinsler Geometry is Just Riemannian Geometry without the Quadratic Restriction.” Not. Amer. Math. Soc. 43, 959-963, 1996. do Carmo, M. P. Riemannian Geometry. Boston, MA: BirkhΓ£user, 1992. 64. #64 Mikael February 3, 2010 MarkCC I suggest you do what you do best and leave the things you don’t understand to rest. If it does not make sense… try to wipe your blurred lens. You work for google you say? Hmmm and still look past a lot on a clear day? 65. #65 Anonymous June 20, 2010
{"url":"http://scienceblogs.com/goodmath/2007/02/20/another-revolution-in-physics-1/","timestamp":"2014-04-16T08:23:22Z","content_type":null,"content_length":"152606","record_id":"<urn:uuid:fa42f3f2-cdc1-44c0-9457-6782ebbe7b08>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
Fountain Hill, PA ACT Tutor Find a Fountain Hill, PA ACT Tutor ...I run indoor track, outdoor track, and cross country. In high school my 4x400m team got the school record at the Penn Relays and my junior year of college my 4x800m team got the school record in the indoor championship meet. I run everyday and I can be a great motivator to get up and going! 26 Subjects: including ACT Math, reading, calculus, statistics ...I enjoy finding the connections between all the different properties, and encouraging my students to do the same. A good foundation in pre-algebra will help through the rest of the math courses that follow. As a general rule, I believe in not only learning the processes and rule, but understanding how and why they work. 12 Subjects: including ACT Math, calculus, statistics, geometry ...While I was at UConn, I taught three different classes: one semester of Anatomy and Physiology and two semesters of Biochemistry. My students had the lowest fail rates for the last two semesters I was there. Since graduate school, I've taught at the high school and middle school level, doing tu... 17 Subjects: including ACT Math, chemistry, calculus, geometry ...I use many examples to illustrate various math concepts and I will review any math concepts multiple times to assist with student comprehension. My background includes a Master's degrees in Mathematics and Statistics, as well as Masters degrees in Computer Science and Electrical Engineering. I ... 12 Subjects: including ACT Math, calculus, geometry, statistics ...I've spent many years using it in my engineering career and I have also taught this at the high school Level. As an engineer, Excel has been a very effective tool in documenting and testing ideas. The graphing capability of Excel is almost always underestimated. 11 Subjects: including ACT Math, physics, geometry, statistics Related Fountain Hill, PA Tutors Fountain Hill, PA Accounting Tutors Fountain Hill, PA ACT Tutors Fountain Hill, PA Algebra Tutors Fountain Hill, PA Algebra 2 Tutors Fountain Hill, PA Calculus Tutors Fountain Hill, PA Geometry Tutors Fountain Hill, PA Math Tutors Fountain Hill, PA Prealgebra Tutors Fountain Hill, PA Precalculus Tutors Fountain Hill, PA SAT Tutors Fountain Hill, PA SAT Math Tutors Fountain Hill, PA Science Tutors Fountain Hill, PA Statistics Tutors Fountain Hill, PA Trigonometry Tutors
{"url":"http://www.purplemath.com/Fountain_Hill_PA_ACT_tutors.php","timestamp":"2014-04-20T13:44:19Z","content_type":null,"content_length":"23999","record_id":"<urn:uuid:a3282590-96ec-4a91-9d7a-211815990d50>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
Max square/rec area under the curve July 22nd 2009, 04:40 PM #1 Max square/rec area under the curve Hi everyone, I've been working on this optimization problem for awhile now. I guess I don't understand the steps. Here's my work. The Problem A rectangle is inscribed with its base on the x-axis and its upper corners on the parabola y = 4 - x^2. What are the dimensions of such a rectangle with the greatest possible area? My work Define variables... A = 2xy y = 4- x^2 setup problem: A(x) = 2x ( 4 - x^2) Find d/dx: A'(x) = 8 - 6x^2 Critical Points: 0 = A'(x) = { -1.1547, 1.1547} ~~Then am I supposed to use this formula??? ----> 4x + 3y ? Thanks for your time Hi everyone, I've been working on this optimization problem for awhile now. I guess I don't understand the steps. Here's my work. The Problem A rectangle is inscribed with its base on the x-axis and its upper corners on the parabola y = 4 - x^2. What are the dimensions of such a rectangle with the greatest possible area? My work Define variables... A = 2xy y = 4- x^2 setup problem: A(x) = 2x ( 4 - x^2) Find d/dx: A'(x) = 8 - 6x^2 Critical Points: 0 = A'(x) = { -1.1547, 1.1547} ~~Then am I supposed to use this formula??? ----> 4x + 3y ? Thanks for your time You don't need to use the formula $4x + 3y$ at all. You are exactly right, the Area of the rectangle is given by $A = 2x ( 4 - x^2)$ $= 8x - 2x^3$. So to find the maximum $\frac{dA}{dx} = 8 - 6x^2$ $0 = 8 - 6x^2$ $6x^2 = 8$ $x^2 = \frac{4}{3}$ $x = \pm \sqrt{\frac{4}{3}}$ $x = \pm \frac{2\sqrt{3}}{3}$. The maximum area would be at the point $x = \frac{2\sqrt{3}}{3}$. So put it into the formula for area. $A = 8x - 2x^3$ $A = 8\left(\frac{2\sqrt{3}}{3}\right) - 2\left(\frac{2\sqrt{3}}{3}\right)^3$ Thank you so much! I got 6.1584 for the MAX How would you find the Length and Width of the rectangle after that? well, if you look at prove it's post, the value of x that gives you the maximum area is already there. when x is that value, what is y? whichever is the greater of the two values becomes the length, the other is the width. So am I looking to do an perimeter calculation with my MAX (X)? Using the 4x + 3y formula. Also how do I get y, just algebra? well, you see, how you find y is already in the question itself. y = 4 - x^2. [edit] From there you can find x through the formula for area, as HallsofIvy mentions below.[/edit] Last edited by compliant; July 24th 2009 at 08:22 AM. Notice that while the height of the rectangle is y, the base length is 2x, not just x. The original problem says nothing about the perimeter. I don't understand why you keep mentioning it. hey sweet! I got... y= 2.66667 x= 2.3094 July 22nd 2009, 04:55 PM #2 July 22nd 2009, 05:01 PM #3 July 22nd 2009, 05:07 PM #4 July 22nd 2009, 06:37 PM #5 Jul 2009 July 24th 2009, 05:47 AM #6 July 24th 2009, 07:09 AM #7 Jul 2009 July 24th 2009, 07:27 AM #8 MHF Contributor Apr 2005 July 25th 2009, 05:26 PM #9
{"url":"http://mathhelpforum.com/calculus/95837-max-square-rec-area-under-curve.html","timestamp":"2014-04-16T05:59:27Z","content_type":null,"content_length":"57095","record_id":"<urn:uuid:3a6a687a-9473-4c94-b215-5fbe58802f59>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Degeneracy (mathematics) Mathematics is the study of quantity, space, structure, and change. Mathematicians seek out patterns and formulate new conjectures. Mathematicians resolve the truth or falsity of conjectures by mathematical proofs, which are arguments sufficient to convince other mathematicians of their validity... , a degenerate case is a limiting case in which a class of object changes its nature so as to belong to another, usually simpler, class. A degenerate case thus has special features, which depart from the properties that are In mathematics, properties that hold for "typical" examples are called generic properties. For instance, a generic property of a class of functions is one that is true of "almost all" of those functions, as in the statements, "A generic polynomial does not have a root at zero," or "A generic... in the wider class, and which would be lost under an appropriate small Perturbation theory comprises mathematical methods that are used to find an approximate solution to a problem which cannot be solved exactly, by starting from the exact solution of a related β€’ A point In geometry, topology and related branches of mathematics a spatial point is a primitive notion upon which other concepts may be defined. In geometry, points are zero-dimensional; i.e., they do not have volume, area, length, or any other higher-dimensional analogue. In branches of mathematics... is a degenerate circle A circle is a simple shape of Euclidean geometry consisting of those points in a plane that are a given distance from a given point, the centre. The distance between any of the points and the centre is called the radius.... , namely one with radius 0. β€’ A circle is a degenerate form of an ellipse In geometry, an ellipse is a plane curve that results from the intersection of a cone by a plane in a way that produces a closed curve. Circles are special cases of ellipses, obtained when the cutting plane is orthogonal to the cone's axis... , namely one with eccentricity In mathematics, the eccentricity, denoted e or \varepsilon, is a parameter associated with every conic section. It can be thought of as a measure of how much the conic section deviates from being circular.In particular,... β€’ The line The notion of line or straight line was introduced by the ancient mathematicians to represent straight objects with negligible width and depth. Lines are an idealization of such objects... is a degenerate form of a parabola In mathematics, the parabola is a conic section, the intersection of a right circular conical surface and a plane parallel to a generating straight line of that surface... if the parabola resides on a tangent plane. β€’ A segment In geometry, a line segment is a part of a line that is bounded by two end points, and contains every point on the line between its end points. Examples of line segments include the sides of a triangle or square. More generally, when the end points are both vertices of a polygon, the line segment... is a degenerate form of a rectangle In Euclidean plane geometry, a rectangle is any quadrilateral with four right angles. The term "oblong" is occasionally used to refer to a non-square rectangle... , if this has a side of length 0. β€’ A hyperbola In mathematics a hyperbola is a curve, specifically a smooth curve that lies in a plane, which can be defined either by its geometric properties or by the kinds of equations for which it is the solution set. A hyperbola has two pieces, called connected components or branches, which are mirror... can degenerate into two lines crossing at a point, through a family of hyperbolas having those lines as common asymptote In analytic geometry, an asymptote of a curve is a line such that the distance between the curve and the line approaches zero as they tend to infinity. Some sources include the requirement that the curve may not cross the line infinitely often, but this is unusual for modern authors... β€’ A set containing a single point is a degenerate continuum In the mathematical field of order theory, a continuum or linear continuum is a generalization of the real line.Formally, a linear continuum is a linearly ordered set S of more than one element that is densely ordered, i.e., between any two members there is another, and which "lacks gaps" in the... β€’ A random variable which can only take one value has a degenerate distribution. β€’ A sphere A sphere is a perfectly round geometrical object in three-dimensional space, such as the shape of a round ball. Like a circle in two dimensions, a perfect sphere is completely symmetrical around its center, with all points on the surface lying the same distance r from the center point... is a degenerate standard torus where the axis of revolution passes through the center of the generating circle, rather than outside it. β€’ A degenerate triangle A triangle is one of the basic shapes of geometry: a polygon with three corners or vertices and three sides or edges which are line segments. A triangle with vertices A, B, and C is denoted .... has collinear vertices. β€’ See "general position In algebraic geometry, general position is a notion of genericity for a set of points, or other geometric objects. It means the general case situation, as opposed to some more special or coincidental cases that are possible... " for other examples. Similarly, roots of a In mathematics, a polynomial is an expression of finite length constructed from variables and constants, using only the operations of addition, subtraction, multiplication, and non-negative integer are said to be if they coincide, since generically the roots of an th degree polynomial are all distinct. This usage carries over to eigenproblems: a degenerate eigenvalue (i.e. a multiply coinciding root of the characteristic polynomial In linear algebra, one associates a polynomial to every square matrix: its characteristic polynomial. This polynomial encodes several important properties of the matrix, most notably its eigenvalues, its determinant and its trace.... ) is one that has more than one linearly independent eigenvector. quantum mechanics Quantum mechanics, also known as quantum physics or quantum theory, is a branch of physics providing a mathematical description of much of the dual particle-like and wave-like behavior and interactions of energy and matter. It departs from classical mechanics primarily at the atomic and subatomic... any such In mathematics, the multiplicity of a member of a multiset is the number of times it appears in the multiset. For example, the number of times a given polynomial equation has a root at a given in the eigenvalues of the Hamiltonian operator gives rise to degenerate energy level In physics, two or more different quantum states are said to be degenerate if they are all at the same energy level. Statistically this means that they are all equally probable of being filled, and in Quantum Mechanics it is represented mathematically by the Hamiltonian for the system having more... s. Usually any such degeneracy indicates some underlying Symmetry generally conveys two primary meanings. The first is an imprecise sense of harmonious or aesthetically pleasing proportionality and balance; such that it reflects beauty or perfection... in the system. Degenerate rectangle For any non-empty subset See also External links
{"url":"http://www.absoluteastronomy.com/topics/Degeneracy_(mathematics)","timestamp":"2014-04-20T18:27:58Z","content_type":null,"content_length":"27794","record_id":"<urn:uuid:93f7e3f7-8aae-4498-b032-f8d9f4368a74>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Middle School Charters in Texas: An Examination of Student Characteristics and Achievement Levels of Entrants and Leavers Posted on August 23, 2012 Charter schools have proliferated in Texas and across the nation. The expansion of charter schools is now a popular reform effort for many policymakers on both the right and left of the political spectrum. To examine the efficacy of such policies, a number of researchers have focused on the effects charter schools have had on student achievement, most of which have found little difference in achievement between the two types of schools (CREDO, 2009; Zimmer, R., Gill, B., Booker, K., Lavertu, S., Sass, T., and Witte, 2008). Yet, there is still a relative dearth of information about the characteristics of students entering and leaving charter schools and how these characteristics might be related to school-level achievement. This is particularly true with respect to charter schools in Texas. Most of the work in this area has focused on student racial and ethnic characteristics, while a fair number of studies have examined special education status and English-Language Learner status of entrants. Very little research has focused on the academic ability of students entering charter schools, the student attrition rate of charter schools, and the characteristics of the students staying and leaving charter schools. This study seeks to ameliorate this paucity of information, particularly as it pertains to high-profile and high-enrollment charter schools in Texas. The findings reviewed in this section refer to the results for the most appropriate comparisonβ€”the sending schools comparisonβ€”unless otherwise noted. Full results are in the body of the report or in the appendices. The CMOs included in this particular study included: KIPP, YES Preparatory, Harmony (Cosmos), IDEA, UPLIFT, School of Science and Technology, Brooks Academy, School of Excellence, and Inspired Vision. Characteristics of Students Entering Charter Schools Differences in Average TAKS Z-Scores The differences are reported in z-scores in order to make the results across school years comparable. At both the 5^th- and 6^th-grade levels, students entering most of the CMOs in this study had TAKS mathematics and reading scores that were statistically significantly greater than comparison schools–in particular, schools that sent at least one child to the charter in question or schools located in the same zip code as the charter school. Figure 1 shows the differences for incoming 5th grade students. Note the extremely large differences for Harmony charter schools. While the differences for KIPP were small, they were still statistically significant. Figure 1: Difference in TAKS Math and Reading Z-Scores of Students from Sending Schools Entering and Not Entering Selected CMOs All differences denoted with a number were statistically significant at the p < .05 level Figure 2 shows the differences for incoming 6th grade students. Most of the differences were relatively large and positive for eight of the nine CMOs, thus showing that students entering these schools have substantially greater TAKS scores than students from sending schools. Again, the differences are particularly large for Harmony schools. The results for KIPP reflect only KIPP schools with 6th grade as the entry year. Figure 2: Difference in TAKS Math and Reading Z-Scores of Students from Sending Schools Entering and Not Entering Selected CMOs All differences denoted with a number were statistically significant at the p < .05 level In only a few cases was there no statistically significant difference in TAKS scores between those students entering a CMO and not entering a CMO. Only one CMOβ€”School of Excellenceβ€”had TAKS scores that were consistently lower than comparison schools. For CMOs with greater levels of achievement for incoming students than comparison schools, the differences tended to be both statistically significant and practically significant. In other words, the differences appeared to be large enough to potentially explain differences in the levels of achievement between the CMOs and comparison schools. This does not mean the CMOs did not have greater student growth than other schools (growth was not examined in this study), but that the differences in TAKS passing rates often cited by supporters of charter schools and politicians could potentially be explained by the initial differences in achievement levels between students entering the CMOs and comparison schools. Distribution of TAKS Mathematics and Reading Scores The following two figures document the distribution of TAKS mathematics and reading scores for students in CMOs and in the various comparison groups of schools. As shown in Figure 3, eight of the nine CMOs had greater percentages of students scoring in the top 40% of test-takers and lower percentages of students scoring in the bottom 40% of test-takers. The differences were greater than ten percentage points in each of the two groups for KIPP, YES Prep, Harmony, and UPLIFT. The differences for Harmony and UPLIFT approached 15 percentage pointsβ€”strikingly large disparities in the performance of incoming students. The compression of reading scores against the test score ceiling could explain the smaller differences in reading than in mathematics, but further analyses are needed to examine this possibility. Figure 3: Difference in the Percentages of 5^th Grade Students Entering the 6^th Grade with TAKS Mathematics Scores in the Bottom 40% of Scores and Top 40% of Scores for CMOs and Comparison Schools* * Comparison school percentage is based on the average of the results for comparison schools in the same zip code, schools in the same zip code and contiguous zip codes, and sender schools Figure 4: Difference in the Percentages of 5^th Grade Students Entering the 6^th Grade with TAKS Reading Scores in the Bottom 40% of Scores and Top 40% of Scores for CMOs and Comparison Schools* * Comparison school percentage is based on the average of the results for comparison schools in the same zip code, schools in the same zip code and contiguous zip codes, and sender schools Differences in Scores for All Students and Economically Disadvantaged Students (Students Entering the 5^th Grade) This section examines the TAKS math and reading scores of 4^th grade students identified as economically disadvantaged entering CMOs in the 5^th grade. Students entering a CMO were defined as not having been enrolled in the same CMO in the previous year. Further, a student must have been identified as economically disadvantaged to be included in the analysis. Finally, the analysis focuses only on those students enrolled in the 4^th grade in a β€œsending” schoolβ€”a school that sent at least one student to that particular CMO in at least one of the cohorts of students included in the analysis. Statistically significant differences are noted by inclusion of the difference in z-scores in the graph. If the difference between students entering the CMO and not entering the CMO was not statistically significantly different, then no number was included in the graph. TAKS Mathematics As shown in Figure 5, both all students and economically disadvantaged students entering KIPP and Harmony had greater TAKS math z-scores than all students and economically disadvantaged students from sending schools that did not enter the CMOs. Moreover, the differences for economically disadvantaged students were greater than for all students. For economically disadvantaged students entering KIPP, the difference was 0.260 standard deviations while the difference for students entering Harmony was 0.530. Both differences were quite substantial. Thus, economically disadvantaged students entering KIPP and Harmony has substantially greater prior mathematics scores than students from sending schools that did not enter KIPP or Harmony. Both all students and economically disadvantaged students entering the School of Excellence, on the other hand, had prior math scores that were lower than the prior math scores for students not entering the School of Excellence. The difference was smaller for economically disadvantaged students than for all students. Figure 5: Differences in TAKS Mathematics Z-Scores Between Economically Disadvantaged Students Entering and Not Entering Selected CMOs for All Texas Students and Sending School Comparison Groups (4th Grade Scores of Incoming 5th Grade Students) All differences denoted with a number were statistically significant at the p < .05 level TAKS Reading As shown in Figure 6, the results for reading were quite similar to the results for mathematics. For example, all students and economically disadvantaged students entering KIPP and Harmony had greater TAKS reading scores than students from sending schools not entering those CMOs. Further, the differences were much greater for economically disadvantaged students than for all students. For economically disadvantaged students entering KIPP, the difference was 0.229 while the difference for economically disadvantaged students entering Harmony was 0.495. Again, as with the differences in mathematics, the differences in reading were quite substantial for these two CMOs. Thus, economically disadvantaged students entering KIPP and Harmony were far higher performing than economically disadvantaged students from the very same schools that did not enter these CMOs. Figure 6: Differences in TAKS Reading Z-Scores Between Economically Disadvantaged Students Entering and Not Entering Selected CMOs for All Texas Students and Sending School Comparison Groups (5th Grade Scores of Incoming 6th Grade Students) All differences denoted with a number were statistically significant at the p < .05 level Differences in Scores for All Students and Economically Disadvantaged Students (Students Entering the 5^th Grade) TAKS Mathematics As shown in Figure 7, the differences in TAKS mathematics scores for economically disadvantaged students were statistically significant and positive for eight of the nine CMOs (School of Excellence was excluded from the graph, but had a statistically significant difference that indicated economically disadvantaged students entering the CMO had lower TAKS mathematics scores than students from sending schools not entering the CMO). All but one of the eight differences was at least 0.200 standard deviations and three of the differences were greater than 0.300 standard deviations. Further, and perhaps more importantly, the differences were substantially larger than the differences for all students for the eight selected CMOs except Inspired Vision. Thus, undoubtedly, the economically disadvantaged students entering the CMOs were substantially different than the students from the very same schools that did not enter the CMOs. Indeed, the economically disadvantaged students entering the CMOs had far greater levels of achievement than the economically disadvantaged students that did not enter the CMOs. Figure 7: Differences in TAKS Mathematics Z-Scores Between Economically Disadvantaged Students Entering and Not Entering Selected CMOs for All Texas Students and Sending School Comparison Groups All differences denoted with a number were statistically significant at the p < .05 level TAKS Reading As shown in Figure 8, eight of the nine CMOs had statistically significant differences in TAKS reading scores for economically disadvantaged students that revealed students entering the CMOs had greater TAKS reading scores than students from the sending schools not entering the CMOs (again, School of Excellence was excluded from the graph, but had a statistically significant, negative, and larger difference for economically disadvantaged students). All but one of the differences was greater than 0.150 standard deviations and four of the differences were greater than 0.350 standard deviations. As with the mathematics finding, this clearly demonstrates that economically disadvantaged students entering the CMOs had far greater levels of reading achievement than students from sending schools that did not enter the CMOs. Figure 8: Differences in TAKS Reading Z-Scores Between Economically Disadvantaged Students Entering and Not Entering Selected CMOs for All Texas Students and Sending School Comparison Groups All differences denoted with a number were statistically significant at the p < .05 level Difference in Economically Disadvantaged Students Figure 9 shows the results for the statewide comparison and the comparison between a CMO and other schools in the same zip code. KIPP, IDEA, Harmony, and UPLIFT all had a greater percentage of students entering the schools identified as economically disadvantaged as compared to all students in the state. Alternatively, when compared to schools located in the same zip codes as the CMOs, a lower percentage of students identified as economically disadvantaged entered the CMOs. Again, this shows that the comparison set of schools or students employed in an analysis can substantially alter the results. Indeed, in this case, comparing the percentage of economically disadvantaged students in the CMO to all students in the state suggests that the CMOs enroll a greater percentage of economically disadvantaged students. When the comparison group employed is students enrolled in schools within the same zip code as the CMO, however, an entirely different picture emerges. Indeed, now a lower percentage of students entering the CMOs were designated as economically disadvantaged. Figure 9: Differences in the Percentage of Economically Disadvantaged Students Entering the 6^th Grade between Students Entering Selected CMOs and not Entering CMOs From All Schools and Schools in the Same Zip Code All differences denoted with a number were statistically significant at the p < .05 level STUDENT RETENTION RATES This section examines student retention and attrition from a few different perspectives. All of the analyses focus on students enrolled in schools in the 6^th grade in either the 2007-08 or 2008-09 school years. Further, the two cohorts were combined into one group of students included in the analyses. These two years were selected because they were the two most recent years for which data was available to track retention from 6^th grade through two years laterβ€”presumably 8^th grade for most students. Prior years would have been included, but too few charter schools enrolled students in grades six through eight in prior years to yield sample sizes ;large enough for reliable estimates. This underscores the fact that even though charter schools have existed since 1997, very few have graduated complete cohorts of students over more than a few years. School and District Retention Rates In this study, student retention refers to students remaining enrolled in the same school rather than students being retained in the same grade. In the analyses below, student retention rates indicate the percentage of students remaining enrolled in the same school from 6^th grade through 6^th, 7^th, or 8^thgrade two years later. All students enrolled in the 6^th gradeβ€”regardless of previous enrollment in the particular CMO, were included in the analyses. Because this analysis examined two-year retention rates, only students enrolled in schools that had grades six, seven, and eight for at least two consecutive cohorts of students were included in the analysis So, for example, if a student was enrolled in the 6^th grade in 2008, then the school had to enroll at least 10 students in the 6^th grade in 2008, 10 students enrolled in the 7^th grade in 2009, and 10 students enrolled in the 8^th grade in 2010 to be included in the analysis. Because so few charter schools met this criteria prior to 2008, only the last two cohorts of 6th-grade to 8^th-grade of students were included in the analysis. Even focusing on just the last two cohorts removed a large number of charter schools because most charter schools simply have not been in existence for enough years to meet the criteria set forth. In addition, because some districts opened new schools and, consequently, large numbers of students moved from one middle school to another, schools with low school retention rates and high intra-district mobility rates were not included in the analysis. In general, any school with a district mobility rate of greater than 20% was excluded from the analysis. Ultimately, the inclusion or exclusion of such schools resulted in only a marginal difference in retention rates for comparison schools. The results for KIPP were complicated by the 6^th grade not being the lowest grade level in most KIPP middle schools. For three of the seven KIPP middle schools, the initial grade level was the 5^th grade, not the 6^th grade. These schools had already experienced initial attrition. Finally, the comparison set of schools employed in this particular analysis was all schools located in the same zip code as the charter school or in a zip code contiguous to the zip code in which the charter school was located. As shown in Table 1, three of the eight CMOs had a statistically significant lower two-year school retention rates than their comparison schools while one had statistically significantly greater two-year retention rate. The three with statistically significant differences (Harmony, Brooks, and School of Science and Technology), all had retention rates lower than the retention rates for traditional public schools. Harmony had a retention rate almost 20 percentage points lower than comparison schools while both Brooks and School of Science and Technology had retention rates 24and 27 percentage points lower than comparison schools respectively. Strikingly, Harmony lost more than 40% of 6^th grade students over a two-year time span while Brooks and School of Science and Technology lost about one-half of all 6^th grade students in a two-year time span. YES Prep had a slightly greater retention rate than comparison schools. The difference was slight at 2.6 percentage points. This difference disappeared when student transfers within the YES Prep CMO were not considered as students staying enrolled at the same school. Finally, when only traditionally configured middle schools were included in the KIPP analysis, the four schools serving grades 6 through 8 had a retention rate of 74.7%. This was only slightly lower than the overall retention rate and was still not statistically significantly different than the comparison set of schools for the four KIPP middle schools. Table 1: School Retention Rates (6^th Grade to Two Years After 6^th Grade) ^ p < .10; * Pp < .05; ** p < .01; *** p < .001 Table 2 includes the district-level two-year retention rates. In this analysis, a student staying within the same school district or same CMO was defined as being retained in the district. Note that these rates were identical to the school retention rates for CMOs because of the manner in which I coded the data. The traditional public comparison schools all had statistically significantly greater district retention rates than the CMOs. All of the differences were at least five percentage points, with the greatest differences reserved for Brooks Academy (35.8), School of Science and Technology (28.4), and Harmony (18.1). Table2: Student District Retention Rates (6^th Grade to Two Years After 6^th Grade) ^ p < .10; * Pp < .05; ** p < .01; *** p < .001 Stayers and Leavers by Test Scores While overall retention rates provide important outcome information as well as information that might affect school-level achievement, the characteristics of the students that stay or leave a school also provides critical information that may influence judgments about the academic efficacy of a particular school. For example, in examining the achievement of two schools that have the same retention rate, a school that loses lower performing students to a greater degree than higher performing students may artificially inflate overall performance levels as well as create a peer group effect that improves changes in improvement. The following four tables include the school retention rates for students in CMOs and in traditional public schools in the same geographic location. Some schools were excluded such as schools that had high attrition rates due to intra-district student migration patterns caused by school feeder pattern or boundary changes. This was based on a comparison of school retention rates and within-district migration over a three-year period. In general, schools that had at least 20% student migration to other schools within the district were excluded from the analysis. Further, schools without a regular accountability rating were excluded from the analysis since such schools typically serve students with disciplinary issues or that have special needs. Finally, charter schools were excluded from the comparison group, thus leaving only traditional public schools in the comparison group. The analysis was first conducted without excluding charter schools and retention rates were somewhat lower. Careful examination of the data indicated that Harmony and a few special setting charter schools had very low retention rates that were lowering the average of the comparison groups in the setting, albeit by only a few percentage points. Note that not all students had TAKS scores. Thus, the rates and data in these tables are not directly comparable to tables with the overall attrition rate. TAKS Mathematics Scores Table 3 compares the retention rates between CMOs and traditional public comparison schools for lower performing students on the 6^th grade mathematics examination. Students with z-scores less than -0.25 were designated as lower performing. Four CMOs had statistically significantly lower retention rates for lower performing students: YES Prep, Harmony, Brooks Academy, and School of Science and Technology. The differences for YES Prep and Harmony were moderately large at 5 and almost 11 percentage points, respectively. The differences for School of Science and Technology and Brooks Academy were very large at almost 23 and 31 percentage points respectively. Note that there was no statistically significant difference in retention rates for all students for YES Prep, but a statistically significant difference for lower performing students. Table 3: Student Retention Rates for Lower Performing Students on the 6^th Grade TAKS Mathematics Test for CMOs and Comparison Traditional Public Schools ^ p < .10; * Pp < .05; ** p < .01; *** p < .001 Table 4 compares the retention rates between CMOs and traditional public comparison schools for higher performing students on the 6^th grade mathematics examination. Students with z-scores greater than 0.25 were designated as lower performing. Four CMOs had statistically significantly lower retention rates for lower performing students: YES Prep, Harmony, Brooks Academy, and School of Science and Technology. The differences for YES Prep and Harmony were moderately large at 6 and almost 11 percentage points, respectively. The differences for School of Science and Technology and Brooks Academy were very large at almost 23 and 31 percentage points respectively. Note that there was a statistically significant difference in retention rates for lower performing students for YES Prep, but not a statistically significant difference for higher performing students. Closer inspection also reveals that YES Prep and Brooks Academy were the only CMOs to have lower retention rates for lower performing students and higher retention rates for higher performing students. Table 4: Student Retention Rates for Higher Performing Students on the 6^th Grade TAKS Mathematics Test for CMOs and Comparison Traditional Public Schools ^ p < .10; * Pp < .05; ** p < .01; *** p < .001 TAKS Reading Scores Table 6 compares the retention rates between CMOs and traditional public comparison schools for lower performing students on the 6^th grade reading examination. Students with z-scores less than -0.25 were designated as lower performing. The same three CMOs with lower retention rates for lower performing students in mathematics also had statistically significantly lower retention rates for lower performing students in reading: Harmony, Brooks Academy, and School of Science and Technology. Table 6: Student Retention Rates for Lower Performing Students on the 6^th Grade TAKS Reading Test for CMOs and Comparison Traditional Public Schools ^ p < .10; * Pp < .05; ** p < .01; *** p < .001 Table 7 compares the retention rates between CMOs and traditional public comparison schools for lower performing students on the 6^th grade reading examination. Students with z-scores less than -0.25 were designated as lower performing. The same three CMOs with lower retention rates for lower performing students also had lower retention rates for higher performing students. Note, however, that the difference between Brooks Academy and comparison schools was greater for lower performing students than for higher performing students, thus suggesting some selective attrition that may impact the distribution of test scores. There was also a statistically significant difference for YES Prep. Higher performing students on the reading test were more likely to remain at YES Prep than for comparison schools. This, coupled with the lower retention rates for lower performing students also suggests some selective attrition that may have impacted the distribution of student test scores for YES Prep. Table 7: Student Retention Rates for Higher Performing Students on the 6^th Grade TAKS Reading Test for CMOs and Comparison Traditional Public Schools ^ p < .10; * Pp < .05; ** p < .01; *** p < .001 Effects of Attrition on the Distribution of TAKS Scores If student attrition differs across students with various levels of student achievement, then student attrition may positively or negatively impact overall school student test scores. For example, if student attrition tends to be greater for lower performing than for higher performing students, the overall test score profile of a school could improve regardless of whether the remaining students had improved test scores. Alternatively, if attrition affects higher performing students to a greater degree than lower-performing students, then a school’s test score profile could appear lower than it might have been otherwise if the higher performing students had not left the school. Further, research suggests that peer effects have relatively powerful effects on student achievement. For example, all other factors being equal, an average performing student placed with a group of higher performing students will typically have greater gains in achievement than if placed with a group of lower performing students (need references). Thus, the intent of this section was to examine the effect attrition may have on the composition of students with respect to test scores. This analysis focused only on those students enrolled in a school in the 6^th grade and then enrolled in the 8^th grade two years later in the same school. The TAKS score ranges were based on the 6^th grade score of the student and the percentages in the table represent the distribution of students by the 6^th grade TAKS scores for all students enrolled in the 6^th grade and the students remaining in the school in the 8^th grade. Tables 8a, 8b, 8c, and 8d detail the distribution of TAKS mathematics and reading z-scores in the 6^th grade for all students enrolled in the 6^th grade in the 2007-08 and 2008-09 and the distribution of the z-scores for students remaining in the same school from 2007-08 through 2009-10 and from 2008-09 through 2010-11. The row labeled β€œOriginal Dist” includes the distribution of scores for all students enrolled in the 6^th grade and the row labeled β€œStayer Dist” displays the z-score distribution of 6^th grade scores for only those students that remained at the school for two years after the 6^th grade. The third row, labeled β€œStayer – Original” is the difference in the percentage of students in a particular category of z-scores between students staying at the school and the for all students originally enrolled. In this way, the data in the table document the changes in the distribution of 6^th grade scores due to the removal of students leaving the school. Before examining the impact of attrition on the distribution of scores, Tables 20a and 20b also detail the vast differences in the initial distribution of scores between CMOs and comparison schools. For all but UPLIFT and Inspired Vision, the CMOs had students that had distribution of scores that were much more favorable than for the comparison schools. For example, nearly 60% of the YES Prep 6^ th grade students were higher performing in mathematics as compared to only about 31% for comparison schools. In some cases, these rather substantial differences simply expanded due to attrition as will be described below. As shown in Table 8a and 8b, KIPP, YES Prep, and Harmony and their comparison schools all evidenced a similar pattern in which the percentage of lower performing students (those with z-scores lower than -0.25) decreased and the percentage of higher performing students (those with z-scores greater than 0.25) increased. While the overall trend was the same for all three CMOs, there were, however, some apparent differences in the re-distribution of scores between the CMOs and comparison schools. In mathematics, KIPP schools had a greater decrease in the percentage of lower- and average-performing students than comparison schools and a greater increase in the percentage of higher-performing students. The difference in the percentage of higher performing students, however,, was less than one percentage point. In reading, both KIPP and comparison schools had a decrease in the percentage of lower performing students, but the decrease was more than one percentage point greater for KIPP than comparison schools. KIPP also evidenced an increase of almost one percentage point in the percentage of average performing students while comparison schools had no increase. Finally, the increase in the percentage of higher performing students was 0.5 percentage points greater for KIPP than for comparison schools. Thus, the findings suggest a slightly greater re-distribution of scores upward for KIPP than for comparison schools. Alternatively, the differences between YES Prep and Harmony and the comparison schools was greater than the differences for KIPP and their comparison schools and the differences suggest an important re-distributional effect from attrition of students from YES Prep and Harmony. For YES Prep in mathematics, there was almost a five percentage point decrease in the lower- and average- performing students and almost a five percentage point increase in the percentage of higher performing students. Comparison schools had a 3.5 percentage point decrease in the percentage of lower performing students and a 0.5 percentage point increase in the percentage of average performing students and only a three percentage point increase in the percentage of higher performing students. Thus, the initial differences in performance at the 6^th grade starting point between YES Prep and comparison schools simply grew larger due to the differences in attrition between YES Prep and comparison schools. Thus, even if students made no progress on the TAKS tests, YES Prep would have appeared to have made greater progress due to differences in attrition across the distribution of scores. With respect to Harmony schools, there was a decrease in both the percentage of lower- and average-performing students and a commensurate increase in the percentage of higher-performing students. This was ore pronounced for mathematics than for reading. For comparison schools, there was a decrease in lower performing students in both subject areas, but no real change in the percentage of average performing students. Ultimately, the increase in the percentage of higher performing students in comparison schools was lower than the increase for Harmony schools. The differences, however, were only one percentage point. Thus, Harmony have benefited more from the re-distribution of scores due to attrition more than comparison schools, but the advantage would have been less than the advantage for YES Prep. Interestingly, there was no significant re-distribution of scores due to attrition for IDEA schools in either subject. For IDEA comparison schools, there was a slight upward re-distribution of scores. Thus, in this particular case, the comparison schools garnered a greater positive re-distributional effect from attrition than IDEA schools. Table 8a: Distribution of Students TAKS Mathematics and Reading Z-Score Ranges for Students Enrolled in the 6^th Grade and Students Remaining in the Same School (KIPP and YES Prep) Table 8b: Distribution of Students TAKS Mathematics and Reading Z-Score Ranges for Students Enrolled in the 6^th Grade and Students Remaining in the Same School (IDEA and Harmony) The remaining four CMOs and comparison schools are included in Table 8c and 8d. The re-distributional effects were greater for these four CMOs than for the first four CMOs. The smaller sample sizes and greater attrition rates are two reasons for these greater shifts in scores after attrition of students. In both subject areas, UPLIFT experienced a greater upward re-distribution of scores than for comparison schools, although the effect in reading was relatively small. The impact on mathematics scores, however, was substantial. Indeed, There was a 6.7 percentage point decrease in the percentage of lower performing students in UPLIFT and a 4.0 percentage point increase for average performing students and a 2.7 percentage point increase for higher performing students. This re-distributional shift was far more positive than for comparison schools. For Brooksβ€”which had an extremely high overall attrition rateβ€”there was a massive re-distribution of scores after the attrition of students. Specifically, in both subjects areas, there was greater than a 10 percentage point decrease in the percentage of lower performing students and an almost 8 percentage point increase in the percentage of higher performing students. For comparison schools, there was a small decrease in the percentage of lower performing students and a small increase in the percentage of higher performing students, but the changes were quite small in relation to the large changes for Brooks. Thus, Brooks Academy test scores improved rather dramatically for the two cohorts of students simply by losing large percentages of lower performing students. In mathematics, both Inspired Vision and comparison schools lost a substantial proportion of lower performing students (5 percentage points). The scores for Inspired Vision shifted into the average- and higher-performing categories while most of the increase was for the higher performing categories for the comparison schools. For reading, there was a similar result. For comparison schools, there was a decrease of around two percentage points for both lower- and average performing students and increase of four percentage points for higher performing students. This was much greater than the one percentage point increase for Inspired Vision. Thus, Inspired Vision comparisons schools benefited more from attrition than did Inspired Vision. Finally, the School of Science and Technology had a greater upward re-distribution of scores due to attrition than for comparison schools. For both mathematics and reading, the increase in the percentage of higher-performing students was about 1.5 percentage points. In mathematics, the School of Science and Technology also had a far greater decrease in the percentage of lower-performing students: 5.5 percentage points to 2.9 percentage points for comparison schools. Table 8c: Distribution of Students TAKS Mathematics and Reading Z-Score Ranges for Students Enrolled in the 6^th Grade and Students Remaining in the Same School (UPLIFT and Brooks Academy) Table 8d: Distribution of Students TAKS Mathematics and Reading Z-Score Ranges for Students Enrolled in the 6^th Grade and Students Remaining in the Same School (Inspired Vision and Scool of Science and Technology) While these shifts may appear relatively small, in many cases the re-distribution of scores simply exacerbated existing differences in the distribution of scores between CMOs and comparison schools as shown in previous sections. Further, the differences are compounded because they occur for every cohort of students in the school. The most pronounced effects of attrition may be to reinforce certain peer group effects. If students see that peers that β€œcannot cut it” systematically leave a school, then the remaining students may be more motivated to work even harder to ensure continued enrollment in their school of choice. How does this affect the performance levels reported by the state such as the percentage of students passing and percentage of students achieving commended status? The effects turn out to be quite similar. Figure 10 below shows that student attrition increased the percentage of students that had passed or met commended status on the 6^th grade mathematics test for YES Prep. In fact, the increase in the percentage of students that had met commended status increased from 57.1% to 62.0%. Again, this suggests a re-distribution of students after attrition such that lower-performing students were more likely to leave the school and higher-performing students were more likely to stay at the school. Figure 10: Change in the Percentage of Students Passing and Meeting Commended Status on the TAKS Mathematics Test after Student Attrition for YES Prep Middle Schools While these analyses do not reveal how student attrition impacts the actual scores, passing rates, and commended rates on the 8^th grade test, the fact that students passing or meeting the commended standard typically continue to meet those standards on future tests, the results do strongly suggest that attrition artificially increases passing and commended rates for some CMOs such as YES Prep. FINAL CONCLUSIONS AND DISCUSSION This study is a preliminary examination of high-profile/high-performing charter management organizations in Texas. Specifically, the study examined the characteristics of students entering the schools, retention/attrition rates; and,the impact of attrition/retention rates on the distribution of students. Contrary to the profile often portrayed in the media, by some policymakers, and by some charter school proponents (including some charter CEOs), the high-profile/high-enrollment CMOs in Texas enrolled groups of students that would arguably be easier to teach and would be more likely to exhibit high levels of achievement and greater growth on state achievement tests. Indeed, the above analyses showed that, relative to comparison schools, CMOs had: β€’ Entering students with greater prior TAKS scores in both mathematics and reading; β€’ Entering economically disadvantaged students with substantially greater prior TAKS scores in both mathematics and reading; β€’ Lower percentages of incoming students designated as ELL; β€’ Lower percentages of incoming students identified as special needs; and, β€’ Only slightly greater percentages of incoming students identified as economically disadvantaged. In other words, rather than serving more disadvantaged students, the findings of this study suggest that the high-profile/high-enrollment CMOs actually served a more advantaged clientele relative to comparison schoolsβ€”especially as compared to schools in the same zip code as the CMO schools. This is often referred to as the β€œskimming” of more advantaged students from other schools. While CMOs may not intentionally skim, the skimming of students may simply be an artifact of the policies and procedures surrounding entrance into these CMOs. Thus, the comparisons that have been made between these CMOs and traditional public schoolsβ€”especially traditional public schools in the same neighborhoods as the CMO schoolsβ€”have been β€œapples-to-oranges” comparisons rather than β€œapples-to-apples” comparisons. The public and policymakers need to look past the percentages of economically disadvantaged students and disabuse themselves of the notion that enrolling a high percentage of economically disadvantaged students is the same as having a large percentage of lower-performing students. In fact, despite a large majority of students entering the CMOs identified as economically disadvantaged, students at the selected CMOs tended to have average or above average TAKS achievement and certainly greater achievement levels than comparison schools. This was particularly true when comparing economically disadvantaged students in CMOs and traditional public schoolsβ€”the economically disadvantaged students in CMOs had substantially greater academic performance than the economically disadvantaged students in the comparison traditional public schools. There were few differences in attrition rates between CMO and comparison schools (with Harmony, Brooks Academy, and School of Science and Technology being the exceptions) and the attrition rates did not appear to advantage or disadvantage CMOs as a group relative to comparison schools. Three CMOs did appear to have selective attrition such that scores were artificially increased by the loss of lower performing students and the retention of higher performing students. These three CMOs were Brooks Academy, UPLIFT, and YES Prep. What is beyond the scope of this study is to determine the effect of β€œskimming” higher performing students from traditional public schools, the effect of selective attrition, and the effect of selective β€œback-filling” might have on student peer effects. If, in fact, academic achievement gains are driven by the impacts of these phenomena on peer effects, then policymakers would need to ask whether CMOs are assisting truly disadvantaged students or simply serving as voluntary magnet schools that have selective entrance and attrition. Ultimately, while far more detailed and sophisticated research needs to occur in this area, these preliminary results should raise serious questions about how the characteristics of incoming students and the effect of attrition might impact the achievement profiles of CMOs and other schools. These questions beg to be answered before state policymakers endeavor to further expand and provide greater support to such CMOs and local policymakers move to replicate charters or adopt charters to replace local schools. This study was commissioned by the Texas Business and Education Coalition. TBEC was formed by Texas business leaders to engage with educators in a long-term effort to improve public education in Texas. Since its formation in 1989, TBEC has become one of the state’s most consistent and important forces for improving education in the state. Conclusions are those of the author and do not necessarily reflect the views of TBEC, its members or sponsors. DATA AND METHODOLOGY This section provides a description of the data and methodology employed in this study. This study relied on three sources of data. The first source was student-level testing data from the Texas Education Agency (TEA). The data was purchased from TEA by TBEC for the purpose of examining important education topics in Texas. The second set of data was school-level information from the Academic Excellence Indicator System (AEIS).The third set of data was the Financial Allocation Study for Texas from the Texas State Comptroller’s Office. Student-Level Testing Data The testing data included information on students taking the Texas Assessment of Knowledge and Skills (TAKS) from spring 2003 through summer 2011 in grades three through 12. Student information includes the school and district in which the student was enrolled when s/he took the TAKS test, grade level, economically disadvantaged status, test score, score indicator, exemption status (e.g., special education exemption, Limited English Proficiency exemption, absent, etc.) and test type. Importantly, even if a student did not actually take the TAKS, such a student would be included in the data because an answer document was submitted for the student. Because of FERPA, some information was masked by TEA. However, the student was not removed from the data and the student was still associated with a particular school and district. School-Level AEIS data The school-level AEIS data included a wealth of information on schools in Texas, including charter status, district in which the school was located, the region of the state in which the school was located, the overall number of students, and the number and percentage of students with various characteristics (i.e., percentage of economically disadvantaged students, percentage of White students, percentage of Latino students, percentage of African American students, etc.) and participating in specific education programs (special education, bilingual education, English as a Second Language). Student Characteristics Included the Analyses The characteristics examined in this section include: 1. TAKS mathematics scores; 2. TAKS reading scores; 3. Economically disadvantaged status; 4. Spanish-Language TAKS test; 5. Exemption from TAKS for Limited English Proficiency (LEP) reasons; and 6. Special Needs students as identified by the type of TAKS tests taken by the student. TAKS Scores Because metrics such as passing TAKS, achieving commended status on TAKS, and scale scores on TAKS vary over time, with students in later cohorts being more likely to have passed, attained commended status, or achieved a higher scale score than students in earlier cohorts, such metrics would not provide an appropriate measure to use across multiple cohorts of students. In order to compare achievement levels in a defensible manner, the TAKS scale scores were standardized across years and test administrations so schools with more students in later cohorts and fewer in earlier cohorts would not have artificially greater scores and schools with more students in earlier cohorts would not have artificially lower scores. To standardize the TAKS scores over time, the scale scores were converted to z-scores for each grade level and year. Further, z scores were calculated separately for students taking different versions of the test. Thus, a separate z-score was calculated for all students taking the standard TAKS, TAKS-modified, and TAKS-alternative versions of the test for each grade level and each subject area. Economically Disadvantaged Status In Texas, economically disadvantaged is determined by participation in the federal free-/reduced-price lunch program. In addition, a student can be identified as economically disadvantaged if she or he is eligible for other public assistance programs intended for families in poverty. In the data provided by TEA, the district in which the student enrolled identified whether or not a student was classified as economically disadvantaged. English Language Learner Status English Language Learner (ELL) students were identified in two ways. First, students were identified by having taking the Spanish-language version of the TAKS. The Spanish version was available in grades three through six. Second, the test score code provided by the state also identified those students exempted from testing for Limited English Proficiency (LEP) reasons. Thus, measures four and five were collapsed into one measure identifying students as English-Language Learner students. While there was some overlap between the two groups, only 7% of the students taking the Spanish TAKS were also identified as being LEP exempt. Ultimately, a student was identified as ELL if the student (a) took the Spanish-language TAKS in the previous year or (b) was exempted from TAKS testing because of LEP reasons. Special Needs Students Special needs students were identified by the type of TAKS test taken by the student. Unfortunately, this data was only available in academic years 2008 through 2010. In previous years, a substantial proportion of students identified as special needs were placed into a separate file by the state. To comply with FERPA, the student identifier process was different than the one employed for non-special needs students. Thus, the two files could not be merged together which prohibited the use of the data in years prior to 2008. With respect to the different types of TAKS tests, the TAKS-modified (TAKS-M) and TAKS-alternate (TAKS-A) tests were developed for students that require some alternate test form based on either modifications or an alternative assessment strategy to meet the needs of the student under either Section 504 of the Rehabilitation Act of 1973 or under an Individual Education Plan. Unfortunately, relying on test type does not directly assess the number of students in special education. Some special education students do not require any special test modifications while other students (such as those with a 504 plan) may require modifications but not be designated as special education. Thus, the students taking either a TAKS-A or TAKS-M test were not designated as special education, but rather as having special needs with respect to state standardized assessments. This is an important distinction because many 504 and special education students need only minimal changes in instruction and additional assistance while those requiring special testing are far more likely to require extra attention and assistance by educators. TEA described the TAKS-modified test in the following manner: The Texas Assessment of Knowledge and Skills–Modified (TAKS–M) is an alternate assessment based on modified academic achievement standards designed for students who meet participation requirements and who are receiving special education services. TAKS–M has been designed to meet federal requirements mandated under the No Child Left Behind (NCLB) Act. According to federal regulations, all students, including those receiving special education services, will be assessed on grade-level curriculum. TAKS–M covers the same grade-level content as TAKS, but TAKS–M tests have been changed in format (e.g., larger font, fewer items per page) and test design (e.g., fewer answer choices, simpler vocabulary and sentence structure). (Retrieved from http://www.tea.state.tx.us/student.assessment/special-ed/taksm/ TEA described the TAKS-alternate test in the following manner: TAKS–Alternate (TAKS–Alt)is an alternate assessment based on alternate academic achievement standards and is designed for students with significant cognitive disabilities receiving special education services who meet the participation requirements for TAKS–Alt.* This assessment is not a traditional paper or multiple-choice test. Instead, it involves teachers observing students as they complete state-developed assessment tasks that link to the grade-level TEKS. Teachers then evaluate student performance based on the dimensions of the TAKS–Alt rubric and submit results through an online instrument. This assessment can be administered using any language or other communication method routinely used with the student. (Retrieved from http://www.tea.state.tx.us/student.assessment/taks/accommodations/) While other measures of special needs or special education designation would certainly be important as well, such measures were not available in the data procured from the Texas Education Agency. The measures included in the analyses were selected for two reasons: First, because the data can be used to directly address claims of charter school proponents; and, second, research suggests each measure is associated in some manner with school-level test score levels as well as school-level growth. Understanding Z-Scores Transforming the TAKS scale scores into z-scores not only controls for differences in scores across time (scale scores typically increase for the same grade level in each successive year), z scores have some important properties that allow for arguably better and easier to understand comparisons of average scores between schools. First, transforming scale scores into z scores creates a set of scores that are normally distributed as in the well-known bell-shaped curve. Second, placing the scale scores into z-scores creates some useful properties that make comparisons across schools easier. For example, a z-score distribution creates a mean of zero. Thus, the average student in a cohort of students has a z-score of zero. Thus, once the scale scores are converted into z scores, the z scores indicate how far a student’s score is from the statewide average. If a student had a positive z-score, then the student’s score was above average. If a student had a z-score that was negative, then the student’s score was below average. Not only does the z-score indicate direction, but also magnitude. So, for example, if a student had a z-score of 1.0, then that student had a score that was 1.0 standard deviations greater than the score for the average student. If a student had a z-score of -0.45, then that student had a score that was 0.45 standard deviations lower than the score for the average student. Third, we know that a certain percentage of students fall within each standard deviation. For example, we know that 34.1% of students will have a TAKS z-score between 0.0 and +1.0 and 34.1% of students will have a TAKS z-score between 0.0 and -1.0. Because of this characteristic of normal curves, we can translate the z-scores into percentile rankings. For example, if a student has a TAKS z-score of +2.0, then we know that only about 2% of students have a greater TAKS z-score and 98% of students have a lower TAKS z-score. If a student has a TAKS z-score of -1.0, then we know that about 85% of students have a greater TAKS z-score and about 15% of students have a lower TAKS z-score. One difficulty in examining charter school students is determining the appropriate comparison group of students. Often, state education agencies, charter school representatives, and media personnel compare charter schools and students to all other schools and students. While such a comparison provides some useful information, such comparisons are flawed because charter schools are located in distinct locations. As such, enrollment in a charter school is typically limited to those students that live relatively close to the charter school. So, for example, a student living in Texarkana cannot enroll in a charter school in Houston. Thus, most researchers employ a different comparison group of schools and students rather than simply all schools or students. Thus, this study examines four different comparisons: all Texas students, same geographic location, same zip code, and sending schools. Each of the four comparisons is described in more detail below. 1) All Texas Students One comparison made in this report was charter schools and charter students compared to all schools and all students in Texas. Thus, for example, the incoming characteristics of students entering a charter school in the 6^th grade would be compared to the characteristics of all 5^th grade students in all Texas schools. 2) Same Geographic Location The second comparison employed in this analysis was between a selected charter school and all schools in the same geographic location. The same geographic location was defined as all schools located in the zip code in which the charter school was located or in a zip code contiguous to the zip code in which the charter school was located. 3) Same Zip Code The third comparison employed in this analysis was between a selected charter school and all schools in the same zip code. Thus, a charter school and the students within the charter school were compared to only schools and students within schools located in the same zip code as the charter school. 4) Sending School The final comparison employed in the analysis was between a selected charter school and the students in that charter school to schools and the students within the schools that sentat least one student to the selected charter school over the given time period. This comparison was used by both Mathematica (2010)in their analyses of charter schools. Such a comparison seems most appropriate when comparing characteristics of students, characteristics of schools, or student performance. Selected Charter Schools This study focused on nine charter schools that served students in grades four through eight. Most were charter management organizations (CMOs) that included multiple schools and had schools across the different grade levels such that the CMOs served students from kindergarten through the 12^th grade. These charter schools are referred to as charter management organizations (CMOs) in this paper even though, in some cases, only one school from a CMO was included in an analysis. Table 2 lists, in descending order, the CMOs with the greatest number of incoming 6^th grade students from Texas public schools and other schools not in the Texas public school system for the years 2005 through 2011. The number of incoming students excluded students already enrolled in that particular CMO in the previous grade. Thus, for example, a student enrolled in KIPP in the 5^th grade and then the 6^th grade was not identified as an incoming student for KIPP in the 6^th grade. Ultimately, nine of the 15 CMOs with the largest number of incoming 6^th grade students were included in this study. These CMOs appear in bold in the table below. CMOs that utilized on-line or distance education were excluded from the study as were CMOs that focused on students at-risk of dropping out of school. Overall, these nine CMOs enrolled almost 60% of all of the incoming 6^th grade students into Texas charter schools. The two largest CMOsβ€”Harmony and Yes Prepβ€”both accounted for 15% of all students entering charter schools in the 6^th grade. Two schoolsβ€”Responsive Education and Southwest Virtual Schoolβ€”were not included in subsequent analyses because creating a set of comparison schools based on zip codes simply did not apply to a virtual school. Houston Gateway could conceivably have been selected for inclusion, but Inspired Vision had a greater percentage of incoming students with data from the 5^th grade than the Gateway charter. The Radiance Academy could have been chosen, but it was unclear as to whether the Radiance charter was associated with other charters with the same or similar names. Thus, rather than risk making a mistake in correctly identifying the complete set of schools for the CMO, I selected the next school on the list which was Inspired Vision. 19 Responses β€œMiddle School Charters in Texas: An Examination of Student Characteristics and Achievement Levels of Entrants and Leavers” β†’ 1. This is significant work, Dr. Fuller, and I am glad to have come across it this morning. I would like to conduct a similar study here in Georgia. Perhaps we can collaborate. 2. Hello, I have information that could help you redo the analysis of the KIPP schools. KIPP does not have a single middle school that begins with the sixth grade as a starting year. They all start at the fifth grade. Instead, to get the fifth grade data on KIPP middle schools that seem to start in the sixth grade, you need to search for a data that appears to be coming from a KIPP elementary feeder school. As an example, KIPP Academy in Houston’s TAKS scores are reported 6-8, but that is because KIPP Academy’s 5 grade is under a different charter. I think this is a documentation quirk that results from middle schools traditionally starting in the 6th grade in Texas. A further complication is that it wouldn’t make sense to use the KIPP Academy data starting in 2010 or KIPP Sharpstown data starting 2012 because they now both enroll students who have gone to KIPP elementary schools. The TAKS scores of incoming students will now reflect what KIPP elementary schools have done. Hope this helps. β–‘ I’d like to see how the difference in TAKS scores between incoming fifth grade KIPP students and neighborhood schools’ incoming fifth graders change when you include data from ALL the KIPP middle schools. KIPP also should not be part of the analysis for incoming sixth graders because there are no KIPP schools that start in the sixth grade. For future research, I wonder if there’s a way to measure if there is a statistical difference in students entering Texas charter schools based on student behavior. For example, if an elementary school sends ten students to KIPP, are they reflective of their elementsry’s school’s population in terms of behavior… Number of days absent, number of referrals, etc. I imagine this would be difficult to do. β˜† I included all KIPP schools in the analysis. In examining incoming 5th and 6th grade students, I excluded all students previously enrolled in KIPP. So, if a student was enrolled in KIPP in the 4th grade, then the student was excluded from the analysis for incoming 5th graders. Same with a student enrolled in KIPP in the 5th grade and then enrolled in a new KIPP school in the 6th grade. In fact, no matter what, if a student was enrolled in any KIPP school in the previous year, they were excluded from the analysis. 15 Trackbacks For This Post
{"url":"http://fullerlook.wordpress.com/2012/08/23/tx_ms_charter_study/","timestamp":"2014-04-20T21:13:39Z","content_type":null,"content_length":"152581","record_id":"<urn:uuid:09ecfbbb-156a-46cf-bf84-7f5a062eea66>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
Huntington Beach Prealgebra Tutor ...My name is Anna and am currently an undergraduate student at Chapman University. I am in the process of getting my bachelors in their Integrated Educational Studies program and a Bachelors in Mathematics. I plan to be a math teacher and am currently employed at a local tutoring center. 18 Subjects: including prealgebra, reading, algebra 1, geometry ...I have two girls (10 and 12 years old), that also love math, and are pretty good at it. Math runs in our veins. I also speak Spanish, and I can reach the Latin community. 7 Subjects: including prealgebra, algebra 1, algebra 2, precalculus ...For tests or quizzes I review sample tests with the students and discuss test-taking strategy. I don't believe in studying just to pass a test (and forget the material later) but put value on learning and understanding the material. I specialize in college level finance, accounting, statistics, and economics classes as well as high school math. 24 Subjects: including prealgebra, statistics, geometry, accounting ...I can tailor my lessons around your child's homework or upcoming tests and stay synchronized. Your child's skills will be improved in a few sessions. I am organized, professional and friendly. 14 Subjects: including prealgebra, reading, Spanish, ESL/ESOL Graduating Magna Cum Laude with a degree in Mathematics can only begin to demonstrate my passion for math. I believe understanding mathematical concepts is crucial for success in life, no matter what your long term goals are! Critical thinking can be used anywhere. 7 Subjects: including prealgebra, algebra 1, algebra 2, precalculus
{"url":"http://www.purplemath.com/Huntington_Beach_prealgebra_tutors.php","timestamp":"2014-04-21T10:34:20Z","content_type":null,"content_length":"24236","record_id":"<urn:uuid:219f663b-250f-4242-97e3-a984e6775e56>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
TR06-031 | 27th February 2006 00:00 On the Approximation and Smoothed Complexity of Leontief Market Equilibria We show that the problem of finding an \epsilon-approximate Nash equilibrium af an n*n two-person game can be reduced to the computation of an (\epsilon/n)^2-approximate market equilibrium of a Leontief economy. Together with a recent result of Chen, Deng and Teng, this polynomial reduction implies that the Leontief market exchange problem does not have a fully polynomial-time approximation scheme, that is, there is no algorithm that can compute an \epsilon-approximate market equilibrium in time polynomial in m, n, and 1/\epsilon, unless PPAD \subseteq P. We also extend the analysis of our reduction to show, unless PPAD \subseteq RP, that the smoothed complexity of the Scarf's general fixed-point approximation algorithm (when applying to solve the approximate Leontief market exchange problem) or of any algorithm for computing an approximate market equilibrium of Leontief economies is not polynomial in n and 1/\sigma, under Gaussian or uniform perturbations with magnitude
{"url":"http://eccc.hpi-web.de/report/2006/031/","timestamp":"2014-04-20T08:14:37Z","content_type":null,"content_length":"19948","record_id":"<urn:uuid:184196ea-997d-4a82-bafc-f2ca469ad522>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
Mount Zion, GA Prealgebra Tutor Find a Mount Zion, GA Prealgebra Tutor ...I have a degree in Global Studies with a minor in German from the University of West Georgia. In this program I studied everything from psychology to Geography of the European Union. A favorite subject of mine was Art History, which provided me with knowledge of history and culture through art. 40 Subjects: including prealgebra, reading, English, geometry I am attending Jacksonville State University to complete my degree in secondary math education. I have tutored many students in algebra who are high school students and students in college. I enjoy teaching math and helping others learn math. 9 Subjects: including prealgebra, calculus, algebra 1, geometry ...I have been a math tutor in the past for subjects such as Algebra I, II, and geometry. I have also assisted students in studying for the SAT and ACT. My scores for both tests were 1980 (super scored SAT) 1880 (highest score SAT) and 28 (ACT). I am a member of Pi Sigma Alpha, the Political Science Honor Society. 7 Subjects: including prealgebra, geometry, algebra 1, SAT math I recently graduated from MIT with a Bachelor of Science in Brain and Cognitive Sciences and am currently taking a gap year before applying to graduate school. In high school, I graduated as valedictorian, and I have experience tutoring math subjects ranging from pre-algebra to AP Calculus. Additi... 18 Subjects: including prealgebra, English, algebra 2, calculus ...I have taught part time at Chattahoochee Community College and currently teach part time at Georgia Highlands College. I love mathematics but also appreciate the struggles that so many students have with this most important subject. I guarantee that I can make mathematics understandable to you or your child. 12 Subjects: including prealgebra, calculus, geometry, algebra 1 Related Mount Zion, GA Tutors Mount Zion, GA Accounting Tutors Mount Zion, GA ACT Tutors Mount Zion, GA Algebra Tutors Mount Zion, GA Algebra 2 Tutors Mount Zion, GA Calculus Tutors Mount Zion, GA Geometry Tutors Mount Zion, GA Math Tutors Mount Zion, GA Prealgebra Tutors Mount Zion, GA Precalculus Tutors Mount Zion, GA SAT Tutors Mount Zion, GA SAT Math Tutors Mount Zion, GA Science Tutors Mount Zion, GA Statistics Tutors Mount Zion, GA Trigonometry Tutors Nearby Cities With prealgebra Tutor Bowdon Junction prealgebra Tutors Buchanan, GA prealgebra Tutors Centralhatchee, GA prealgebra Tutors Ephesus, GA prealgebra Tutors Felton, GA prealgebra Tutors Graham, AL prealgebra Tutors Heflin, AL prealgebra Tutors Ranburne prealgebra Tutors Roopville prealgebra Tutors Sargent, GA prealgebra Tutors Tallapoosa, GA prealgebra Tutors Temple, GA prealgebra Tutors Whitesburg, GA prealgebra Tutors Winston, GA prealgebra Tutors Woodland, AL prealgebra Tutors
{"url":"http://www.purplemath.com/Mount_Zion_GA_prealgebra_tutors.php","timestamp":"2014-04-18T19:17:01Z","content_type":null,"content_length":"24225","record_id":"<urn:uuid:7770d875-4496-4059-89f5-2e49be01d109>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra for College Students 7th edition by Blitzer | 9780321758927 | Chegg.com Algebra for College Students 7th edition Details about this item Algebra for College Students: The Blitzer Algebra Series combines mathematical accuracy with an engaging, friendly, and often fun presentation for maximum appeal. Blitzer’s personality shows in his writing, as he draws readers into the material through relevant and thought-provoking applications. Every Blitzer page is interesting and relevant, ensuring that students will actually use their textbook to achieve success! Back to top Rent Algebra for College Students 7th edition today, or search our site for Robert F. textbooks. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Pearson.
{"url":"http://www.chegg.com/textbooks/algebra-for-college-students-7th-edition-9780321758927-0321758927?ii=14&trackid=a10cffac&omre_ir=1&omre_sp=","timestamp":"2014-04-16T17:00:06Z","content_type":null,"content_length":"21140","record_id":"<urn:uuid:bc5428a2-b1ce-43bc-9eaf-3664bce5b128>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
//// FIGLEAF SOFTWARE RGB to Hex Math.rgbToHex = function(r,g,b){ return(r<<16 | g<<8 | b);} Converts three separate integers representing the rgb values of a color to a hexadecimal representation of the same color. Any one color is composed of 3 color components; red, green and blue. Each color component can hold a value from 0 to 255. When working with color, a component's value can generally be written 3 different ways. For example's sake, let's assume we are working with an ORANGE color with the values listed in the example below. The green component of our orange color is at 50% of its maximum brightness. In a base ten system ORANGE's green component is written as 128 (half of 256). Very simple. In a base 16 number system 128 is written as 80 which means 8 sets of 16 which is, you guessed it, 128. The third way to write 128 is to use binary (base 2 system). In binary the number 128 is represented as 10000000 which is another way of saying 1 times 2 to the 7th power. Where this is all β”‚ Format = Red : Green : Blue β”‚ β”‚ Base10 ORANGE = 255 : 128 : 000 β”‚ β”‚ Base16 ORANGE = FF : 80 : 00 β”‚ β”‚ Base2 ORANGE = 11111111 : 10000000 : 00000000 β”‚ β”‚ β”‚ β”‚ β”‚ Let's say we want to convert our three ORANGE color components (Red=255, Green=128, Blue=0) to one hexadecimal (base 16) value (0xFF8000). The "rgbToHex" function first converts all the RGB values to binary numbers. This happens automatically whenever the bitwise << operator is used. Since our ORANGE color has a red value of 255 the red component is converted to 11111111 binary. The green value of 128 is converted to 10000000 binary and the blue value of 0 is essentially unchanged or converted to 00000000. That gives us {r:11111111, g:10000000, b:00000000}. Now that we have the binary equivalents of our RGB values we can start manipulating them. The R<<16 part of the function shifts the red value 16 places to the left and fills in the blank spaces with zeros giving us 111111110000000000000000. The G<<8 part of the function shifts the green value 8 places to the left 1000000000000000. The blue value does not shift (there's no << after the b). The "|" operator then tallies the column of numbers, similar to adding them but not quite the same. If any column of numbers has a 1 in it the result for that column is a 1. β”‚ 111111110000000000000000 β”‚ β”‚ 1000000000000000 β”‚ β”‚ 00000000 β”‚ β”‚ +_______________________ β”‚ β”‚ 111111111000000000000000 β”‚ β”‚ β”‚ β”‚ β”‚ The resulting binary number 111111111000000000000000 is equivalent to the hexadecimal number 0xFF8000 and the base 10 number 16744448. And that's how you get from RGB notation to a hexadecimal color value. Are we having fun yet? Good, I knew we were. // PiXELWiT REVERSE ENGINEERING Math.hexToRGB = function(hex){ var red = hex>>16; var grnBlu = hex-(red<<16); var grn = grnBlu>>8; var blu = grnBlu-(grn<<8); return({r:red, g:grn, b:blu}); Breaks a hexadecimal representation of a color into its three color components. Math.hexToRGB = function(hex){ In keeping with our ORANGE example lets say we pass our hexToRGB function the hexadecimal color 0xFF8000. This line first converts our hexadecimal 0xFF8000 to binary 111111111000000000000000 by applying the >> operator then it removes the last 16 digits leaving 11111111. var grnBlu = hex-(red<<16); This adds 16 zeros to the end of red then subtracts that value from hex (111111110000000000000000-111111111000000000000000) leaving grnBlu with a value of 1000000000000000. The green value is extracted from grnBlu by chopping the last 8 digits off grnBlu which leaves green with a value of 10000000. var blu = grnBlu-(grn<<8); This adds 8 zeros to the end of grn and subtracts grn from grnBlu (1000000000000000-1000000000000000) leaving blu with a value of 0 or 00000000. return({r:red, g:grn, b:blu}); The final line returns a generic object with three properties called r, g, and b which hold their respective numeric color values.
{"url":"http://www.actionscript.org/resources/articles/162/2/Color-fade/Page2.html","timestamp":"2014-04-20T10:46:35Z","content_type":null,"content_length":"31543","record_id":"<urn:uuid:7179fd19-a6df-4588-991f-a2fa6fa071ac>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Graham, WA Precalculus Tutor Find a Graham, WA Precalculus Tutor ...I also focus on test-taking skills: how best to work and present an answer as well as the best methods of 'relaxing' during a test. I approach the material in a patient, non-threatening manner, and try to accommodate the student's needs and time-table to the best of my ability. It is my goal to make math an interesting, if not enjoyable subject. 12 Subjects: including precalculus, chemistry, geometry, ASVAB ...I have been learning French for more than 6 years. I can also help with programming. I earned a Bachelor of Science in Computer Science and in Computer Engineering at UW in Tacoma. 16 Subjects: including precalculus, chemistry, calculus, algebra 2 With my teaching experience of all levels of high school mathematics and the appropriate use of technology, I will do everything to find a way to help you learn mathematics. I can not promise a quick fix, but I will not stop working if you make the effort. -Bill 16 Subjects: including precalculus, calculus, statistics, geometry ...Regards, GlennAlgebra, is one of the broadest parts of mathematics, along with geometry, trigonometry, number theory, calculus, etc. to work and study science. Literally all aspects of mathematics use algebraic rules to accomplish computations. Regardless what field of study you are interested in mastering, you will find algebraic ideas and rules intertwined inside. 45 Subjects: including precalculus, chemistry, physics, calculus ...I do not expect anyone to learn anything in one session or me to be able to teach and overview of months worth of work in one session. I can help you succeed, but it takes work. Some people, unfortunately, do not have the patience to take time with this area of learning and, unfortunately, that was reflected in recent feedback I received. 25 Subjects: including precalculus, reading, chemistry, geometry
{"url":"http://www.purplemath.com/graham_wa_precalculus_tutors.php","timestamp":"2014-04-17T07:26:24Z","content_type":null,"content_length":"24058","record_id":"<urn:uuid:80c74e8b-d949-4580-95ce-6b43cbb9baa2>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Looking for Advanced Order of operations sheets. February 21st 2010, 12:04 PM #1 Junior Member Dec 2009 Looking for Advanced Order of operations sheets. Many times when im doing calculus i do the calculus correct , but i simplify wrong i belive this is because i am not always certain what is illegal to add multiply or divide to what . So i was wondering if anyone has found or knows of some kind of worksheet online where i can practice problems that involve simplying the results of the quoteint rule or product rule. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/calculus/129974-looking-advanced-order-operations-sheets.html","timestamp":"2014-04-19T01:09:58Z","content_type":null,"content_length":"28509","record_id":"<urn:uuid:7233d342-09b4-4786-bfa7-0b234bc63ddb>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
Household Calculator Calculator for household use Windows AllPlatform : USD $5Price : 1.33 MBFile Size : Popularity : 12/13/2008Date Added : Household Calculator is a general purpose ergonomic calculator which combines use simplicity and calculation power. Household Calculator handles main arithmetic operations with two operands, addition, subtraction, multiplication, division, and complex formulas with unlimited number of operations and operands. [ More downloads from
{"url":"http://www.topshareware.com/Household-Calculator-download-65817.htm","timestamp":"2014-04-17T12:32:26Z","content_type":null,"content_length":"16406","record_id":"<urn:uuid:d328f27b-1657-47c8-b750-0c0d52a84c5e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
Previous | ToC | Next Labs: Probability & Statistics. Part 1. Math Alive Playing Craps Here we present the rules for playing the game of craps in our simulation below. When a player rolls the dice for the first time, any combination of the two dice that adds up to 7 or 11 is a winner. Any dice total that equals 2, 3, or 12 is an immediate loser and is called craps. If the first roll is not an immediate winner or a loser, the total of the dice becomes known as the point. For all successive rolls, the player will win a game if the point is rolled again. However, if a 7 is rolled before the point is rolled, the player craps out. Below you can play craps. It will count for you the total number of wins and losses. If you want to restart the count, click on the "Start Over" button. The Game of Craps Let us try to calculate the probability of winning. We can use the probabilities we calculated on the previous page. The probability of winning on the first roll is the probability of rolling 7 or 11, which is 1/6 plus 1/18, which equals to 2/9. Suppose we roll 4 on the first roll (the probability of rolling 4 is 1/12). On each successive roll the probability of rolling 7 is 1/6 and the probability of rolling 4 is 1/12. That is, on each successive roll the probability of losing is twice that of winning. That means, that on several rolls we are twice as probable to lose as to win. That is, the probability of winning after we rolled 4, is 1/3. Hence, the probability of rolling 4 and winning is 1/12 times 1/3, that is 1/36. Continuing in the same manner we can count the overall probability of winning. Can you do that? Previous | ToC | Next Last Modified: August 2008
{"url":"http://web.math.princeton.edu/math_alive/3/Lab1/Playing.html","timestamp":"2014-04-19T19:57:43Z","content_type":null,"content_length":"3069","record_id":"<urn:uuid:6986b67a-7b05-43f9-908b-691909b1ee97>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Course Hero has millions of student submitted documents similar to the one below including study guides, practice problems, reference materials, practice exams, textbook help and tutor support. Find millions of documents on Course Hero - Study Guides, Lecture Notes, Reference Materials, Practice Exams and more. Course Hero has millions of course specific materials providing students with the best way to expand their education. Below is a small sample set of documents: U. Houston - ECE - 6340 ECE 6340Fall 2013Homework 11) Show that v = 0 for a time-harmonic (sinusoidal) field in a source-free homogeneousregion of matter, having an arbitrary complex permittivity . Homogeneous means thatthe material properties do not vary with position insi UChicago - MATH - 19620 ve of f at xm . What is the smallest valueof n for which we are guaranteed to be able to find such a function f ?(A) 1(B) 2(C) 3(D) 4(E) 5Your answer:(5) (*) Let k , m, and n be natural numbers with m β‰₯ 3. We are given a bunch of numbers x Berkeley - UGBA - 121 lan. Shea lso elects to have $1,300 of her salary paid into the flexible benefits plan.B ecause her medical costs were lower than expected, Janet gets back only $1,250 ofthe $1,300 she paid into the plan. What is Janet's gross income for the curre U. Houston - ECE - 2317 DO NOT BEGIN THIS EXAM UNTIL TOLD TO STARTName: _ECE 2317Applied Electricity and MagnetismExam 1March 19, 20131. This exam is closed-book and closed-notes notes. A formula sheet isprovided.1. Show all of your work. No credit will be given if the w U. Houston - ECE - 2317 DO NOT BEGIN THIS EXAM UNTIL TOLD TO STARTName: _SOLUTION_ECE 2317Applied Electricity and MagnetismExam 1March 10, 20111. This exam is closed-book and closed-notes notes. A formula sheet isprovided.2. Show all of your work. No credit will be given U. Houston - ECE - 2317 DO NOT BEGIN THIS EXAM UNTIL TOLD TO STARTName: _SOLUTION_ECE 2317Applied Electricity and MagnetismExam 1Oct. 18, 20121. This exam is closed-book and closed-notes notes. A formula sheet isprovided.2. Show all of your work. No credit will be given U. Houston - ECE - 2317 ECE 2317Applied Electricity and MagnetismFall 2013Homework #41)12)b3) =V= Edrab0.05ln = 0.0127 [V]aSee plot in (2).4) (a) 50 (b) 50 (c) 3.6 (d) 7.2 (e) 75 e r (r 2 + 2r + 2) 2 =5) E r [V/m]r 2 0 v 0 r r [V/m], r a3 0 3 v 0 a r U. Houston - ECE - 3317 DO NOT BEGIN THIS EXAM UNTIL TOLD TO STARTName: _PeopleSoft ID: _ECE 3317Applied Electromagnetic WavesExam IIApril 25, 20131. This exam is open book and open notes. However, you are not allowedto use a computer or any electronic device other than U. Houston - ECE - 3317 DO NOT BEGIN THIS EXAM UNTIL TOLD TO STARTName: _PeopleSoft ID: _ECE 3317Applied Electromagnetic WavesMarch 19, 20131. This exam is open book and open notes. However, you are not allowedto use a computer or any electronic device other than a calcul U. Houston - ECE - 3317 DO NOT BEGIN THIS EXAM UNTIL TOLD TO STARTName: _Solution_PeopleSoft ID: _ECE 3317Applied Electromagnetic WavesMarch 22, 20121. This exam is open book and open notes. However, you are not allowedto use a computer or any electronic device other than U. Houston - ECE - 3317 ECE 3317Spring 2013Homework #11This homework assignment is not to be turned in, but it is strongly recommended thatyou work all of the problems, as this material will be covered on the final exam.1) Consider an open-circuited transmission line of len U. Houston - ECE - 3317 ECE 3317Spring 2013Homework #10Answers1) Find the frequency ranges for a single TE10 mode of operation for an air-filled rectangularwaveguide whose dimensions are:(a) 0.9 0.4 inches (X-band)(b) 0.10 0.05 inches (W-band)(a) 6.562 GHz 13.123 GHz(b) U. Houston - ECE - 3317 ECE 3317Spring 2013Homework #9Answers1) A plane wave in free space has an electric field given byE=( 4 x 4 y )0.8 z e( j kx x + k y y + kz z),where k x = 0.4k0 and k y = 0.2k0 .Determine k z and the k vector, and verify that the electric fie U. Houston - ECE - 3317 ECE 3317Spring 2013Homework #8Answers1) Calculate the skin depth and the surface impedance for aluminum at a frequency of 2.45[GHz] (this is the frequency of a microwave oven). The conductivity of aluminum istaken as 2.0 107 [S/m]. Aluminum is nonma U. Houston - ECE - 3317 ECE 3317Spring 2013Homework #7Answers1) How long does it take light to travel from the surface of the earth to the following:(a) An overhead aircraft at an altitude of 35,000 feet(b) An overhead geostationary satellite (altitude 36,000 km)(c) The m U. Houston - ECE - 3317 ECE 3317Fall 2012Homework #6AnswersNote: In this homework set we are in the sinusoidal steady-state (phasor domain). Please turn inSmith charts for each problem, clearly showing your work.1) From the Smith chart find L for the following ZLN:(a) 0.6 U. Houston - ECE - 3317 ECE 3317Fall 2012Homework #5AnswersNote: In this homework set we are in the sinusoidal steady-state (phasor domain).1) Use the formula for the load reflection coefficient to show thatL 1.The load reflection coefficient formula isZ Z0L = L.Z L + U. Houston - ECE - 3317 ECE 3317Spring 2013Homework #4Note: In all plotting problems, you will be graded on both the accuracy and the quality o f yourplots. Your plots should be drawn neatly and to scale. Please use graph paper.1) Use the formula for the load reflection coe U. Houston - ECE - 3317 ECE 3317Fall 2012Homework #3AnswersNote: In all plotting problems, you will be graded on both the accuracy and the quality of yourplots. You plots should be drawn neatly and to scale. Please use graph paper.1) Starting with the telegraphers equation U. Houston - ECE - 3317 ECE 3317Spring 2013Homework #1Answers1) Given c1= 2 + j and c2 =2 + j 3 , calculate the following:(a)(b)(c)(d)c1 + c2c1 c2c1c2c1 / c2Give the answers in both rectangular and polar forms.(a) c1 + c2 = j 4 = 4ej2(b) c1 c2 =4 j 2 =4.472e j 0 U. Houston - ECE - 3317 ECE 3317Spring 2013Homework #2Answersxy 1) If B ( x, y, z ) = 2 x + xy 2 z 3 y + x 2 yz 2 z , find B and B .=B(x z22 3 xy 2 z 2 ) x ( 2 xyz 2 ) + ( y 2 z 3 2 xy ) zy B y 2 + 2 xyz 3 + 2 x 2 yz=2) Prove that for an arbitrary vector A (x, y, z U. Houston - ECE - 3455 ECE 3455: ElectronicsSection 12071Spring 2011Final ExamVersion BMay 7, 2011Do not open the exam until instructed to do so. Answer the questions in thespaces provided on the question sheets. If you run out of room for an answer,continue on the back U. Houston - ECE - 3455 ECE 3455: ElectronicsSection 12071Spring 2011Exam 2Version AApril 23, 2011Do not open the exam until instructed to do so. Answer the questions in thespaces provided on the question sheets. If you run out of room for an answer,continue on the back U. Houston - ECE - 3455 ECE 3455: ElectronicsSection 12071Spring 2011Exam 1Version AMarch 5, 2011Do not open the exam until instructed to do so. Answer the questions in thespaces provided on the question sheets. If you run out of room for an answer,continue on the back o U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3455 - Quiz #6May 3, 2010Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on these U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3455 - Quiz #5August 4, 2009Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on the U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3455 - Quiz #5April 14, 2010Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on the U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3455 - Quiz #4July 16, 2009Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on thes U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3455 - Quiz #4April 5, 2010Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on thes U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3455 - Quiz #3June 30, 2009Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on thes U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3455 - Quiz #3March 3, 2010Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on thes U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3455 - Quiz #2June 25, 2009Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on thes U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3455 - Quiz #2February 22, 2010Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3455 - Quiz #1June 18, 2009Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on thes U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3455 - Quiz #1February 10, 2010Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on U. Houston - ECE - 3455 ECE 3455 Midsemester Exam July 2, 2009 Page 1Name: _ (please print)Signature: _ECE 3455 Midsemester ExamJuly 2, 2009Keep this exam closed until youare told to begin.1. This exam is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or U. Houston - ECE - 3455 ECE 3455 Final Exam August 7, 2009 Page 1Name: _ (please print)Signature: _ECE 3455 Final ExamAugust 7, 2009Keep this exam closed until youare told to begin.1. This exam is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equi U. Houston - ECE - 3455 ECE 3455 Final Exam May 8, 2010 Page 1Name: _ (please print)Signature: _ECE 3455 Final ExamMay 8, 2010Keep this exam closed until youare told to begin.1. This exam is closed book, closed notes. You may use two 8.5 x 11 cribsheets, or their equival U. Houston - ECE - 3455 ECE 3455 Exam 2 April 17, 2010 Page 1Name: _ (please print)Signature: _ECE 3455 Exam 2April 17, 2010Keep this exam closed until youare told to begin.1. This exam is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent. U. Houston - ECE - 3455 ECE 3455 Exam 1 March 6, 2010 Page 1Name: _ (please print)Signature: _ECE 3455 Exam 1March 6, 2010Keep this exam closed until youare told to begin.1. This exam is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3355 - Quiz #6December 4, 2012Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on t U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3355 - Quiz #5November 29, 2012Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3355 - Quiz #4November 13, 2012Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3355 - Quiz #3October 30, 2012Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on t U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3355 - Quiz #3October 6, 2011Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on th U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3355 - Quiz #2October 4, 2012Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on th U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3355 - Quiz #1September 18, 2012Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on U. Houston - ECE - 3455 Name: _ (please print)Signature: _ECE 3355 - Quiz #1September 8, 2011Keep this quiz closed andface up until you are told tobegin.1. This quiz is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent.2. Show all work on U. Houston - ECE - 3455 ECE 3355 Final Exam December 15, 2012 Page 1Name: _ (please print)Signature: _ECE 3355 Final ExamDecember 15, 2012Keep this exam closed until youare told to begin.1. This exam is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or it U. Houston - ECE - 3455 ECE 3355 Exam 2 November 17, 2012 Page 1Name: _ (please print)Signature: _ECE 3355 Exam 2November 17, 2012Keep this exam closed until youare told to begin.1. This exam is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equiva U. Houston - ECE - 3455 ECE 3355 Exam 2 November 19, 2011 Page 1Name: _ (please print)Signature: _ECE 3355 Exam 2November 19, 2011Keep this exam closed until youare told to begin.1. This exam is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equiva U. Houston - ECE - 3455 ECE 3355 Exam 1 October 6, 2012 Page 1Name: _ (please print)Signature: _ECE 3355 Exam 1October 6, 2012Keep this exam closed until youare told to begin.1. This exam is closed book, closed notes. You may use one 8.5 x 11 cribsheet, or its equivalent U. Houston - ECE - 3455 U. Houston - ECE - 3455 U. Houston - ECE - 3455 U. Houston - ECE - 3455 UAB - CS - 201 CIS201 Introduction to Object-Oriented ProgrammingLaboratory ReportName: William BlackLab: Lab 1BlazerID: Black3Date submitted: 8/27/2012IntroductionMy goal is to get my programming working smooth and successful in java. Im looking tosee how each U. Houston - ECE - 3455 U. Houston - ECE - 6340 ECE 6340Fall 2013Homework 31) Assume that we have a plane wave in free space that hasE = x e jk0 z1H = y e jk0 z 0 a) Determine the following three vectors: S , S ( t ) , S ( t ) .()b) Is it true that S ( t ) = Re S e jt ? If not, explain why o UAB - CS - 201 CIS201 Introduction to Object-Oriented ProgrammingLaboratory ReportName: William BlackLab: Lab 2BlazerID: Black3Date submitted: dateIntroductionIn this Lab report, I am going to work on a new program with eclipse and see how itoperates compared to U. Houston - ECE - 6340 ECE 6340Fall 2013Homework 21) Consider two parallel infinite wires in free space each carrying a DC current I in theopposite direction, as shown below. Use Amperes law along with the Lorentz wire forcelaw to show that the force per unit length on the
{"url":"http://www.coursehero.com/file/8252234/C-h-0-0-l-0-ln-z-z-/","timestamp":"2014-04-19T02:37:00Z","content_type":null,"content_length":"52313","record_id":"<urn:uuid:d10206a3-c2ea-492a-9118-ea37616a98b3>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
Antioch, CA Math Tutor Find an Antioch, CA Math Tutor ...I have extensive problem-solving experience including winning 1st place at the ICTM State Math Contest in 2007 and placing in the top 500 in the national Putnam competition. My tutoring methods vary student-by-student, but I specialize in breaking down problems and asking questions to guide the ... 17 Subjects: including logic, algebra 1, algebra 2, calculus ...Under my guidance I will make sure you or your student obtains the best tutoring help possible. Moreover, you will find that I also value both study- and executive-skills very highly. To put it simply, I believe the most effective way to prepare for high school, college, graduate school, etc. and life, in general, is through an eclectic approach that emphasizes all of these 45 Subjects: including algebra 2, ACT Math, algebra 1, SAT math ...After working with me, regular students of all ages or those who have different learning modalities will all find math easy. Confidence is an attitude that makes a world of difference in learning. I will change my students' attitude toward math positively and provide them with math skills, so thorough that they will feel confident in doing math independently and successfully. 17 Subjects: including discrete math, probability, algebra 1, algebra 2 ...This approach demands an understanding the needs of each student from a holistic perspective. I believe the essence of teaching is not telling; it’s listening. I have experience and training in administering and evaluating educational assessment. 10 Subjects: including prealgebra, reading, writing, grammar ...I have 11 years of teaching and tutoring experience in high school/college physics (mechanics, electromagnetism, electronics, etc,) and math (geometry, trigonometry, algebra, and calculus). My approach of teaching can be summed up in the following steps: 1. gauge the subject level of the student... 34 Subjects: including algebra 2, grammar, reading, writing Related Antioch, CA Tutors Antioch, CA Accounting Tutors Antioch, CA ACT Tutors Antioch, CA Algebra Tutors Antioch, CA Algebra 2 Tutors Antioch, CA Calculus Tutors Antioch, CA Geometry Tutors Antioch, CA Math Tutors Antioch, CA Prealgebra Tutors Antioch, CA Precalculus Tutors Antioch, CA SAT Tutors Antioch, CA SAT Math Tutors Antioch, CA Science Tutors Antioch, CA Statistics Tutors Antioch, CA Trigonometry Tutors Nearby Cities With Math Tutor Berkeley, CA Math Tutors Brentwood, CA Math Tutors Concord, CA Math Tutors Danville, CA Math Tutors Elk Grove Math Tutors Fairfield, CA Math Tutors Hayward, CA Math Tutors Oakland, CA Math Tutors Oakley, CA Math Tutors Pittsburg, CA Math Tutors Pleasanton, CA Math Tutors Richmond, CA Math Tutors Stockton, CA Math Tutors Vallejo Math Tutors Walnut Creek, CA Math Tutors
{"url":"http://www.purplemath.com/Antioch_CA_Math_tutors.php","timestamp":"2014-04-21T00:18:01Z","content_type":null,"content_length":"23933","record_id":"<urn:uuid:15043e06-d0b4-4c95-90eb-6013307b5aad>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Spotlight: Archives of American Mathematics Featured Collections The Paul R. Halmos Papers Notes, publications, manuscripts, lectures, correspondence, course materials, and personal records document the career of mathematician, teacher, author, and editor Paul Halmos. The William G. Chinn Papers The papers of this educator and former second vice president of the MAA document his involvement with the Mathematical Association of America, the School Mathematics Study Group, and the U.S. Commission on Mathematical Instruction. Two Audio Collections Recordings of Pat Kenschaft's Math Medley radio show and oral history interviews of prominent mathematicians in the field of finite simple group theory, conducted and recorded by Joseph Gallian, are now available. The Lawrence Biedenharn Papers Notes, publications, conference talks, teaching materials, correspondence, and personal documents of one of the leaders of modern theoretical physics are now available. The Jeanne Agnew Papers Correspondence, publications, photographs, and printed materials document the teaching career of Jeanne Agnew at Oklahoma State University, including her publications and work on several National Science Foundation grants. The Isaac Jacob Schoenberg Papers Correspondence, manuscripts of published and unpublished papers and lectures, research notes, teaching materials, and photographs document the career of mathematician Isaac Jacob Schoenberg. The School Mathematics Study Group Records Records documenting the history of the School Mathematics Study Group (SMSG), including the writing, implementation, and evaluation of the SMSG curriculum, as well as the files of the director, Edward G. Begle, are now available. The Alfred Schild Papers Papers documenting the career of Alfred Schild, a mathematical physicist specializing in relativity and gravitation at the Carnegie Institute of Technology and the University of Texas at Austin, are now available. The H.S. Vandiver Papers Correspondence, research notes, bibliographies, lecture notes, notebooks, drafts of publications, reprints, and photographs document the career of H.S. Vandiver. The Bryce S. DeWitt Papers General research files containing handwritten notes, correspondence, and printed material, as well as extensive documentation of the 1973 eclipse experiment, document the career of Bryce S. DeWitt, known for his mathematical approach to physics and his work in quantum field theory, supermanifolds, gauge theory, and relativistic astrophysics. The Walter Feit Papers Correspondence, preprints, reprints, and manuscripts document the career of Walter Feit, a pure mathematician who contributed to algebra, geometry, topology, number theory, and logic. The New Mathematical Library Papers Records documenting the New Mathematical Library, a mathematical monograph series aimed at high school and early college students, including the work of its longtime editor Anneli Lax. The Max Dehn Papers Papers documenting the career of Max Dehn, relating chiefly to his research in geometry, topology, group theory, and the history of mathematics, are now available. The R.H. Bing Papers Correspondence, research and lecture notes, drafts of publications, teaching material, memoranda, reports, newspaper clippings, preprints, and reprints document the career of R.H. Bing in geometric topology, largely as a professor at the University of Texas at Austin. The George Bruce Halsted Papers Correspondence, ephemera, printed material, photographs, and publications document the life and work of George Bruce Halsted, a mathematician who explored foundations of geometry and introduced non-Euclidean geometry to the United States. Addition to the Mathematical Association of America Records Additional records from the MAA headquarters were recently added to this collection of correspondence, printed material, notes, publications, and photographs documenting the work of the MAA. The R.L. Moore Papers Correspondence, research notebooks, drafts, teaching material, mathematical notes, printed material, photographs and other material document the life and career of R.L. Moore, a prominent mathematician and professor of mathematics at the University of Texas at Austin for almost fifty years. NCTM Oral History Project Records Audiocassette recordings of the interviews, typed transcripts, release forms, and related materials created and collected by the NCTM Oral History Task Force are now available.
{"url":"http://www.maa.org/about-maa/maa-history/spotlight-archives-of-american-mathematics?device=mobile","timestamp":"2014-04-17T21:31:00Z","content_type":null,"content_length":"29539","record_id":"<urn:uuid:b98b2882-e968-410f-ae81-f720ba6e60ca>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Root finding algorithm for f(a)f(b) > 0 March 31st 2009, 02:33 PM Root finding algorithm for f(a)f(b) > 0 Does anyone know of a root finding algorithm that doesn't rely on function values at given bounds being of opposite signs? I am using the Brent algorithm, but this doesn't work for curves like a parabola resting on the x-axis. The function is non-differentiable and must be solved using numerical rather than analytical means. Any ideas other than an incremental search? April 2nd 2009, 10:53 PM Does anyone know of a root finding algorithm that doesn't rely on function values at given bounds being of opposite signs? I am using the Brent algorithm, but this doesn't work for curves like a parabola resting on the x-axis. The function is non-differentiable and must be solved using numerical rather than analytical means. Any ideas other than an incremental search? May be, can you give any more information about the actual problem.
{"url":"http://mathhelpforum.com/advanced-math-topics/81701-root-finding-algorithm-f-f-b-0-a-print.html","timestamp":"2014-04-21T13:49:57Z","content_type":null,"content_length":"4617","record_id":"<urn:uuid:2a5ec180-a709-44a1-b147-2b3f1b2fa231>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Event Horizon Most commonly known as the point of no return, the event horizon is an astronomical point that has puzzled even Stephen Hawking, famous for his controversial theories towards black holes. It is a sphere-like boundery surrounding the singularity of a black hole (Couper, H., & Henbest, N. 1996). This mysterious outline is made up of millions and millions of photons. In the region where the photons are located, the gravity is too strong to allow them to escape, but not strong enough to pull them into the black hole (Couper, H., & Henbest, N. 1996). The numerous photons remain motionless, unable to direct itself in anyway, marking the boundary known as the event horizon (Couper, H., & Henbest, N. 1996). It is important to remember that the event horizon is the point that if crossed, the speed (escape velocity) needed to escape the black holes gravitational pull exceeds that of the speed of light. Yet, being on the exterior of the event horizon does not mean that there is no gravitational pull on the other side of this invisible wall of photons. Even if you have not passed the event horizon, escape is still possible, yet very unlikely (Couper, H., & Henbest, N. 1996). The event horizon was, surprisingly, not discovered by Stephen Hawking, but by another man named Karl Schwarzschild (Couper, H., & Henbest, N. 1996). This physicist and astronomer used Albert Einstein’s theory of general relativity to create an equation that proved the existence of a β€œmagic circle” through which nothing could escape. The name: β€œMagic Circle” was later changed to the present day title, the event horizon (Couper, H., & Henbest, N. 1996). Stephen Hawking believed that in space, there are many "virtual matter particles" (Ferguson, K. 1991). They are invisible, but prove to exist due to their reactions and effects on objects sorrounding them. Around the region of the event horizon, these particles, travelingu in pairs (positive and negative), were attracted to the mysterious outline. The negative particle would be caught inside the event horizon, and sucked into the black hole. Thus, this originally virtual particle was made into a real particle (Ferguson, K. 1991). We must acknowledge the fact that this process is being repeated constantly and in large numbers all around the black hole. To the observers of black holes, these positive particles are seen as a sort of radiation (Ferguson, K. 1991). Hawking called this the Hawking Radiation. This proves that a black hole can get smaller and eventually evaporate. However, this contradicts with his first theory that states that black holes can never get smaller due to the placement and functions of the photons that make up the event horizon. But to any thinking person, this statement doesn’t make any sense. If nothing can escape the black hole, it is impossible for it to get smaller and disappear. But Hawking came up with another statement that proved this contradiction wrong. According to Ferguson, when the black hole changes the virtual particles into real particle, it loses energy. When the black hole takes this NEGATIVE energy in, it takes the energy OUT of the black hole, therefore making it smaller because when something has less energy, it has less mass (Ferguson, K. 1991). Hence, Albert Einstein’s famous equation: E=mc[2] "...the event horizon is the point that if crossed, the speed (escape velocity) needed to escape the black holes gravitational pull exceeds that of the speed of light." ^5
{"url":"http://www.odec.ca/projects/2007/joch7c2/Event_Horizon.html","timestamp":"2014-04-18T00:22:00Z","content_type":null,"content_length":"28164","record_id":"<urn:uuid:801e0ab7-00b5-4e8f-b8e6-061522ec43f6>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Regularization and semi-supervised learning on large graphs Results 1 - 10 of 103 , 2006 "... We review the literature on semi-supervised learning, which is an area in machine learning and more generally, artificial intelligence. There has been a whole spectrum of interesting ideas on how to learn from both labeled and unlabeled data, i.e. semi-supervised learning. This document is a chapter ..." Cited by 447 (8 self) Add to MetaCart We review the literature on semi-supervised learning, which is an area in machine learning and more generally, artificial intelligence. There has been a whole spectrum of interesting ideas on how to learn from both labeled and unlabeled data, i.e. semi-supervised learning. This document is a chapter excerpt from the author’s doctoral thesis (Zhu, 2005). However the author plans to update the online version frequently to incorporate the latest development in the field. Please obtain the latest version at http://www.cs.wisc.edu/~jerryzhu/pub/ssl_survey.pdf , 2005 "... We believe that the cluster assumption is key to successful semi-supervised learning. Based on this, we propose three semi-supervised algorithms: 1. deriving graph-based distances that emphazise low density regions between clusters, followed by training a standard SVM; 2. optimizing the Transd ..." Cited by 120 (9 self) Add to MetaCart We believe that the cluster assumption is key to successful semi-supervised learning. Based on this, we propose three semi-supervised algorithms: 1. deriving graph-based distances that emphazise low density regions between clusters, followed by training a standard SVM; 2. optimizing the Transductive SVM objective function, which places the decision boundary in low density regions, by gradient descent; 3. combining the first two to make maximum use of the cluster assumption. We compare with state of the art algorithms and demonstrate superior accuracy for the latter two methods. - ICML06, 23rd International Conference on Machine Learning , 2006 "... A novel semi-supervised learning approach is proposed based on a linear neighborhood model, which assumes that each data point can be linearly reconstructed from its neighborhood. Our algorithm, named Linear Neighborhood Propagation (LNP), can propagate the labels from the labeled points to the whol ..." Cited by 58 (9 self) Add to MetaCart A novel semi-supervised learning approach is proposed based on a linear neighborhood model, which assumes that each data point can be linearly reconstructed from its neighborhood. Our algorithm, named Linear Neighborhood Propagation (LNP), can propagate the labels from the labeled points to the whole dataset using these linear neighborhoods with sufficient smoothness. We also derive an easy way to extend LNP to out-ofsample data. Promising experimental results are presented for synthetic data, digit and text classification tasks. 1. - in Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics , 2005 "... We describe an algorithm for nonlinear dimensionality reduction based on semidefinite programming and kernel matrix factorization. The algorithm learns a kernel matrix for high dimensional data that lies on or near a low dimensional manifold. In earlier work, the kernel matrix was learned by maximiz ..." Cited by 49 (5 self) Add to MetaCart We describe an algorithm for nonlinear dimensionality reduction based on semidefinite programming and kernel matrix factorization. The algorithm learns a kernel matrix for high dimensional data that lies on or near a low dimensional manifold. In earlier work, the kernel matrix was learned by maximizing the variance in feature space while preserving the distances and angles between nearest neighbors. In this paper, adapting recent ideas from semi-supervised learning on graphs, we show that the full kernel matrix can be very well approximated by a product of smaller matrices. Representing the kernel matrix in this way, we can reformulate the semidefinite program in terms of a much smaller submatrix of inner products between randomly chosen landmarks. The new framework leads to order-of-magnitude reductions in computation time and makes it possible to study much larger problems in manifold learning. 1 - In Proc. Artificial Intelligence and Statistics , 2005 "... There has been an increase of interest for semi-supervised learning recently, because of the many datasets with large amounts of unlabeled examples and only a few labeled ones. This paper follows up on proposed nonparametric algorithms which provide an estimated continuous label for the given unlabe ..." Cited by 41 (5 self) Add to MetaCart There has been an increase of interest for semi-supervised learning recently, because of the many datasets with large amounts of unlabeled examples and only a few labeled ones. This paper follows up on proposed nonparametric algorithms which provide an estimated continuous label for the given unlabeled examples. First, it extends them to function induction algorithms that minimize a regularization criterion applied to an outof-sample example, and happen to have the form of Parzen windows regressors. This allows to predict test labels without solving again a linear system of dimension n (the number of unlabeled and labeled training examples), which can cost O(n 3). Second, this function induction procedure gives rise to an efficient approximation of the training process, reducing the linear system to be solved to m β‰ͺ n unknowns, using only a subset of m examples. An improvement of O(n 2 /m 2) in time can thus be obtained. Comparative experiments are presented, showing the good performance of the induction formula and approximation algorithm. 1 - In Proc. Int. Conf. Machine Learning , 2005 "... Graph-based methods for semi-supervised learning have recently been shown to be promising for combining labeled and unlabeled data in classification problems. However, inference for graphbased methods often does not scale well to very large data sets, since it requires inversion of a large matrix or ..." Cited by 36 (2 self) Add to MetaCart Graph-based methods for semi-supervised learning have recently been shown to be promising for combining labeled and unlabeled data in classification problems. However, inference for graphbased methods often does not scale well to very large data sets, since it requires inversion of a large matrix or solution of a large linear program. Moreover, such approaches are inherently transductive, giving predictions for only those points in the unlabeled set, and not for an arbitrary test point. In this paper a new approach is presented that preserves the strengths of graph-based semisupervised learning while overcoming the limitations of scalability and non-inductive inference, through a combination of generative mixture models and discriminative regularization using the graph Laplacian. Experimental results show that this approach preserves the accuracy of purely graph-based transductive methods when the data has β€œmanifold structure, ” and at the same time achieves inductive learning with significantly reduced computational cost. 1. - ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 18 , 2005 "... ..." - In ICML , 2006 "... In ranking, one is given examples of order relationships among objects, and the goal is to learn from these examples a real-valued ranking function that induces a ranking or ordering over the object space. We consider the problem of learning such a ranking function when the data is represented as a ..." Cited by 33 (1 self) Add to MetaCart In ranking, one is given examples of order relationships among objects, and the goal is to learn from these examples a real-valued ranking function that induces a ranking or ordering over the object space. We consider the problem of learning such a ranking function when the data is represented as a graph, in which vertices correspond to objects and edges encode similarities between objects. Building on recent developments in regularization theory for graphs and corresponding Laplacian-based methods for classification, we develop an algorithmic framework for learning ranking functions on graph data. We provide generalization guarantees for our algorithms via recent results based on the notion of algorithmic stability, and give experimental evidence of the potential benefits of our framework. 1. - Proc. 22-nd Int. Conf. Machine Learning , 2005 "... We apply classic online learning techniques similar to the perceptron algorithm to the problem of learning a function defined on a graph. The benefit of our approach includes simple algorithms and performance guarantees that we naturally interpret in terms of structural properties of the graph, such ..." Cited by 32 (9 self) Add to MetaCart We apply classic online learning techniques similar to the perceptron algorithm to the problem of learning a function defined on a graph. The benefit of our approach includes simple algorithms and performance guarantees that we naturally interpret in terms of structural properties of the graph, such as the algebraic connectivity or the diameter of the graph. We also discuss how these methods can be modified to allow active learning on a graph. We present preliminary experiments with encouraging results. 1. - In Advances in Neural Information Processing Systems 18 , 2006 "... We present a series of theoretical arguments supporting the claim that a large class of modern learning algorithms that rely solely on the smoothness prior – with similarity between examples expressed with a local kernel – are sensitive to the curse of dimensionality, or more precisely to the variab ..." Cited by 27 (14 self) Add to MetaCart We present a series of theoretical arguments supporting the claim that a large class of modern learning algorithms that rely solely on the smoothness prior – with similarity between examples expressed with a local kernel – are sensitive to the curse of dimensionality, or more precisely to the variability of the target. Our discussion covers supervised, semisupervised and unsupervised learning algorithms. These algorithms are found to be local in the sense that crucial properties of the learned function at x depend mostly on the neighbors of x in the training set. This makes them sensitive to the curse of dimensionality, well studied for classical non-parametric statistical learning. We show in the case of the Gaussian kernel that when the function to be learned has many variations, these algorithms require a number of training examples proportional to the number of variations, which could be large even though there may exist short descriptions of the target function, i.e. their Kolmogorov complexity may be low. This suggests that there exist non-local learning algorithms that at least have the potential to learn about such structured but apparently complex functions (because locally they have many variations), while not using very specific prior domain knowledge. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?doi=10.1.1.129.819","timestamp":"2014-04-18T06:42:16Z","content_type":null,"content_length":"37981","record_id":"<urn:uuid:62657b60-aed7-4544-9ccd-db1c5cc23d24>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Times Interest Earned Ratio Times interest earned (also called interest coverage ratio) is the ratio of earnings before interest and tax (EBIT) of a business to its interest expense during a given period. It is a solvency ratio measuring the ability of a business to pay off its debts. Times interest earned ratio is calculated as follows: Times Interest Earned = Earnings before Interest and Tax Interest Expense Both figures in the above formula can be obtained from the income statement of a company. Earnings before interest and tax (EBIT) is same as operating income. Higher value of times interest earned ratio is favorable meaning greater ability of a business to repay its interest and debt. Lower values are unfavorable. A ratio of 1.00 means that income before interest and tax of the business is just enough to pay off its interest expense. That is why times interest earned ratio is of special importance to creditors. They can compare the debt repayment ability of similar companies using this ratio. Other things equal, a creditor should lend to a company with highest times interest earned ratio. It is also beneficial to create a trend of times interest earned ratio. Example 1: Calculate the times interest earned ratio of a company having interest expense and earnings before interest and tax for the year ended Dec 31, 2010 of $239,000 and $3,493,000 respectively. Times Interest Earned = $3,493,000 Γ· $239,000 β‰ˆ 14.6 Example 2: The times interest earned ratio and earnings before interest and tax of a company were 9.34 and $1,324,400 during the year ended Jun 30, 2011. Calculate the interest expense of the Interest Expense = $1,324,400 Γ· 9.34 β‰ˆ $141,800 Written by Irfanullah Jan
{"url":"http://accountingexplained.com/financial/ratios/times-interest-earned","timestamp":"2014-04-18T15:41:09Z","content_type":null,"content_length":"13761","record_id":"<urn:uuid:7001adb7-3f47-46ec-8d6c-9cfc8826da70>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Math Club Replies: 3 Last Post: Sep 26, 2011 1:42 PM Messages: [ Previous | Next ] Math Club Posted: Feb 15, 2002 7:10 PM Matt has attended the Boston area Math Circle for the last two years, since grade 6. It's the best thing that's happened to him in math. Its website is: The Boston area Math Circle is non-competitive; it does not prepare students for math competitions of any kind. The Bay Area Math Circle, however, does prepare students for competitions, all the way to the Olympiads. I would assume that it therefore appeals to a different kind of students. The Boston-area Math Circle is not connected to a single school or district, but attracts students from all over the Boston area. There are weekday classes (one hour) and sunday morning sessions (3 hours divided into 3 segments). There is no homework. The topics covered by the Boston area Math Circle are outside the regular curriculum. Topics have ranged from "Are There Numbers Between Numbers?", The Euclidean Algorithm, Linear Functions (for 5-7 years olds) to Cantorian Set Theory, Random Walks, Min/Max Problems (for 7-9 and 9-11 years olds) to Pythagorean Triples, Concurrency and Collinearity (10-11 years olds, no algebra) to Projective Geometry, Complex Analysis, Combinatorial Geometry (15-17 years olds with good algebra and geometry). The list of topics is quite long. Typically, one topic is covered in ten one-hour sessions, coinciding with local universities' academic calendars. Some of the problems appear in James Tanton's Solve This (MAA, 2001). Tanton is one of the three Math Circle organizers. Hoping this provides some inspiration, submissions: post to k12.ed.math or e-mail to k12math@sd28.bc.ca private e-mail to the k12.ed.math moderator: kem-moderator@thinkspot.net newsgroup website: http://www.thinkspot.net/k12math/ newsgroup charter: http://www.thinkspot.net/k12math/charter.html
{"url":"http://mathforum.org/kb/message.jspa?messageID=1137136","timestamp":"2014-04-18T16:11:57Z","content_type":null,"content_length":"21409","record_id":"<urn:uuid:b5369995-c036-4dc0-a46b-ee069e542e3d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... β€’ Teamwork 19 Teammate β€’ Problem Solving 19 Hero β€’ Engagement 19 Mad Hatter β€’ You have blocked this person. β€’ βœ” You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/sheradon.burkett/medals","timestamp":"2014-04-17T00:56:46Z","content_type":null,"content_length":"71671","record_id":"<urn:uuid:0082ea5c-4d5c-4888-a73f-42dba91c6d74>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
AlfvΓ©n's Cosmos A hierarchical universe can have an average density of zero, while containing infinite mass. Hannes AlfvΓ©n (1908–1995), who pioneered the field of magnetohydrodynamics, against initial skepticism, to give us a universe permeated by what are now called "AlfvΓ©n waves," never relinquished his own skepticism concerning the Big Bang. "They fight against popular creationism, but at the same time they fight fanatically for their own creationism," he argued in 1984, advocating, instead, for a hierarchical cosmology, whose mathematical characterization he credited to Edmund Edward Fournier d'Albe (1868β€”1933) and Carl Vilhelm Ludvig Charlier (1861–1932). Hierarchical does not mean isotropic, and observed anisotropy does not rule it out. Gottfried Wilhelm Leibniz (1646-1716), a lawyer as well as a scientist, believed that our universe was selected, out of an infinity of possible universes, to produce maximum diversity from a minimal set of natural laws. It is hard to imagine a more beautiful set of boundary conditions than zero density and infinite mass. But this same principle of maximum diversity warns us that it may take all the time in the universe to work the details out.
{"url":"http://edge.org/response-detail/10128","timestamp":"2014-04-17T22:08:43Z","content_type":null,"content_length":"35877","record_id":"<urn:uuid:8c2c5da8-c7a9-4a4b-a90e-8892d7095d71>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Modular inverse From Algorithmist The inverse of a number a modulo m is a number x such that $ax \equiv 1 \mod{m}$. It exists (and is unique if exists) if and only a and m are relatively prime (that is, gcd(a,m) = 1). In particular, if m is a prime, every non-zero element of Z[m] has an inverse (thus making it an algebraic structure known as field). Conventionally, the mathematical notation used for inverses is $a^{-1} \mod{m}$. In modular arithmetic the inverse of a is analogous to the number 1 / a in usual real-number arithmetic. If you have a product c = ab, and one of the factors has an inverse, you can get the other factor by multiplying the product by that inverse: $a = c b^{-1} \mod{m}$. Thus you can perform division in ring Z[m]. [edit] Finding the inverse We can rewrite the defining equation of modular inverses as an equivalent linear diophantine equation: ax + my = 1. This equation has a solution whenever gcd(a,m) = 1, and we can find such solution ( x,y) by means of the extended Euclidean algorithm. Then $a^{-1} \equiv x \mod{m}$, and also $m^{-1} \equiv y \mod{a}$. The following Python code implements this algorithm. # Iterative Algorithm (xgcd) def iterative_egcd(a, b): x,y, u,v = 0,1, 1,0 while a != 0: q,r = b//a,b%a; m,n = x-u*q,y-v*q # use x//y for floor "floor division" b,a, x,y, u,v = a,r, u,v, m,n return b, x, y # Recursive Algorithm def recursive_egcd(a, b): """Returns a triple (g, x, y), such that ax + by = g = gcd(a,b). Assumes a, b >= 0, and that at least one of them is > 0. Bounds on output values: |x|, |y| <= max(a, b).""" if a == 0: return (b, 0, 1) g, y, x = recursive_egcd(b % a, a) return (g, x - (b // a) * y, y) egcd = iterative_egcd # or recursive_egcd(a, m) def modinv(a, m): g, x, y = egcd(a, m) if g != 1: return None return x % m [edit] Alternative algorithm If you happen to know Ο†(m), you can also compute the inverses using Euler's theorem, which states that $a^{\phi(m)} \equiv 1 \mod{m}$. By multiplying both sides of this equation by a's modular inverse, we can deduce that: $a^{-1} \equiv a^{\phi(m) - 1} \mod{m}$. And so you can utilize repeated squaring algorithm to quickly find the inverse. This algorithm can be useful if m is a fixed number in your program (so, you can hardcode a precomputed value of Ο†(m)), or if m is a prime number, in which case Ο†(m) = m - 1. In general case, however, computing Ο†(m) is equivalent to factoring, which is a hard problem, so prefer using the extended GCD algorithm. [edit] Applications Suppose we need to calculate $\frac{a}{b} \mod{p}$. If b and p are co-primes (or if one of them is a prime), then we can calculate the modular inverse b' of b. $\frac{a}{b} \mod{P} \equiv ab' \mod{P}$
{"url":"http://www.algorithmist.com/index.php/Modular_inverse","timestamp":"2014-04-19T17:01:14Z","content_type":null,"content_length":"21175","record_id":"<urn:uuid:aa7b8375-1739-4031-be9f-44928c97d912>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistics Homework Help We are intended in providing you assistance exclusively for Statistics. We have seen a lot of students struggle in learning Statistics and that is exactly why we have formed this special team which can offer you excellent help in Statistics. Many students have been benefited with the online statistics help that we provide and have requested us to continue our service for various other students who are having difficulty with statistics. Hence Statistics Help is intended to make you clearly understand the concepts behind an assignment by providing you a detailed step by step solution. The Statistics Answers that we provide will be extremely easy for you to understand since we explain all the steps in a very clear manner. Hence you can submit your assignments in our website and get detailed Statistics Answers in a step by step manner by using our Online Statistics Help Services. Contact our experts now and get Statistics Help now. You guys are a life saver! I just had to log in to your website and request for help! The assignment solution had clear and detailed steps , which was very easy for me to understand. Thank you - Stacy Williams
{"url":"http://www.statisticshomeworkhelp.ca/index.html","timestamp":"2014-04-16T07:34:00Z","content_type":null,"content_length":"10796","record_id":"<urn:uuid:ede3c06a-cbd5-4d91-800d-773ac6bbc424>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Vector Question March 19th 2009, 05:54 AM #1 Junior Member Dec 2008 Vector Question if $F = x^3yi +(y^2x -zx)j +(z^3 x^2 +y^2)k$ verify the identity $abla \wedge (abla\wedge F) = abla(abla\cdot F) - abla^2 F$ my attempt i found $abla\wedge(abla \wedge F)$ to be $(2y+6xz^2)i -(1+3x^2)j +(-2z^3-2)k$ but when i calculated $abla(abla\cdot F)$ i got $(6x+2y+6z^2 x, 2x, 6zx^2)$ and $abla^2 F$ = $(6+6z^2,0,6x^2)$ can someone tell me where i went wrong, am i using the wrong fomula? if $F = x^3yi +(y^2x -zx)j +(z^3 x^2 +y^2)k$ verify the identity $abla \wedge (abla\wedge F) = abla(abla\cdot F) - abla^2 F$ my attempt i found $abla\wedge(abla \wedge F)$ to be $(2y+6xz^2)i -(1+3x^2)j +(-2z^3-2)k$ but when i calculated $abla(abla\cdot F)$ i got $(6x+2y+6z^2 x, 2x, 6zx^2)$ and $abla^2 F$ = $(6+6z^2,0,6x^2)$ can someone tell me where i went wrong, am i using the wrong fomula? The formula is right. Some of your terms are wrong. Here's what I got. $abla\times(abla \times F) = (2y+6xz^2)i +3x^2 j +(-2z^3-2)k$ $abla(abla\cdot F) = (6xy+2y+6xz^2)i + (2x+3x^2)j + (6 x^2z)K$ $abla^2 F = 6xyi + 2x j +(2 + 6x^2z + 2z^3)k$ March 19th 2009, 06:57 AM #2
{"url":"http://mathhelpforum.com/calculus/79487-vector-question.html","timestamp":"2014-04-18T21:18:09Z","content_type":null,"content_length":"36649","record_id":"<urn:uuid:1fd8bff2-3ebd-4f49-8b56-8f367d75dd32>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating the Efficient Frontier: Part 1 A Matrix Based Example of Mean-Variance Optimization using Octave The concept of an β€œefficient frontier” was developed by Harry Markowitz in the 1950s. The efficient frontier shows us the minimum risk (i.e. standard deviation) that can be achieved at each level of expected return for a given set of risky securities. Of course, to calculate the efficient frontier, we need to have an estimate of the expected returns and the covariance matrix for the set of risky securities which will used to build the optimal portfolio. These parameters are difficult (impossible) to forecast, and the optimal portfolio calculation is extremely sensitive to these parameters. For this reason, an efficient frontier based portfolio is difficult to successfully implement in practice. However, a familiarity with the concept is still very useful and will help to develop intuition about diversification and the relationship between risk and return. Calculating the Efficient Frontier In this post, I’ll demonstrate how to calculate and plot the efficient frontier using the expected returns and covariance matrix for a set of securities. In a future post, I’ll demonstrate how to calculate the security weights for various points on this efficient frontier using the two-fund separation theorem. In order to calculate the efficient frontier using n assets, we need two inputs. First, we need the expected returns of each asset. The vector of expected returns will be designated Once we have this information, we can run the following calculations using a matrix based mathematical program such as Octave or Matlab. Using these values, the variance ( You can see from the equation, that the efficient frontier is a parabola in mean-variance space. Using the standard deviation ( Example using Octave Script As an example, lets consider four securities, A,B,C and D, with expected returns of 14%, 12%, 15%, and 7%. The expected return vector is: The covariance matrix for our example is shown below. In practice, the historical covariance matrix can be calculated by reading the historical returns into Octave or Matlab and using the cov(X) command. Note that the diagonal of the matrix is the variance of our four securities. So, if we take the square root of the diagonal, we can calculate the standard deviation of each asset (13.6%, 14%, 20.27%, and 5%). The example script for computing the efficient frontier from these inputs is shown at the end of this post. It can be modified for any number of assets by updating the expected return vector and the covariance matrix. The plot of the efficient frontier for our four assets is shown here: Derivation and References Deriving the approach I have shown is beyond the scope of this post. However, for those who want to dive into the linear algebra, there are several excellent examples available online. Derivation of Mean-Variance Frontier Equation using the Lagrangian (The Appendix B result is identical to what I show above, but the notation is a little different) Old school derivation by a young professor who later went on to win a Nobel prize Octave Code: This script will also work in Matlab, but I’ve chosen to use Octave since it is opensource and available for free. % Mean Variance Optimizer % S is matrix of security covariances S = [185 86.5 80 20; 86.5 196 76 13.5; 80 76 411 -19; 20 13.5 -19 25] % Vector of security expected returns zbar = [14; 12; 15; 7] % Unity vector..must have same length as zbar unity = ones(length(zbar),1) % Vector of security standard deviations stdevs = sqrt(diag(S)) % Calculate Efficient Frontier A = unity'*S^-1*unity B = unity'*S^-1*zbar C = zbar'*S^-1*zbar D = A*C-B^2 % Efficient Frontier mu = (1:300)/10; % Plot Efficient Frontier minvar = ((A*mu.^2)-2*B*mu+C)/D; minstd = sqrt(minvar); title('Efficient Frontier with Individual Securities','fontsize',18) ylabel('Expected Return (%)','fontsize',18) xlabel('Standard Deviation (%)','fontsize',18) 2 Responses to β€œCalculating the Efficient Frontier: Part 1” 1. in injection molding i have 2 defects (warpage&sink marks) they are controlled with variables(pressure&temperature) and i want to study the effect of the combination between these 2 variables on the 2 defects by using the efficient frontier . please help me and tell me how can i do it and the equation needed thank you
{"url":"http://www.calculatinginvestor.com/2011/06/07/efficient-frontier-1/","timestamp":"2014-04-20T05:48:38Z","content_type":null,"content_length":"35194","record_id":"<urn:uuid:fb3091a4-e961-458a-bbb4-00cc3ffd463c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Institute for Mathematics and its Applications (IMA) - Analytical Study of Bifurcation of a Flow of a Gas Between Coaxial Circular Cylinders with Evaporation and Condensation by Yoshio Sone Yoshio Sone University of Kyoto Time-independent behavior of a gas between two coaxial cylinders made of the condensed phase of the gas, where the cylinders are rotating around their common axis and evaporation or condensation is taking place, is considered with special attention to the bifurcation of the flow. The problem is studied analytically for small values of the speed of rotation of the cylinders and Knudsen number on the basis of the Boltzmann equation, and the solution is obtained explicitly. The bifurcation of the flow occurs even in a simple case where the gas is axially symmetric and uniform (or the flow field depends only on the radial coordinate). The comprehensive feature of the bifurcation of flow is clarified with the explicit forms of solutions and bifurcation diagram.
{"url":"http://www.ima.umn.edu/reactive/abstract/sone1.html","timestamp":"2014-04-16T22:37:42Z","content_type":null,"content_length":"14636","record_id":"<urn:uuid:8a410817-a08a-4de5-ba2f-6b05d56daf3e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00188-ip-10-147-4-33.ec2.internal.warc.gz"}