Graph Theory

Stephane G: Prove by induction on the number of edges in a graph that any bipartite graph has edge colouring number equal to its maximum valency. Also, find such an edge colouring for a bipartite 4-regular Cartesian or tensor product of your choice of 2-regular graphs.

Reply: Hey Stephane G, and welcome to the forums. For questions like this, we ask that you show your working and your thinking before we help, since the question is in the form of a homework problem (it doesn't have to actually be homework, just in that format). What ideas do you have? What have you tried so far?
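For reference (an aside added here, not part of the original exchange): the claimed result is König's edge-colouring theorem, which states that chi'(G) = Delta(G) for every bipartite graph G, where chi'(G) is the chromatic index (edge colouring number) and Delta(G) is the maximum valency. The standard induction removes one edge, colours the remaining graph with Delta colours, and then swaps colours along an alternating path so that a colour becomes free at both endpoints of the removed edge.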
{"url":"http://www.physicsforums.com/showthread.php?t=592569","timestamp":"2014-04-20T01:03:24Z","content_type":null,"content_length":"23048","record_id":"<urn:uuid:faae6c33-46de-46da-b011-8b9114253278>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
Simulating Dependent Random Variables Using Copulas

Dependence Between Simulation Inputs

One of the design decisions for a Monte Carlo simulation is the choice of probability distributions for the random inputs. Selecting a distribution for each individual variable is often straightforward, but deciding what dependencies should exist between the inputs may not be. Ideally, input data to a simulation should reflect what is known about dependence among the real quantities being modeled. However, there may be little or no information on which to base any dependence in the simulation, and in such cases it is a good idea to experiment with different possibilities in order to determine the model's sensitivity. It can be difficult, though, to actually generate random inputs with dependence when they have distributions that do not come from a standard multivariate family, and some of the standard multivariate distributions can model only very limited types of dependence. It is always possible to make the inputs independent, and while that is a simple choice, it is not always sensible and can lead to the wrong conclusions.

For example, a Monte Carlo simulation of financial risk might have random inputs that represent different sources of insurance losses. These inputs might be modeled as lognormal random variables. A reasonable question to ask is how dependence between these two inputs affects the results of the simulation. Indeed, it might be known from real data that the same random conditions affect both sources, and ignoring that in the simulation could lead to the wrong conclusions.

Simulation of independent lognormal random variables is trivial; the simplest way would be to use the lognrnd function. Here, we'll use the mvnrnd function to generate n pairs of independent normal random variables, and then exponentiate them. Notice that the covariance matrix used here is diagonal, i.e., there is independence between the columns of Z.

n = 1000;
sigma = .5;
SigmaInd = sigma.^2 .* [1 0; 0 1]

SigmaInd =
    0.2500         0
         0    0.2500

ZInd = mvnrnd([0 0], SigmaInd, n);
XInd = exp(ZInd);
plot(XInd(:,1),XInd(:,2),'.'); axis equal; axis([0 5 0 5]);
xlabel('X1'); ylabel('X2');

Dependent bivariate lognormal r.v.'s are also easy to generate, using a covariance matrix with non-zero off-diagonal terms.

rho = .7;
SigmaDep = sigma.^2 .* [1 rho; rho 1]

SigmaDep =
    0.2500    0.1750
    0.1750    0.2500

ZDep = mvnrnd([0 0], SigmaDep, n);
XDep = exp(ZDep);

A second scatter plot illustrates the difference between these two bivariate distributions.

plot(XDep(:,1),XDep(:,2),'.'); axis equal; axis([0 5 0 5]);
xlabel('X1'); ylabel('X2');

It's clear that there is more of a tendency in the second dataset for large values of X1 to be associated with large values of X2, and similarly for small values. This dependence is determined by the correlation parameter, rho, of the underlying bivariate normal. The conclusions drawn from the simulation could well depend on whether or not X1 and X2 were generated with dependence.
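As a quick check of the effect just described, here is a short editorial sketch (not part of the original example) using the XInd and XDep samples generated above:

corr(XInd)   % sample linear correlation: off-diagonal entries near 0
corr(XDep)   % off-diagonal entries positive, but noticeably below rho = 0.7

The gap between the sample correlation of XDep and the underlying rho is not a sampling artifact; a closed-form expression for the lognormal correlation appears later in this example.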
The bivariate lognormal distribution is a simple solution in this case, and it easily generalizes to higher dimensions and to cases where the marginal distributions are different lognormals. Other multivariate distributions also exist; for example, the multivariate t and the Dirichlet distributions are used to simulate dependent t and beta random variables, respectively. But the list of simple multivariate distributions is not long, and they apply only in cases where the marginals are all in the same family (or even the exact same distribution). This can be a real limitation in many situations.

A More General Method for Constructing Dependent Bivariate Distributions

Although the above construction that creates a bivariate lognormal is simple, it serves to illustrate a method which is more generally applicable. First, we generate pairs of values from a bivariate normal distribution. There is statistical dependence between these two variables, and each has a normal marginal distribution. Next, a transformation (the exponential function) is applied separately to each variable, changing the marginal distributions into lognormals. The transformed variables still have a statistical dependence. If a suitable transformation could be found, this method could be generalized to create dependent bivariate random vectors with other marginal distributions.

In fact, a general method of constructing such a transformation does exist, although it is not as simple as just exponentiation. By definition, applying the normal CDF (denoted here by PHI) to a standard normal random variable results in an r.v. that is uniform on the interval [0, 1]. To see this, if Z has a standard normal distribution, then the CDF of U = PHI(Z) is

Pr{U <= u0} = Pr{PHI(Z) <= u0} = Pr{Z <= PHI^(-1)(u0)} = u0,

and that is the CDF of a U(0,1) r.v. Histograms of some simulated normal and transformed values demonstrate that fact.

n = 1000;
z = normrnd(0,1,n,1);
hist(z,-3.75:.5:3.75); xlim([-4 4]);
title('1000 Simulated N(0,1) Random Values');
xlabel('Z'); ylabel('Frequency');

u = normcdf(z);
hist(u);   % histogram of the transformed values (default bins)
title('1000 Simulated N(0,1) Values Transformed to U(0,1)');
xlabel('U'); ylabel('Frequency');

Now, borrowing from the theory of univariate random number generation, applying the inverse CDF of any distribution F to a U(0,1) random variable results in an r.v. whose distribution is exactly F. This is known as the Inversion Method. The proof is essentially the reverse of the proof above for the forward case. Another histogram illustrates the transformation to a gamma distribution.

x = gaminv(u,2,1);
hist(x);   % histogram of the Gamma(2,1) values (default bins)
title('1000 Simulated N(0,1) Values Transformed to Gamma(2,1)');
xlabel('X'); ylabel('Frequency');

This two-step transformation can be applied to each variable of a standard bivariate normal, creating dependent r.v.'s with arbitrary marginal distributions. Because the transformation works on each component separately, the two resulting r.v.'s need not even have the same marginal distributions. The transformation is defined as

Z = [Z1 Z2] ~ N([0 0], [1 rho; rho 1])
U = [PHI(Z1) PHI(Z2)]
X = [G1(U1) G2(U2)]

where G1 and G2 are the inverse CDFs of two possibly different distributions. For example, we can generate random vectors from a bivariate distribution with t(5) and Gamma(2,1) marginals:

n = 1000; rho = .7;
Z = mvnrnd([0 0], [1 rho; rho 1], n);
U = normcdf(Z);
X = [gaminv(U(:,1),2,1) tinv(U(:,2),5)];

The plot below has histograms alongside a scatter plot to show both the marginal distributions and the dependence.
[n1,ctr1] = hist(X(:,1),20);
[n2,ctr2] = hist(X(:,2),20);
subplot(2,2,2);
plot(X(:,1),X(:,2),'.'); axis([0 12 -8 8]); h1 = gca;
title('1000 Simulated Dependent t and Gamma Values');
xlabel('X1 ~ Gamma(2,1)'); ylabel('X2 ~ t(5)');
subplot(2,2,4);
bar(ctr1,-n1,1); axis([0 12 -max(n1)*1.1 0]); axis('off'); h2 = gca;
subplot(2,2,1);
barh(ctr2,-n2,1); axis([-max(n2)*1.1 0 -8 8]); axis('off'); h3 = gca;
set(h1,'Position',[0.35 0.35 0.55 0.55]);
set(h2,'Position',[.35 .1 .55 .15]);
set(h3,'Position',[.1 .35 .15 .55]);
colormap([.8 .8 1]);

Dependence between X1 and X2 in this construction is determined by the correlation parameter, rho, of the underlying bivariate normal. However, it is not true that the linear correlation of X1 and X2 is rho. For example, in the original lognormal case, there is a closed form for that correlation:

cor(X1,X2) = (exp(rho.*sigma.^2) - 1) ./ (exp(sigma.^2) - 1)

which is strictly less than rho unless rho is exactly one. In more general cases, though, such as the Gamma/t construction above, the linear correlation between X1 and X2 is difficult or impossible to express in terms of rho, but simulations can be used to show that the same effect happens.

That's because the linear correlation coefficient expresses the linear dependence between r.v.'s, and when nonlinear transformations are applied to those r.v.'s, linear correlation is not preserved. Instead, a rank correlation coefficient, such as Kendall's tau or Spearman's rho, is more appropriate. Roughly speaking, these rank correlations measure the degree to which large or small values of one r.v. associate with large or small values of another. However, unlike the linear correlation coefficient, they measure the association only in terms of ranks. As a consequence, rank correlation is preserved under any monotonic transformation. In particular, the transformation method just described preserves rank correlation. Therefore, knowing the rank correlation of the bivariate normal Z exactly determines the rank correlation of the final transformed r.v.'s X. While rho is still needed to parameterize the underlying bivariate normal, Kendall's tau or Spearman's rho are more useful in describing the dependence between r.v.'s, because they are invariant to the choice of marginal distribution.

It turns out that for the bivariate normal, there is a simple 1-1 mapping between Kendall's tau or Spearman's rho and the linear correlation coefficient rho:

tau   = (2/pi)*arcsin(rho)      or  rho = sin(tau*pi/2)
rho_s = (6/pi)*arcsin(rho/2)    or  rho = 2*sin(rho_s*pi/6)

rho = -1:.01:1;
tau = 2.*asin(rho)./pi;
rho_s = 6.*asin(rho./2)./pi;
plot(rho,tau,'-', rho,rho_s,'-', [-1 1],[-1 1],'k:'); axis([-1 1 -1 1]);
xlabel('rho'); ylabel('Rank correlation coefficient');
legend('Kendall''s tau', 'Spearman''s rho_s', 'location','northwest');

Thus, it's easy to create the desired rank correlation between X1 and X2, regardless of their marginal distributions, by choosing the correct rho parameter value for the linear correlation between Z1 and Z2. Notice that for the multivariate normal distribution, Spearman's rank correlation is almost identical to the linear correlation. However, this is not true once we transform to the final random variables.
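To see the rank-correlation preservation claim concretely, here is a short editorial check (not part of the original) using the Z, U, and X just generated for the Gamma/t construction; corr with the 'Kendall' option is the same function used later in this example:

tauZ = corr(Z, 'type','Kendall')   % rank correlation of the underlying normals
tauX = corr(X, 'type','Kendall')   % rank correlation after the transformations
% The off-diagonal entries of tauZ and tauX should agree to within sampling
% error, and both should be near the theoretical value 2*asin(.7)/pi = 0.494.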
The first step of the construction described above defines what is known as a copula, specifically a Gaussian copula. A bivariate copula is simply a probability distribution on two random variables, each of whose marginal distributions is uniform. These two variables may be completely independent, deterministically related (e.g., U2 = U1), or anything in between. The family of bivariate Gaussian copulas is parameterized by Rho = [1 rho; rho 1], the linear correlation matrix. U1 and U2 approach linear dependence as rho approaches +/-1, and approach complete independence as rho approaches zero. Scatter plots of some simulated random values for various levels of rho illustrate the range of different possibilities for Gaussian copulas:

n = 500;
Z = mvnrnd([0 0], [1 .8; .8 1], n); U = normcdf(Z,0,1);
subplot(2,2,1); plot(U(:,1),U(:,2),'.');
title('rho = 0.8'); xlabel('U1'); ylabel('U2');
Z = mvnrnd([0 0], [1 .1; .1 1], n); U = normcdf(Z,0,1);
subplot(2,2,2); plot(U(:,1),U(:,2),'.');
title('rho = 0.1'); xlabel('U1'); ylabel('U2');
Z = mvnrnd([0 0], [1 -.1; -.1 1], n); U = normcdf(Z,0,1);
subplot(2,2,3); plot(U(:,1),U(:,2),'.');
title('rho = -0.1'); xlabel('U1'); ylabel('U2');
Z = mvnrnd([0 0], [1 -.8; -.8 1], n); U = normcdf(Z,0,1);
subplot(2,2,4); plot(U(:,1),U(:,2),'.');
title('rho = -0.8'); xlabel('U1'); ylabel('U2');

The dependence between U1 and U2 is completely separate from the marginal distributions of X1 = G(U1) and X2 = G(U2). X1 and X2 can be given any marginal distributions, and still have the same rank correlation. This is one of the main appeals of copulas -- they allow this separate specification of dependence and marginal distribution.

A different family of copulas can be constructed by starting from a bivariate t distribution and transforming using the corresponding t CDF. The bivariate t distribution is parameterized with Rho, the linear correlation matrix, and nu, the degrees of freedom. Thus, for example, we can speak of a t(1) or a t(5) copula, based on the multivariate t with one and five degrees of freedom, respectively. Scatter plots of some simulated random values for various levels of rho illustrate the range of different possibilities for t(1) copulas:

n = 500; nu = 1;
T = mvtrnd([1 .8; .8 1], nu, n); U = tcdf(T,nu);
subplot(2,2,1); plot(U(:,1),U(:,2),'.');
title('rho = 0.8'); xlabel('U1'); ylabel('U2');
T = mvtrnd([1 .1; .1 1], nu, n); U = tcdf(T,nu);
subplot(2,2,2); plot(U(:,1),U(:,2),'.');
title('rho = 0.1'); xlabel('U1'); ylabel('U2');
T = mvtrnd([1 -.1; -.1 1], nu, n); U = tcdf(T,nu);
subplot(2,2,3); plot(U(:,1),U(:,2),'.');
title('rho = -0.1'); xlabel('U1'); ylabel('U2');
T = mvtrnd([1 -.8; -.8 1], nu, n); U = tcdf(T,nu);
subplot(2,2,4); plot(U(:,1),U(:,2),'.');
title('rho = -0.8'); xlabel('U1'); ylabel('U2');

A t copula has uniform marginal distributions for U1 and U2, just as a Gaussian copula does. The rank correlation tau or rho_s between components in a t copula is also the same function of rho as for a Gaussian. However, as these plots demonstrate, a t(1) copula differs quite a bit from a Gaussian copula, even when their components have the same rank correlation. The difference is in their dependence structure. Not surprisingly, as the degrees of freedom parameter nu is made larger, a t(nu) copula approaches the corresponding Gaussian copula. As with a Gaussian copula, any marginal distributions can be imposed over a t copula.
For example, using a t copula with 1 degree of freedom, we can again generate random vectors from a bivariate distribution with Gamma(2,1) and t(5) marginals:

n = 1000; rho = .7; nu = 1;
T = mvtrnd([1 rho; rho 1], nu, n);
U = tcdf(T,nu);
X = [gaminv(U(:,1),2,1) tinv(U(:,2),5)];

[n1,ctr1] = hist(X(:,1),20);
[n2,ctr2] = hist(X(:,2),20);
subplot(2,2,2);
plot(X(:,1),X(:,2),'.'); axis([0 15 -10 10]); h1 = gca;
title('1000 Simulated Dependent t and Gamma Values');
xlabel('X1 ~ Gamma(2,1)'); ylabel('X2 ~ t(5)');
subplot(2,2,4);
bar(ctr1,-n1,1); axis([0 15 -max(n1)*1.1 0]); axis('off'); h2 = gca;
subplot(2,2,1);
barh(ctr2,-n2,1); axis([-max(n2)*1.1 0 -10 10]); axis('off'); h3 = gca;
set(h1,'Position',[0.35 0.35 0.55 0.55]);
set(h2,'Position',[.35 .1 .55 .15]);
set(h3,'Position',[.1 .35 .15 .55]);
colormap([.8 .8 1]);

Compared to the bivariate Gamma/t distribution constructed earlier, which was based on a Gaussian copula, the distribution constructed here, based on a t(1) copula, has the same marginal distributions and the same rank correlation between variables, but a very different dependence structure. This illustrates the fact that multivariate distributions are not uniquely defined by their marginal distributions or by their correlations. The choice of a particular copula in an application may be based on actual observed data, or different copulas may be used as a way of determining the sensitivity of simulation results to the input distribution.

The Gaussian and t copulas are known as elliptical copulas. It's easy to generalize elliptical copulas to a higher number of dimensions. For example, we can simulate data from a trivariate distribution with Gamma(2,1), Beta(2,2), and t(5) marginals using a Gaussian copula as follows.

n = 1000;
Rho = [1 .4 .2; .4 1 -.8; .2 -.8 1];
Z = mvnrnd([0 0 0], Rho, n);
U = normcdf(Z,0,1);
X = [gaminv(U(:,1),2,1) betainv(U(:,2),2,2) tinv(U(:,3),5)];
plot3(U(:,1),U(:,2),U(:,3),'.');   % scatter of the copula points (call implied by the labels below)
grid on; view([-55, 15]);
xlabel('U1'); ylabel('U2'); zlabel('U3');

Notice that the relationship between the linear correlation parameter rho and, for example, Kendall's tau holds for each entry in the correlation matrix Rho used here. We can verify that the sample rank correlations of the data are approximately equal to the theoretical values.

tauTheoretical = 2.*asin(Rho)./pi

tauTheoretical =
    1.0000    0.2620    0.1282
    0.2620    1.0000   -0.5903
    0.1282   -0.5903    1.0000

tauSample = corr(X, 'type','Kendall')

tauSample =
    1.0000    0.2655    0.1060
    0.2655    1.0000   -0.6076
    0.1060   -0.6076    1.0000

Copulas and Empirical Marginal Distributions

To simulate dependent multivariate data using a copula, we have seen that we need to specify 1) the copula family (and any shape parameters), 2) the rank correlations among variables, and 3) the marginal distributions for each variable.

Suppose we have two sets of stock return data, and we would like to run a Monte Carlo simulation with inputs that follow the same distributions as our data.

load stockreturns
nobs = size(stocks,1);
subplot(2,1,1); hist(stocks(:,1),10); xlabel('X1'); ylabel('Frequency');
subplot(2,1,2); hist(stocks(:,2),10); xlabel('X2'); ylabel('Frequency');

(These two data vectors have the same length, but that is not crucial.) We could fit a parametric model separately to each dataset and use those estimates as our marginal distributions. However, a parametric model may not be sufficiently flexible. Instead, we can use an empirical model for the marginal distributions. We only need a way to compute the inverse CDF.
The empirical inverse CDF for these datasets is just a stair function, with steps at the values 1/nobs, 2/nobs, ..., 1. The step heights are simply the sorted data.

invCDF1 = sort(stocks(:,1)); n1 = length(stocks(:,1));
invCDF2 = sort(stocks(:,2)); n2 = length(stocks(:,2));
stairs((1:nobs)/nobs, invCDF1,'b');
hold on;
stairs((1:nobs)/nobs, invCDF2,'r');
hold off
xlabel('Cumulative Probability'); ylabel('X');

For the simulation, we might want to experiment with different copulas and correlations. Here, we'll use a bivariate t(5) copula (matching the nu value in the code below) with a fairly large negative correlation parameter.

n = 1000; rho = -.8; nu = 5;
T = mvtrnd([1 rho; rho 1], nu, n);
U = tcdf(T,nu);
X = [invCDF1(ceil(n1*U(:,1))) invCDF2(ceil(n2*U(:,2)))];

[n1,ctr1] = hist(X(:,1),10);
[n2,ctr2] = hist(X(:,2),10);
subplot(2,2,2);
plot(X(:,1),X(:,2),'.'); axis([-3.5 3.5 -3.5 3.5]); h1 = gca;
title('1000 Simulated Dependent Values');
xlabel('X1'); ylabel('X2');
subplot(2,2,4);
bar(ctr1,-n1,1); axis([-3.5 3.5 -max(n1)*1.1 0]); axis('off'); h2 = gca;
subplot(2,2,1);
barh(ctr2,-n2,1); axis([-max(n2)*1.1 0 -3.5 3.5]); axis('off'); h3 = gca;
set(h1,'Position',[0.35 0.35 0.55 0.55]);
set(h2,'Position',[.35 .1 .55 .15]);
set(h3,'Position',[.1 .35 .15 .55]);
colormap([.8 .8 1]);

The marginal histograms of the simulated data closely match those of the original data, and would become identical as we simulate more pairs of values. Notice that the values are drawn from the original data, and because there are only 100 observations in each dataset, the simulated data are somewhat "discrete". One way to overcome this would be to add a small amount of random variation, possibly normally distributed, to the final simulated values. This is equivalent to using a smoothed version of the empirical inverse CDF.
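A minimal editorial sketch of the smoothing idea just described; the bandwidth h below is an assumed value, not from the original:

h = .1;                                % assumed smoothing bandwidth
Xsmooth = X + normrnd(0, h, size(X));  % jittered values; the marginals are
                                       % now kernel-smoothed versions of the
                                       % empirical distributions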
{"url":"http://www.mathworks.in/help/stats/examples/simulating-dependent-random-variables-using-copulas.html?nocookie=true","timestamp":"2014-04-24T19:17:28Z","content_type":null,"content_length":"56259","record_id":"<urn:uuid:d3779402-b272-45da-b0df-f5e7c31a80f6>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
The effect of particle concentration inhomogeneities on the steady flow of electro- and magneto-rheological materials

Gavin HP, Journal of Non-Newtonian Fluid Mechanics 71(3): 165-182, August 1997

The yield stresses of electro-rheological (ER) and magneto-rheological (MR) suspensions increase by orders of magnitude when electric or magnetic fields are applied across them. In the absence of the field, the materials are essentially Newtonian fluids. When ER or MR materials flow through thin laminar ducts, the effect of the finite yield stress concentrates the material deformation gradients in the immediate vicinity of the duct walls. High shear rates in this region introduce drag and lift forces on the suspended particles, the net effect of which moves the particles away from the walls. Electro- or magneto-static image forces at the walls oppose this lift. The ensuing local changes in the particulate volume fraction give rise to a local inhomogeneity in material properties adjacent to the walls. Four models for the material property inhomogeneities are presented in this paper. Three of these models admit analytical expressions for the relationship between pressure gradient and volumetric flow rate, but presume a piecewise-constant particle concentration. The fourth model presumes a smooth relationship between the volume fraction and the shear rate, but requires a numerical solution. Results are presented in terms of the ratio of pressure gradients that can be produced by applying and removing the field. Experimental data collected for a variety of quasi-steady ER flows show that the analytical solution corresponding to a flow of uniform particle concentration provides an upper bound to the pressure gradients. Each of the four models for inhomogeneous flow provides a lower bound over a sub-domain of the flow conditions. By combining these models heuristically, a single expression for the lower bound on the pressure gradients of ER and MR flows is presented. (C) 1997 Elsevier Science B.V.

Author Keywords: electrorheological, magnetorheological, inertial lift, image forces, phase separation, Bingham

Address: Gavin HP, Duke Univ, Dept of Civil & Environmental Engineering, Durham, NC 27708
{"url":"http://people.duke.edu/~hpgavin/papers/NNFM97.html","timestamp":"2014-04-16T19:07:39Z","content_type":null,"content_length":"3076","record_id":"<urn:uuid:daba9dc0-653b-4679-a839-bfdef51a60fc>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00482-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Wyckoff Math Tutor

...I have tutored SAT prep (reading, writing, and math) both privately and for the Princeton Review. I earned a BA from the University of Pennsylvania and an MA from Georgetown University. I have tutored GRE prep both privately and for the Princeton Review.
20 Subjects: including ACT Math, algebra 1, algebra 2, SAT math

...I know many people find math and science challenging and sometimes boring, but I have a lot of experience in making them fun and exciting! I'm very comfortable tutoring any topic in mathematics, physics, or chemistry. Please contact me if you'd like to work with me to improve your performance in these areas.
40 Subjects: including logic, linear algebra, econometrics, differential equations

...Moreover, I relate to many younger students. It is easy for me to get along with people and help them with any problems they have, in education and even personally if necessary. A little about me: I grew up in New York with great family and friends.
29 Subjects: including SAT math, algebra 1, geometry, algebra 2

...I would be especially interested in a tutoring opportunity that involves academic/study skills advising and coaching. I am a Russian writer, poet, translator, and editor. I authored 8 books of poetry in Russian. I am also an editor of the Storony Sveta literary journal (in Russian), as well as of the English-language version of the journal, Cardinal Points.
29 Subjects: including prealgebra, precalculus, ACT Math, trigonometry

...I am a former high school math teacher and an experienced tutor. I have taught Algebra, Trigonometry, Geometry, and PreCalculus to students of various backgrounds and ages (from as young as 5th grade to college students). I can help you with Trig! I am a former high school math teacher with many years of experience teaching hundreds of students in SAT Math.
11 Subjects: including calculus, algebra 1, algebra 2, geometry
{"url":"http://www.purplemath.com/wyckoff_nj_math_tutors.php","timestamp":"2014-04-20T06:54:25Z","content_type":null,"content_length":"23725","record_id":"<urn:uuid:083d4461-fb72-4601-905f-bd699d0b2f7a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
geometry in real life project

From Yahoo Answers

Question: I am not asking you to do my homework for me, but I would highly appreciate it if you could help me out on this project. I have to take pictures of our geometry class's lesson concepts in the real world. Could you help me find examples of the following:
-points
-planes
-line segments
-midpoint
-segment bisector
-acute angle
-obtuse angle
-right angle
-acute triangle
-obtuse triangle
-right triangle
-scalene triangle
-isosceles triangle
-equilateral triangle

Answers: There are many examples of these items in your daily life. Look around your neighborhood, or go to a supermarket. Look at billboards and at street signs (the yield sign is a good example of an equilateral triangle, and you may find obtuse angles on all sides of a stop sign). Look at your school supplies too (geometry rulers make excellent isosceles and scalene triangles). Most rooftops are angled. Check patterns on your clothing. All of these are good suggestions.

Question: I'm doing a project in geometry where we need an example of a plane used in real life. This can come from a magazine, newspaper, or online, but not hand made. Please answer!

Answers: Come on, Kali... A plane is a flat "infinite" surface. Walls, floors, ceilings, the expanse of a field of corn/wheat/soybeans. Looking up the front of a skyscraper... Anything that gives you a feeling of infinity.

Question: Does anyone know a geometry vocab word that starts with F, J, N, Q, U, W, X, Y, or Z? I have to be able to show a picture of it in real life. Due Friday. P.S. It's an ABC book of geometry.

Answers: F: figure, face. N: nonagon. Q: quadrilateral, quadratic equation. U: undecagon, unit, union. W: wedge. Hope that helped =)

Question: Could you give me some ideas on how these examples of geometry terms are represented in real life? It is supposed to be objects. Here are the terms: point, line segment, ray, opposite rays, perpendicular lines, parallel lines, acute angle, obtuse angle, right angle, vertical angles (acute only), adjacent angles (must be less than 180 degrees), linear pair. Thank you so much!

Answers: If you have to put down tile or some type of flooring, you need to be able to use these to figure out how much tile to order and then how to lay the tile so that you have to cut the least amount (I mean, really, who wants to cut tiles forever?).

From Youtube

Geometry is Everywhere Project: Just a video I made for my Geometry class. We had to find 10 examples of geometry in everyday life. Yes, I know not EVERYONE has an ammunition can lying around, just go with it...

Real-time Rendering Still Life Project, Jonathan Ruttle: A tabletop scene with a number of textured objects, each illuminated by 3 light sources using the Phong illumination model. With examples of shadows calculated per light per receiver, some tangent-space normal mapping, a free-moving 6DOF camera, a dynamically generated cube map with real-time Fresnel reflectance, and a quick example of post-processed depth of field at the end.
{"url":"http://www.edurite.com/kbase/geometry-in-real-life-project","timestamp":"2014-04-18T10:35:54Z","content_type":null,"content_length":"68937","record_id":"<urn:uuid:4409b840-5101-41a8-9ffa-2800a08c6411>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
maths sums for class ninth

Author: txumtir88
Posted: Tuesday 14th of Mar, 09:25
Registered: 03.12.2003
Howdy all, I have an exceptionally crucial test in math soon, and I will truly appreciate it if any of you can assist me in solving lots of exercises in maths sums for class ninth. I'm OK in math otherwise, but questions in logarithms puzzle me and I'm at a total loss. It would be fantastic if you could let me know of a fairly priced software program that I might use.

Author: AllejHat
Posted: Tuesday 14th of Mar, 18:06
Registered: 16.07.2003
From: Odense, Denmark
The quickest way to get this done is by employing the Algebra Buster software. This unique software furnishes an extraordinarily speedy and simple-to-learn way of completing math homework. You'll definitely begin loving mathematics once you practice and learn how simple it can be. I recall how hard a time I had with my Intermediate Algebra course, but now, with the aid of Algebra Buster, learning is much more tolerable. I'm sure you'll get guidance with your maths sums for class ninth.

Author: Ashe
Posted: Thursday 16th of Mar, 09:36
Registered: 08.07.2001
It's great to understand that you desire to better your math knowledge and are taking steps to do so. I imagine you could run Algebra Buster. This is not exactly a tutoring device, but it offers answers to mathematics assignments in an immensely step-by-step format. The sweetest thing about this product is that it's exceptionally user-friendly. There are a lot of examples given under assorted fields that are especially helpful for learning much more about a specific topic. Run it. Wish you good luck with mathematics.

Author: humimar
Posted: Friday 17th of Mar, 21:06
Registered: 17.02.2005
Seems like what I want. Where can I get hold of it?

Author: Paubaume
Posted: Sunday 19th of Mar, 10:17
Registered: 18.04.2004
From: In the stars... where you left me, and where I will wait for you... always...
I remember having often confronted exercises with function definition, quadratic inequalities, plus solving a triangle. A really fantastic piece of algebra software is the Algebra Buster application. By only typing in a problem from class, a comprehensive solution would appear with a click on Solve. I've used it over several algebra courses of study: Algebra 1, Basic Math, and Remedial Algebra. I highly endorse the program.

Author: Svizes
Posted: Tuesday 21st of Mar, 09:20
Registered: 10.03.2003
From: Slovenia
You may get hold of this program from http://www.algebra-online.com/algebra-testimonials.htm. There are many demos available to determine if it's genuinely what you need, plus, if you are interested, a sanctioned version for a reasonable sum.
{"url":"http://www.algebra-online.com/algebra-homework-3/maths-sums-for-class-ninth.html","timestamp":"2014-04-19T15:23:30Z","content_type":null,"content_length":"30264","record_id":"<urn:uuid:a34eceb3-8b2d-47df-9236-3e4ed050f245>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
Thermodynamic free energy

In thermodynamics, the term thermodynamic free energy is a measure of the amount of mechanical (or other) work that can be extracted from a system, and is helpful in engineering applications. It is obtained by subtracting the entropy term ("useless energy") from the total energy, yielding a thermodynamic state function which represents the "useful energy". In short, free energy is that portion of any First-Law energy that is available for doing thermodynamic work, i.e., work mediated by thermal energy. Since free energy is subject to irreversible loss in the course of such work and First-Law energy is always conserved, it is evident that free energy is an expendable, Second-Law kind of energy that can make things happen within finite amounts of time.

In solution chemistry and biochemistry, the Gibbs free energy change (denoted by ΔG) is commonly used merely as a surrogate for (−T times) the entropy produced by spontaneous chemical reactions in situations where there is no work done, or at least no "useful" work, i.e., other than p dV. As such, it serves as a particularization of the second law of thermodynamics, giving it the physical dimensions of energy, even though the inherent meaning in terms of entropy would be more to the point.

The free energy functions are Legendre transforms of the internal energy. For processes involving a system at constant pressure p and temperature T, the Gibbs free energy is the most useful because, in addition to subsuming any entropy change due merely to heat flux, it does the same for the p dV work needed to "make space for additional molecules" produced by various processes. (Hence its utility to solution-phase chemists, including biochemists.) The Helmholtz free energy has a special theoretical importance since it is proportional to the logarithm of the partition function for the canonical ensemble in statistical mechanics. (Hence its utility to physicists, and to gas-phase chemists and engineers, who do not want to ignore p dV work.)

The (historically earlier) Helmholtz free energy is defined as A = U − TS, where U is the internal energy, T is the absolute temperature, and S is the entropy. Its change is equal to the amount of reversible work done on, or obtainable from, a system at constant T. Thus its appellation "work content", and the designation A, from Arbeit, the German word for work. Since it makes no reference to any quantities involved in work (such as p and V), the Helmholtz function is completely general: its decrease is the maximum amount of work which can be done by a system, and it can increase at most by the amount of work done on a system.

The Gibbs free energy is G = H − TS, where H is the enthalpy (H = U + pV, where p is the pressure and V is the volume).

There has been historical controversy:
• In physics, "free energy" most often refers to the Helmholtz free energy, denoted by F.
• In chemistry, "free energy" most often refers to the Gibbs free energy, also denoted by F.

Since both fields use both functions, a compromise has been suggested, using A to denote the Helmholtz function, with G for the Gibbs function. While A is preferred by IUPAC, F is sometimes still in use, and the correct free energy function is often implicit in manuscripts and presentations.
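As a concrete illustration of the Helmholtz function as "work content" (an editorial example, not part of the original article): for $n$ moles of ideal gas expanding reversibly and isothermally from $V_1$ to $V_2$, the internal energy is unchanged (for an ideal gas, $U$ depends only on $T$), while $\Delta S = nR\ln(V_2/V_1)$, so

$\Delta A = \Delta U - T\,\Delta S = -nRT\ln(V_2/V_1)$

and the magnitude $nRT\ln(V_2/V_1)$ is exactly the maximum (reversible) work obtainable from the expansion.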
The experimental usefulness of these functions is restricted to conditions where certain variables (T, and V or external p) are held constant, although they also have theoretical importance in deriving Maxwell relations. Work other than p dV may be added, e.g., for electrochemical cells, or f·dx work in elastic materials and in muscle contraction. Other forms of work which must sometimes be considered are stress-strain, magnetic (as in adiabatic demagnetization used in the approach to absolute zero), and work due to electric polarization. These are described by tensors.

In most cases of interest there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy. Even for homogeneous "bulk" materials, the free energy functions depend on the (often suppressed) composition, as do all proper thermodynamic potentials (extensive functions), including the internal energy.

Name                    Definition          Natural variables
Helmholtz free energy   $A = U - TS$        $T, V, \{N_i\}$
Gibbs free energy       $G = U + pV - TS$   $T, p, \{N_i\}$

$N_i$ is the number of molecules (alternatively, moles) of type i in the system. If these quantities do not appear, it is impossible to describe compositional changes. The differentials for reversible processes are (assuming only pV work):

$\mathrm{d}A = -p\,\mathrm{d}V - S\,\mathrm{d}T + \sum_i \mu_i\,\mathrm{d}N_i$

$\mathrm{d}G = V\,\mathrm{d}p - S\,\mathrm{d}T + \sum_i \mu_i\,\mathrm{d}N_i$

where $\mu_i$ is the chemical potential for the i-th component in the system. The second relation is especially useful at constant T and p, conditions which are easy to achieve experimentally and which approximately characterize living creatures:

$(\mathrm{d}G)_{T,p} = \sum_i \mu_i\,\mathrm{d}N_i$

Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or it may simply be dissipated, appearing as T times a corresponding increase in the entropy of the system and/or its surroundings.

This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Thermodynamic_free_energy". A list of authors is available in Wikipedia.
{"url":"http://www.chemeurope.com/en/encyclopedia/Thermodynamic_free_energy.html","timestamp":"2014-04-17T12:43:44Z","content_type":null,"content_length":"61095","record_id":"<urn:uuid:8aeff13b-7c81-4bbb-9547-9f1d1bf74709>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Voltmeter design

Question 1: Suppose I were about to measure an unknown voltage with a manual-range voltmeter. This particular voltmeter has several different voltage measurement ranges to choose from:
• 500 volts
• 250 volts
• 100 volts
• 50 volts
• 25 volts
• 10 volts
• 5 volts
What range would be best to begin with, when first measuring this unknown voltage with the meter? Explain your answer.

Answer: Begin by setting the voltmeter to its highest range: 500 volts. Then, see if the movement needle registers anything with the meter leads connected to the circuit. Decide whether to change the meter's range based on this first indication.

Notes: I always like to have my students begin their test equipment familiarity by using old-fashioned analog multimeters. Only after they have learned to be proficient with an inexpensive meter do I allow them to use anything better (digital, auto-ranging) in their work. This forces students to appreciate what a "fancy" meter does for them, as well as teaching them basic principles of instrument ranging and measurement precision.

Question 2: What would happen to this meter movement if connected directly to a 6-volt battery?

Answer: Two things would happen: first, the movement would most likely be damaged by excessive current. Second, the needle would move to the left instead of the right (as it normally should), because the polarity is backward.

Notes: When an electromechanical meter movement is overpowered, causing the needle to "slam" all the way to one extreme end of motion, it is commonly referred to as "pegging" the meter. I've seen meter movements that have been "pegged" so badly that the needles are bent from hitting the stop! Based on your students' knowledge of meter movement design, ask them to tell you what they think might become damaged in a severe over-power incident such as this. Tell them to be specific in their answers.

Question 3: An important step in building any analog voltmeter or ammeter is to accurately determine the coil resistance of the meter movement. In electrical metrology, it is often easier to obtain extremely precise ("standard") resistance values than it is to obtain equally precise voltage or current measurements. One technique that may be used to determine the coil resistance of a meter movement without the need to accurately measure voltage or current is as follows.

First, connect a decade box type of variable resistance in series with a regulated DC power supply, then to the meter movement to be tested. Adjust the decade box's resistance so that the meter movement moves to some precise point on its scale, preferably the full-scale (100%) mark. Record the decade box's resistance setting as R1.

Then, connect a known resistance in parallel with the meter movement's terminals. This resistance will be known as Rs, the shunt resistance. The meter movement deflection will decrease when you do this. Re-adjust the decade box's resistance until the meter movement deflection returns to its former place. Record the decade box's resistance setting as R2.

The meter movement's coil resistance (Rcoil) may be calculated following this formula:

Rcoil = Rs (R1 - R2) / R2

Your task is to show where this formula comes from, deriving it from Ohm's Law and whatever other equations you may be familiar with for circuit analysis.

Hint: in both cases (decade box set to R1 and set to R2), the voltage across the meter movement's coil resistance is the same, the current through the meter movement is the same, and the power supply voltage is the same.
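Before the answer below, a quick editorial sanity check of the formula. The numbers here are arbitrary assumed values, not taken from the worksheet; the point is only that simulating the two scenarios numerically recovers the coil resistance:

% Editorial sketch: simulate both decade-box scenarios and confirm that
% Rcoil = Rs*(R1 - R2)/R2 (all values below are assumed).
V = 6; Rcoil = 500; Rs = 1000; R1 = 10000;
Im   = V/(R1 + Rcoil);          % meter current at full scale, scenario 1
Vm   = Im*Rcoil;                % meter voltage (same in both scenarios)
Itot = Im*(1 + Rcoil/Rs);       % total current once the shunt is added
R2   = (V - Vm)/Itot;           % decade-box setting that restores deflection
Rcoil_check = Rs*(R1 - R2)/R2   % recovers 500 ohms exactly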
Answer: One place to start is the voltage divider equation, V_R = V_total (R / R_total), applied to each circuit scenario. In the first scenario (decade box set to R1, no shunt):

Vmeter = V * Rcoil / (R1 + Rcoil)

In the second scenario (decade box set to R2, shunt Rs in parallel with the movement):

Vmeter = V * (Rcoil || Rs) / (R2 + (Rcoil || Rs))

Since we know that the meter's voltage is the same in the two scenarios, we may set these equations equal to each other:

Rcoil / (R1 + Rcoil) = (Rcoil || Rs) / (R2 + (Rcoil || Rs))

Note: the double bars in the above equation represent the parallel equivalent of Rcoil and Rs, for which you will have to substitute the appropriate mathematical expression, Rcoil*Rs / (Rcoil + Rs).

Notes: This problem is really nothing more than an exercise in algebra, although it also serves to show how precision electrical measurements may be obtained by using standard resistors rather than precise voltmeters or ammeters.

Question 4: Don't just sit there! Build something!!

Learning to mathematically analyze circuits requires much study and practice. Typically, students practice by working through lots of sample problems and checking their answers against those provided by the textbook or the instructor. While this is good, there is a much better way. You will learn much more by actually building and analyzing real circuits, letting your test equipment provide the "answers" instead of a book or another person. For successful circuit-building exercises, follow these steps:
1. Carefully measure and record all component values prior to circuit construction.
2. Draw the schematic diagram for the circuit to be analyzed.
3. Carefully build this circuit on a breadboard or other convenient medium.
4. Check the accuracy of the circuit's construction, following each wire to each connection point, and verifying these elements one-by-one on the diagram.
5. Mathematically analyze the circuit, solving for all values of voltage, current, etc.
6. Carefully measure those quantities, to verify the accuracy of your analysis.
7. If there are any substantial errors (greater than a few percent), carefully check your circuit's construction against the diagram, then carefully re-calculate the values and re-measure.

Avoid very high and very low resistor values, to avoid measurement errors caused by meter "loading". I recommend resistors between 1 kΩ and 100 kΩ, unless, of course, the purpose of the circuit is to illustrate the effects of meter loading! One way you can save time and reduce the possibility of error is to begin with a very simple circuit and incrementally add components to increase its complexity after each analysis, rather than building a whole new circuit for each practice problem. Another time-saving technique is to re-use the same components in a variety of different circuit configurations. This way, you won't have to measure any component's value more than once. Let the electrons themselves give you the answers to your own "practice problems"!

Notes: It has been my experience that students require much practice with circuit analysis to become proficient. To this end, instructors usually provide their students with lots of practice problems to work through, and provide answers for students to check their work against. While this approach makes students proficient in circuit theory, it fails to fully educate them. Students don't just need mathematical practice. They also need real, hands-on practice building circuits and using test equipment. So, I suggest the following alternative approach: students should build their own "practice problems" with real components, and try to mathematically predict the various voltage and current values.
This way, the mathematical theory "comes alive," and students gain practical proficiency they wouldn't gain merely by solving equations. Another reason for following this method of practice is to teach students the scientific method: the process of testing a hypothesis (in this case, mathematical predictions) by performing a real experiment. Students will also develop real troubleshooting skills as they occasionally make circuit construction errors.

Spend a few moments of time with your class to review some of the "rules" for building circuits before they begin. Discuss these issues with your students in the same Socratic manner you would normally discuss the worksheet questions, rather than simply telling them what they should and should not do. I never cease to be amazed at how poorly students grasp instructions when presented in a typical lecture (instructor monologue) format!

A note to those instructors who may complain about the "wasted" time required to have students build real circuits instead of just mathematically analyzing theoretical circuits: What is the purpose of students taking your course? If your students will be working with real circuits, then they should learn on real circuits whenever possible. If your goal is to educate theoretical physicists, then stick with abstract analysis, by all means! But most of us plan for our students to do something in the real world with the education we give them. The "wasted" time spent building real circuits will pay huge dividends when it comes time for them to apply their knowledge to practical problems. Furthermore, having students build their own practice problems teaches them how to perform primary research, thus empowering them to continue their electrical/electronics education autonomously.

In most sciences, realistic experiments are much more difficult and expensive to set up than electrical circuits. Nuclear physics, biology, geology, and chemistry professors would just love to be able to have their students apply advanced mathematics to real experiments posing no safety hazard and costing less than a textbook. They can't, but you can. Exploit the convenience inherent to your science, and get those students of yours practicing their math on lots of real circuits!

Question 5: What is a galvanometer? How might you build your own galvanometer from commonly available components?

Answer: There are several sources of information on galvanometers, both historical and modern. I leave it to you to do the research and present your findings.

Notes: It is possible to make a crude galvanometer from a large audio speaker, using the voice coil/cone assembly as the moving element. Using a small laser and a mirror, it should be easy to construct a light-beam galvanometer, for greater sensitivity. This could be a fun and educational classroom experiment!

Question 6: Describe the design and function of a PMMC-style meter movement.

Answer: "PMMC" is an acronym standing for "Permanent Magnet, Moving Coil". In essence, a PMMC meter movement is built like a small DC electric motor, with a limited range of motion.

Notes: Many textbooks provide good illustrations of PMMC meter movements. Your students may find some electronic images of PMMC meter movements on the internet. If possible, have a video projector in the classroom for projecting images like this that your students download.
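A brief quantitative footnote (an editorial addition, not part of the original answer): in a PMMC movement, the deflecting torque on a coil of N turns and area A carrying current I in a magnetic flux density B is T = N*B*I*A. This torque is balanced by a hairspring torque k*theta, so the needle settles at theta = (N*B*A/k)*I. Deflection is therefore directly proportional to coil current, which is why PMMC scales are linear and why the movement is polarity-sensitive (reversing I reverses the torque).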
Question 7: We know that connecting a sensitive meter movement directly across the terminals of a substantial voltage source (such as a battery) is a Bad Thing. So, I want you to determine what other component(s) must be connected to the meter movement to limit the current through its coil, so that connecting the circuit to a 6-volt battery results in the meter's needle moving exactly to the full-scale position.

Notes: Beginning students sometimes feel "lost" when trying to answer a question like this. They may know how to apply Ohm's Law to a circuit, but they do not know how to design a circuit that makes use of Ohm's Law for a specific purpose. If this is the case, you may direct their understanding through a series of questions such as this:
• Why does the meter movement "peg" if directly connected to the battery?
• What type of electrical component is good at limiting current?
• How might we connect this component to the meter (series or parallel)? (Draw both configurations and let the students determine for themselves which connection pattern fulfills the goal of limiting current to the meter.)
The math is simple enough in this question to allow solution without the use of a calculator. Whenever possible, I challenge students during discussion time to perform any necessary arithmetic "mentally" (i.e., without using a calculator), even if only to estimate the answer. I find many American high school graduates unable to do even very simple arithmetic without a calculator, and this lack of skill causes them no small amount of trouble. Not only are these students helpless without a calculator, but they lack the ability to mentally check their calculator-derived answers, so when they do use a calculator they have no idea whether their answer is even close to being correct.

Question 8: Calculate the necessary resistance value and power rating for R in order to make the meter movement respond as a voltmeter with a range of 0 to 100 volts.

Answer: R = 99.35 kΩ; a 1/8-watt rating will be sufficient.

Notes: This is really nothing more than a simple series circuit problem, although the context of it being a voltmeter seems to confuse some students. If you find a large percentage of your class not understanding where to begin in a problem such as this, it means they really don't understand series circuits -- all they learned to do when studying series resistor circuits before was to follow an easy sequence of steps to find voltages and currents in series resistor circuits. They did not learn the concepts well enough to abstract to something that looks just a little bit different.
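A short editorial sketch of the calculation pattern behind Questions 8 through 11. The movement's full-scale current and coil resistance are not given in the text (the schematics are missing), so the values below are assumptions chosen to be consistent with the stated 99.35 kΩ answer:

% Editorial sketch (assumed movement parameters; see note above).
Ifs   = 1e-3;                 % assumed full-scale current, 1 mA
Rcoil = 650;                  % assumed coil resistance, ohms
R = 100/Ifs - Rcoil           % multiplier for a 100 V range: 99.35 k-ohms
P = Ifs^2 * R                 % full-scale dissipation: about 0.099 W, just
                              % under a 1/8-watt (0.125 W) rating
% The same formula, vectorized over several ranges. For a series-string
% design (as in Question 11), each switch position adds resistance, so the
% individual resistors are the differences between successive totals:
ranges = [10 25 50 100 250 500];
Rmult  = ranges./Ifs - Rcoil  % one multiplier per range
Rser   = diff([0 Rmult])      % series-string resistor values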
Question 10: Calculate the necessary resistance values to give this multi-range voltmeter the ranges indicated by the selector switch positions: • R[1] = 39 k Ω • R[2] = 199 k Ω • R[3] = 499 k Ω • R[4] = 999 k Ω • R[5] = 1.999 M Ω This is really nothing more than a set of simple series circuit problems, although the context of it being a voltmeter seems to confuse some students. If you find a large percentage of your class not understanding where to begin in a problem such as this, it means they really don't understand series circuits - all they learned to do when studying series resistor circuits before is to follow an easy sequence of steps to find voltages and currents in series resistor circuits. They did not learn the concepts well enough to abstract to something that looks just a little bit different. Question 11: Calculate the necessary resistance values to give this multi-range voltmeter the ranges indicated by the selector switch positions: • R[1] = 99 k Ω • R[2] = 300 k Ω • R[3] = 600 k Ω • R[4] = 1 M Ω • R[5] = 3 M Ω Hint: if you need help getting started in this problem, begin with calculating the value of R This is really nothing more than a set of simple series circuit problems, although the context of it being a voltmeter seems to confuse some students. If you find a large percentage of your class not understanding where to begin in a problem such as this, it means they really don't understand series circuits - all they learned to do when studying series resistor circuits before is to follow an easy sequence of steps to find voltages and currents in series resistor circuits. They did not learn the concepts well enough to abstract to something that looks just a little bit different. You should point out to your students how the series arrangement of the range resistors lends itself to more common resistance values, as opposed to having a separate range resistor for each range. There is a downside to this design, however: reliability. Discuss with your students the consequences of öpen" resistor faults in both types of voltmeter designs. Question 12: Ideally, should a voltmeter have a very low input resistance, or a very high input resistance? (Input resistance being the amount of electrical resistance intrinsic to the meter, as measured between its test leads.) Explain your answer. Ideally, a voltmeter should have the greatest amount of input resistance possible. This is important when using it to measure voltage sources and voltage drops in circuits containing large amounts of The answer to this question is related to the very important principle of meter loading. Technicians, especially, have to be very aware of meter loading, and how erroneous measurements may result from it. The answer is also related to how voltmeters are connected with the circuits under test: always in parallel! Question 13: Explain what the ohms-per-volt sensitivity rating of an analog voltmeter means. Many analog voltmeters exhibit a sensitivity of 20 kΩ per volt. Is it better for a voltmeter to have a high ohms-per-volt rating, or a low ohms-per-volt rating? The öhms-per-volt" sensitivity rating of a voltmeter is an expression of how many ohms of input resistance the meter has, per range of volt measurement. The higher this figure is, the better the If students have analog voltmeters in their possession (which I greatly encourage them to have), the ohms-per-volt sensitivity rating is often found in a corner of the meter scale, in fine print. 
If not, the rating should be found in the user's guide that came with the meter.

Question 14: Fundamentally, what single factor in a voltmeter's design establishes its ohms-per-volt sensitivity rating? If your answer is, "the value of the series resistor(s)," you are incorrect.

Answer: The full-scale deflection current of the meter movement: the sensitivity in ohms per volt is the reciprocal of that current.

Notes: Students' immediate impression is that the range resistor value must establish the sensitivity rating, because they see the resistor as having the most impact on input resistance. However, some quick calculations with different range resistor values prove otherwise! Meter sensitivity is independent of any series-connected range resistor values. You might want to ask your students why meter movement coil resistance is not a factor in determining voltmeter sensitivity. Challenge your students with setting up sample circuit problems to prove the irrelevance of coil resistance to voltmeter sensitivity. Let them figure out how to set up the problems, rather than you setting up the problems for them!

Question 15: Determine the different range values of this multi-range voltmeter. All components on the printed circuit board are "surface-mount," soldered onto the top surfaces of the copper traces. The switch (SW1) schematic diagram is shown to the immediate right of the circuit board, with resistor values shown below the circuit board.

Answer: Ranges = 10 V, 25 V, and 50 V.

Notes: Determining the voltage ranges for this voltmeter is simply an exercise in Ohm's Law. The arithmetic is simple enough to permit solution without the use of calculators, so challenge your students during discussion time to work through the math "the old-fashioned way".

Question 16: What if this voltmeter suddenly stopped working when set to its middle range? The upper and lower ranges still function just fine, though. Identify the most likely source of the problem.

Answer: The middle contact in switch SW2 is open. This, despite being the most likely failure, is not the only possible failure that could cause this problem (middle range not functioning)! Challenge question: explain how you could verify the nature of the fault without using another meter.

Notes: Brainstorm some other alternative possibilities for causing the problem, along with diagnostic procedures to verify each one of them (using another meter, if necessary). Then, discuss with your students the reason why a switch failure is more likely than any of the other faults.
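Returning to Questions 13 and 14, a brief editorial calculation showing why only the movement's full-scale current matters (the 50 µA figure below is an assumed example):

% Editorial sketch: sensitivity is the reciprocal of full-scale current.
Ifs = 50e-6;    % assumed 50 uA full-scale movement
S   = 1/Ifs     % sensitivity: 20,000 ohms per volt
Rin = S * 10    % input resistance on the 10 V range: 200 k-ohms
% The multiplier is chosen as R = V/Ifs - Rcoil, so the total input
% resistance R + Rcoil equals V/Ifs no matter what Rcoil is -- which is
% why coil resistance drops out of the sensitivity rating.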
Question 18: Suppose you tried to measure the voltage at all three test points with an analog voltmeter having a sensitivity rating of 20 kΩ per volt, set on the 10 volt scale. How much voltage would it indicate at each test point? How much voltage would it ideally indicate at each test point?

│Test point│Ideal voltage│Meter indication│
│   TP1    │             │                │
│   TP2    │             │                │
│   TP3    │             │                │

Answer:

│Test point│Ideal voltage│Meter indication│
│   TP1    │    5 V      │     5 V        │
│   TP2    │   4.138 V   │    0.805 V     │
│   TP3    │   1.293 V   │    0.197 V     │

The tire-pressure analogy given in the notes to Question 17 applies equally well here. And again: no, this is not an example of Heisenberg's Uncertainty Principle, popularly misunderstood as error introduced by measurement. The Uncertainty Principle is far more profound than this!
{"url":"http://www.allaboutcircuits.com/worksheets/meters1.html","timestamp":"2014-04-17T21:23:21Z","content_type":null,"content_length":"48420","record_id":"<urn:uuid:22d3b97f-4a6e-4ee4-8b1e-1356387b15c7>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
KnittingHelp.com Forum - View Single Post - New to this site, new to knitting The different numbers are the numbers for each size. If you're knitting the very smallest size, your numbers are the first ones in each set. So let's say, the sizes for the pattern are sizes 32, [34, 36, 38, 40, 42] If someone is knitting size 40, her numbers are the 4th inside the brackets, as follows: P13, P2tog, (P20, P2tog)x1, P13. There are a lot of numbers in the pattern writing....but what will help you is to highlight the numbers that are YOUR NUMBERS for the size you're knitting. You didn't mention the size you're knitting...so I used an example that size 40 is represented by the 4th numbers inside the brackets and parentheses.
{"url":"http://www.knittinghelp.com/forum/showpost.php?p=1373970&postcount=2","timestamp":"2014-04-16T19:16:36Z","content_type":null,"content_length":"12882","record_id":"<urn:uuid:c7fd1ec8-3bb2-4ed4-88f3-1f08ad270ad0>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Chester, PA Algebra 1 Tutor Find a Chester, PA Algebra 1 Tutor ...During my time in college, I took one 3-credit course in Discrete Math. At least three of the other fourteen math courses I took also touched on topics from Discrete Math. While I was studying, I worked in the Math Center at my college. 11 Subjects: including algebra 1, calculus, algebra 2, geometry I am available to tutor all levels of mathematics from Algebra to graduate discrete mathematics. I taught mathematics at two major universities. In addition to the usual subjects, I am qualified to tutor actuarial math, statistics and probability, theoretical computer science, combinatorics and introductory graduate topics in discrete mathematics. 18 Subjects: including algebra 1, calculus, geometry, statistics ...I have produced dresses, skirts, blouses, pants, suits. I have also made gowns such as prom and wedding gowns. I have also made flower girl dresses and christening gowns. 14 Subjects: including algebra 1, geometry, SAT math, ACT Math ...Ed. As previously mentioned, with a Math and English major, I have learned many helpful tricks and tactics that I would love to share with my students to help them succeed. I also enjoy and excel in Social Studies and Science, so if you are having difficulty with those subjects in the Praxis II, I would be more than happy to help you with them as well. 15 Subjects: including algebra 1, reading, English, grammar ...I've worked both as a mechanical engineer for one of the country's nuclear power laboratories and as a teacher assistant in two school districts in Upstate New York. Before tutoring for WyzAnt, I tutored about 150 hours of math, science, and engineering before, but it was all volunteer and unpaid. I also currently work as an academic and SAT tutor for StudyPoint. 25 Subjects: including algebra 1, chemistry, calculus, physics
{"url":"http://www.purplemath.com/Chester_PA_algebra_1_tutors.php","timestamp":"2014-04-19T15:20:52Z","content_type":null,"content_length":"24158","record_id":"<urn:uuid:fd4b3dbb-564f-4fbc-a5c8-78c5333c156d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
CS276 Lecture 3: Pseudorandom Generators

Scribed by Bharath Ramsundar

Last time we introduced the setting of one-time symmetric key encryption, defined the notion of semantic security, and proved its equivalence to message indistinguishability. Today we complete the proof of equivalence (found in the notes for last class), discuss the notion of pseudorandom generator, and see that it is precisely the primitive that is needed in order to have message-indistinguishable (and hence semantically secure) one-time encryption. Finally, we shall introduce the basic definition of security for protocols which send multiple messages with the same key.

1. Pseudorandom Generators and One-Time Encryption

Intuitively, a Pseudorandom Generator is a function that takes a short random string and stretches it to a longer string which is almost random, in the sense that reasonably complex algorithms cannot differentiate the new string from truly random strings with more than negligible probability.

Definition 1 [Pseudorandom Generator] A function ${G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m}$ is a ${(t,\epsilon)}$-secure pseudorandom generator if for every boolean function ${T}$ of complexity at most ${t}$ we have

$\displaystyle \left | {\mathbb P}_{x\sim U_k } [ T(G(x)) = 1] - {\mathbb P} _{x\sim U_m} [ T(x) = 1] \right| \leq \epsilon \ \ \ \ \ (1)$

(We use the notation ${U_n}$ for the uniform distribution over ${\{ 0,1 \}^n}$.)

The definition is interesting when ${m> k}$ (otherwise the generator can simply output the first m bits of the input, and satisfy the definition with ${\epsilon=0}$ and arbitrarily large ${t}$). Typical parameters we may be interested in are ${k=128}$, ${m=2^{20}}$, ${t=2^{60}}$ and ${\epsilon = 2^{-40}}$; that is, we want ${k}$ to be very small, ${m}$ to be large, ${t}$ to be huge, and ${\epsilon}$ to be tiny. There are some unavoidable trade-offs between these parameters.

Lemma 2 If ${G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m}$ is ${(t,2^{-k-1})}$ pseudorandom with ${t = O(m)}$, then ${k\geq m-1}$.

Proof: Pick an arbitrary ${y \in \{ 0,1 \}^k}$. Define

$\displaystyle T_y(x) = 1 \Leftrightarrow x = G(y)$

It is clear that we may implement ${T_y}$ with an algorithm of complexity ${O(m)}$: all this algorithm has to do is store the value of ${G(y)}$ (which takes space ${O(m)}$) and compare its input to the stored value (which takes time ${O(m)}$), for total complexity of ${O(m)}$. Now, note that

$\displaystyle {\mathbb P}_{x\sim U_k } [ T_y(G(x)) = 1] \geq \frac{1}{2^k}$

since ${G(x) = G(y)}$ at least when ${x = y}$. Similarly, note that ${{\mathbb P} _{x\sim U_m} [ T_y(x) = 1] = \frac{1}{2^m}}$ since ${T_y(x) = 1}$ only when ${x = G(y)}$. Now, by the pseudorandomness of ${G}$, we have that ${\frac{1}{2^k} - \frac{1}{2^m} \leq \frac{1}{2^{k+1}}}$. With some rearranging, this expression implies that

$\displaystyle \frac{1}{2^{k+1}} \leq \frac{1}{2^m}$

which then implies ${m \leq k + 1}$ and consequently ${k \geq m - 1}$. ◻

Exercise 1 Prove that if ${G: \{ 0,1 \}^k \rightarrow \{ 0,1 \}^m}$ is ${(t,\epsilon)}$ pseudorandom, and ${k < m}$, then

$\displaystyle t \cdot \frac 1 \epsilon \leq O( m \cdot 2^k)$

Suppose we have a pseudorandom generator as above. Consider the following encryption scheme:

• Given a key ${K\in \{ 0,1 \}^k}$ and a message ${M \in \{ 0,1 \}^m}$,

$\displaystyle Enc(K,M) := M \oplus G(K)$

• Given a ciphertext ${C\in \{ 0,1 \}^m}$ and a key ${K\in \{ 0,1 \}^k}$,

$\displaystyle Dec(K,C) = C \oplus G(K)$

(The XOR operation is applied bit-wise.)
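As a concrete illustration (not part of the original notes), here is a minimal sketch of the scheme in Python, with SHAKE-256 standing in for the abstract generator G. That choice is made only so the code runs; the lecture makes no claim about any particular G:

# Minimal sketch of the one-time scheme Enc(K, M) = M xor G(K).
# SHAKE-256 is an arbitrary stand-in for the abstract PRG G.
import hashlib, os

def G(key: bytes, out_len: int) -> bytes:
    """Stretch a short key to out_len pseudorandom bytes."""
    return hashlib.shake_256(key).digest(out_len)

def enc(key: bytes, msg: bytes) -> bytes:
    pad = G(key, len(msg))
    return bytes(m ^ p for m, p in zip(msg, pad))

dec = enc                     # XOR is its own inverse

key = os.urandom(16)          # k = 128 bits
ct = enc(key, b"attack at dawn")
assert dec(key, ct) == b"attack at dawn"

Because enc is deterministic, enc(key, m) produces the same ciphertext every time the same key and message are used, a fact worth keeping in mind for Exercise 2 below.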
It’s clear by construction that the encryption scheme is correct. Regarding the security, we have

Lemma 3 If ${G}$ is ${(t,\epsilon)}$-pseudorandom, then ${(Enc,Dec)}$ as defined above is ${(t-m,2\epsilon)}$-message indistinguishable for one-time encryption.

Proof: Suppose that ${G}$ is not ${(t-m, 2\epsilon)}$-message indistinguishable for one-time encryption. Then ${\exists}$ messages ${M_1, M_2}$ and ${\exists}$ algorithm ${T}$ of complexity at most ${t - m}$ such that

$\displaystyle \left | {\mathbb P}_{K \sim U_k} [T(Enc(K, M_1)) = 1] - {\mathbb P}_{K \sim U_k} [T(Enc(K, M_2)) = 1] \right | > 2\epsilon$

By using the definition of ${Enc}$ we obtain

$\displaystyle \left | {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_1) = 1] - {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_2) = 1] \right | > 2\epsilon$

Now, we can add and subtract the term ${{\mathbb P}_{R \sim U_m} [T(R) = 1]}$ and use the triangle inequality to obtain that ${\left | {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_1) = 1] - {\mathbb P}_{R \sim U_m} [T(R) = 1] \right |}$ added to ${\left | {\mathbb P}_{R \sim U_m} [T(R) = 1] - {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_2) = 1] \right |}$ is greater than ${2\epsilon}$. At least one of the two terms in the previous expression must be greater than ${\epsilon}$. Suppose without loss of generality that the first term is greater than ${\epsilon}$:

$\displaystyle \left | {\mathbb P}_{K \sim U_k} [T(G(K) \oplus M_1) = 1] - {\mathbb P}_{R \sim U_m} [T(R) = 1] \right | > \epsilon$

Now define ${T'(X) = T(X \oplus M_1)}$. Then since ${H(X) = X \oplus M_1}$ is a bijection, ${{\mathbb P}_{R \sim U_m} [T'(R) = 1] = {\mathbb P}_{R \sim U_m} [T(R) = 1]}$. Consequently,

$\displaystyle \left | {\mathbb P}_{K \sim U_k} [T'(G(K)) = 1] - {\mathbb P}_{R \sim U_m} [T'(R) = 1] \right | > \epsilon$

Thus, since the complexity of ${T}$ is at most ${t - m}$ and ${T'}$ is ${T}$ plus an xor operation (which takes time ${m}$), ${T'}$ is of complexity at most ${t}$. Thus, ${G}$ is not ${(t, \epsilon)}$-pseudorandom, since there exists an algorithm ${T'}$ of complexity at most ${t}$ that can distinguish between ${G}$‘s output and random strings with probability greater than ${\epsilon}$. Contradiction. Thus ${(Enc, Dec)}$ is ${(t-m, 2\epsilon)}$-message indistinguishable. ◻

2. Security for Multiple Encryptions: Plain Version

In the real world, we often need to send more than just one message. Consequently, we have to create new definitions of security for such situations, where we use the same key to send multiple messages. There are in fact multiple possible definitions of security in this scenario. Today we shall only introduce the simplest definition.

Definition 4 [Message indistinguishability for multiple encryptions] ${(Enc,Dec)}$ is ${(t,\epsilon)}$-message indistinguishable for ${c}$ encryptions if for every ${2c}$ messages ${M_1,\ldots,M_c}$, ${M'_1,\ldots,M'_c}$ and every ${T}$ of complexity ${\leq t}$ we have

$\displaystyle | {\mathbb P} [ T(Enc(K,M_1), \ldots,Enc(K,M_c)) = 1]$

$\displaystyle -{\mathbb P} [ T(Enc(K,M'_1), \ldots,Enc(K,M'_c)) = 1] | \leq \epsilon$

Similarly, we define semantic security, and the asymptotic versions.

Exercise 2 Prove that no encryption scheme ${(Enc,Dec)}$ in which ${Enc()}$ is deterministic (such as the scheme for one-time encryption described above) can be secure even for 2 encryptions.

Encryption in some versions of Microsoft Office is deterministic and thus fails to satisfy this definition.
(This is just a symptom of bigger problems; the schemes in those versions of Office are considered completely broken.) If we allow the encryption algorithm to keep state information, then a pseudorandom generator is sufficient to meet this definition. Indeed, usually pseudorandom generators designed for such applications, including RC4, are optimized for this kind of “stateful multiple encryption.” Next time, we shall consider a stronger model of multiple message security which will be secure against Chosen Plaintext Attacks.
{"url":"http://lucatrevisan.wordpress.com/2009/01/30/cs276-lecture-3-pseudorandom-generators/","timestamp":"2014-04-17T09:42:15Z","content_type":null,"content_length":"100562","record_id":"<urn:uuid:518c11dd-6667-4c48-8043-e510b92354c3>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Diffraction and intensity of fringes Your understanding is correct. I think your textbook diagram is confusing, and partly wrong. I'll explain. I think the dotted red line is supposed to represent a two source pattern with no superimposed diffraction effects. It is wrong because (1) it omits the central fringe (2) it makes the bright fringes too sharp. The intensity should follow a 'cos squared' graph, which is sinusoidal in shape. This implies that at mid-intensity (halfway up the vertical axis) the widths of bright and dark fringes should be equal. They don't seem to be. The red solid line is the single slit diffraction pattern for a slit with a width of 2s, in which s is the distance between the slit centres used for the two slit graph. I find this confusing, because slits of this width couldn't have a separation s between their centres without merging into one wide slit. I suppose that the diagram makes no claim that the red dotted line and the red solid line should apply to the same set-up, but I'd rather they did. The blue line is the single slit diffraction pattern for a slit with a width of (2/3)s. I've no quarrel with this: two slits of this width, with centres separated by s, would not merge, and could be used to produce Young's fringes, but there seems to be no graph which shows the 'modulation' of the Young's fringes by the diffraction 'envelope'.
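For readers who want to see the missing "modulated" graph, the standard expression is I(θ) ∝ cos²(π d sinθ / λ) · sinc²(π a sinθ / λ), with d the slit separation and a the slit width. A quick numerical sketch follows; the wavelength and dimensions are illustrative only, with a = (2/3)d as for the blue curve in the textbook diagram:

# Two-slit fringes under a single-slit diffraction envelope.
# All parameter values below are illustrative assumptions.
import numpy as np

lam = 550e-9               # wavelength, m (green light)
d = 30e-6                  # slit separation s
a = (2/3) * d              # slit width, as in the blue curve
theta = np.linspace(-0.06, 0.06, 2001)

beta = np.pi * a * np.sin(theta) / lam
delta = np.pi * d * np.sin(theta) / lam
# np.sinc(x) = sin(pi x)/(pi x), so np.sinc(beta/np.pi) = sin(beta)/beta
I = np.cos(delta)**2 * np.sinc(beta / np.pi)**2   # normalized to 1 at centre

Plotting I against theta shows exactly the picture the post asks for: cos² Young's fringes whose heights are "modulated" by the broad sinc² envelope of a single slit of width a.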
{"url":"http://www.physicsforums.com/showpost.php?p=4284316&postcount=2","timestamp":"2014-04-20T05:51:12Z","content_type":null,"content_length":"8041","record_id":"<urn:uuid:48131ac5-a3b9-4f87-abaa-a34cd878e840>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
Reverse engineering of complex dynamical networks in the presence of time-delayed interactions based on noisy time series

(Figure captions from the article; the figures themselves and some inline symbols did not survive extraction.)

FIG. 1. Diagonal elements of the dynamical correlation matrix as a function of node degree k for three dynamical processes with different values of the time delay on scale-free and random networks. Square, circle, triangle and reverse triangle denote , 0.05, 0.07, and 0.09, respectively. The curves are the theoretical prediction from Eq. (15). The sizes of model networks are 100 and the average degree is 10. The noise strength is 0.1 and the coupling strength c is 0.2.

FIG. 2. Example of the distribution of the values of elements of the generalized inverse of the dynamical correlation matrix C for consensus dynamics associated with a scale-free network, where . The bimodal behavior is present for the Kuramoto model and Rössler dynamics as well.

FIG. 3. Success rate of prediction of existent links for (a) consensus dynamics, (b) Kuramoto oscillators, and (c) Rössler dynamics as a function of time delay for a number of model and real-world networks: scale-free networks (scale-free),^27 random network (random),^28 small-world network (small-world),^29 dolphin social network (dolphins),^30 network of American football games among colleges (football),^31 friendship network of karate club (karate),^32 and network of political book purchases (book).^33 Other parameters are the same as in Fig. 1. The success rate of nonexistent links is higher than 0.99 for all considered cases and thus is not shown.

FIG. 4. Predicted time delay from Eq. (19) versus the true (pre-assumed) values for the three dynamical processes on a number of model and real-world networks. The symbols denote the same networks as in Fig. 3. The lines are . Other parameters are the same as in Fig. 1.

FIG. 5. (a) Success rate of prediction of existent links as a function of the average time delay for different ranges of time delays for random consensus networks. (b) Predicted average time delay versus the original time delay for different ranges of time delay. The lines are . Other parameters are the same as in Fig. 1.
{"url":"http://scitation.aip.org/content/aip/journal/chaos/22/3/10.1063/1.4747708","timestamp":"2014-04-16T16:10:25Z","content_type":null,"content_length":"81580","record_id":"<urn:uuid:fa7cfe70-0383-4cff-906b-8ead7e1649a0>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
A math problem.

Actually it is dividing by zero that is undefined. Multiplying anything by zero equals zero.

PastryGoddess, you have an oops, I think. In 6 - 0 + 1, you can't reduce that to 6 - 1; it would be 6 + 1. The - is attached to the 0 (it's the same as saying 6 + -0 + 1). So you would get 7 instead of 5. We were both math majors.

The answer is 7, as explained by many above.

I'm still taking math classes (yay for advanced calculus and statistics) and I say 7.

> The correct answer will not be posted until tomorrow night. It is causing quite a stir on the FB radio station's page. Anyone game? 6 - 1x0 + 2 divided by 2 = ? I have no symbol on my keyboard for divide that I can see.

I (and DH) learned absolutely unquestionably that without parentheses one does math in order left to right. So as written the answer is 1. If written 6 - (1X0) + (2/2) it would be 7.

> I (and DH) learned absolutely unquestionably that without parentheses one does math in order left to right. So as written the answer is 1. If written 6 - (1X0) + (2/2) it would be 7.

You're telling me neither you nor your DH learned the order of operations? That's very odd.

"I feel sarcasm is the lowest form of wit." "It is so low, in fact, that Miss Manners feels sure you would not want to resort to it yourself, even in your own defense. We do not believe in retaliatory rudeness." Judith Martin

> You're telling me neither you nor your DH learned the order of operations? That's very odd.

http://www.math.com/school/subject2/lessons/S2U1L2DP.html

> The correct answer will not be posted until tomorrow night. [...] Anyone game? 6 - 1x0 + 2 divided by 2 = ?

Is it 6 - 1 x 0 + 2 ÷ 2 = ?, or is the whole expression divided by 2, i.e. (6 - 1 x 0 + 2) / 2 = ?

> > You're telling me neither you nor your DH learned the order of operations? That's very odd.
> http://www.math.com/school/subject2/lessons/S2U1L2DP.html

You just linked a page explaining order of operations. WillyNilly is saying that she and her DH learned to do the whole thing from left to right, instead of doing multiplication/division left to right first, then addition/subtraction left to right afterwards. Which means WillyNilly is saying that neither she nor her DH were taught the order of operations. Which I find odd.

From that page: "In Example 1, without any parentheses, the problem is solved by working from left to right and performing all the addition and subtraction.
When parentheses are used, you first perform the operations inside the parentheses, and you'll get a different answer!"
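For what it's worth, any language that implements standard operator precedence can referee the thread. A two-line Python check (purely illustrative):

print(6 - 1*0 + 2/2)            # 7.0 -- standard order of operations
print((((6 - 1) * 0) + 2) / 2)  # 1.0 -- strict left-to-right evaluation

The first line is the conventional reading and gives 7; the second shows how a strict left-to-right reading of the same symbols produces 1, which is exactly where the disagreement in the thread comes from.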
{"url":"http://www.etiquettehell.com/smf/index.php?topic=122381.15","timestamp":"2014-04-19T07:43:49Z","content_type":null,"content_length":"68014","record_id":"<urn:uuid:22beee0c-b0c3-4840-a1e7-0aac708aa795>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
A topological reflection principle equivalent to Shelah’s strong hypothesis

Posted on September 30, 2011 by saf in categories: Publications.

Abstract: We notice that Shelah’s Strong Hypothesis (SSH) is equivalent to the following reflection principle: Suppose $\mathbb X$ is an (infinite) first-countable space whose density is a regular cardinal, $\kappa$. If every separable subspace of $\mathbb X$ is of cardinality at most $\kappa$, then the cardinality of $\mathbb X$ is $\kappa$.

Citation information: A. Rinot, A topological reflection principle equivalent to Shelah’s Strong Hypothesis, Proc. Amer. Math. Soc., 136(12): 4413-4416, 2008.

In a more recent paper, the arguments of the above paper were pushed further to show that SSH is also equivalent to the following: Suppose $\mathbb X$ is a countably tight space whose density is a regular cardinal, $\kappa$. If every separable subspace of $\mathbb X$ is countable, then the cardinality of $\mathbb X$ is $\kappa$.
{"url":"http://blog.assafrinot.com/?p=318","timestamp":"2014-04-19T06:56:23Z","content_type":null,"content_length":"50323","record_id":"<urn:uuid:e304776a-91ae-4e85-9144-602ee2db1b8b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00497-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebra Word Problems

This is an Algebra plane geometric figure problem asked by an anonymous visitor, which read: "Catrina and Tom want to buy a rug for a room that is 14 by 15 feet. They want to leave an even strip of flooring uncovered around the edges of the room. How wide is the strip if they buy a rug with an area of 110 square feet?" Plane Geometric Figure Problem 3 Solution Here!

This is an Algebra number word problem asked by an anonymous visitor that read: "55 bowls of the same size capacity of food. 1. everyone gets their own bowl of soup 2. every two get one bowl of spaghetti to share 3. every three will get one bowl of salad 4. all are required to have their own helping of salad, spaghetti, and soup." Number Problem 14 Solution Here!

In this Algebra finance word problem an anonymous user asked: "An administrative assistant orders cellular phones for people in her department. The brand A phones cost $89.95 and the brand B phones $34.95. If she ordered 3 times as many brand B phones as brand A phones at a total cost of $584.40, how many of each did she order?" Finance Problem 4 Solution Here

This is an Algebra finance word problem asked by an anonymous user which read: "Alex has made 42 of the 48 payments he owes on his car but is having trouble continuing to make the payment in full. He has made an arrangement with the bank to pay two-thirds of his monthly payment, rather than the entire payment, every month. How many months will it take Alex to pay off his car loan with this new payment arrangement?" Finance Problem 3 Solution Here

If you have a general Algebra question, I just put up a new blog site (a work in progress) where you can ask general Algebra questions or review fractions, percentages, equation manipulation, the binomial formula, completing the square, etc. You can check out the Algebra Help 101 blog site here.

This is an Algebra number problem asked by an anonymous poster which read: "A maths test contains 10 questions. Ten points are given for each correct answer and three points deducted for an incorrect answer. If Ralph scored 61, how many did he get correct?" Number Problem 13 Answer Here!

This is an age Algebra word problem asked by Joshua that read: "A man is 4 years older than his wife and 5 times as old as his son. When the son was born, the age of the wife was six-sevenths that of her husband's age. Find the age of each." Age Problem 8 Answer Here!
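As a quick sanity check of the first (rug) problem, added here for illustration: a uniform strip of width x around the rug means the rug measures (14 − 2x) by (15 − 2x), so (14 − 2x)(15 − 2x) = 110, which sympy solves directly:

# Rug problem: 14 ft x 15 ft room, rug area 110 sq ft,
# uniform uncovered strip of width x around the rug.
from sympy import symbols, solve, Eq

x = symbols('x', positive=True)
sols = solve(Eq((14 - 2*x) * (15 - 2*x), 110), x)
print(sols)   # [2, 25/2]

Only x = 2 keeps the rug dimensions positive (14 − 2x > 0 requires x < 7), so the strip is 2 feet wide and the rug is 10 ft by 11 ft.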
{"url":"http://algebra-word-problems.blogspot.com/","timestamp":"2014-04-20T16:17:28Z","content_type":null,"content_length":"97633","record_id":"<urn:uuid:05ab30ae-a60d-4631-b5db-0f6d10e65150>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
nature of turning points

December 8th 2011, 09:09 AM, #1, Senior Member (Oct 2009):

dy/dx = (cos t - sin t)/(sin t - cos t)

The second derivative is 0. I am trying to find the nature of the turning points, but the second derivative can't be used, and a nature table can't be made, because dy/dx always equals -1 for any value of t.

Re: nature of turning points

When $\frac{dy}{dx}=0$, you have turning points. What can you deduce?

Re: nature of turning points

The turning points are where 0 = cos t - sin t. I already have the points. How do I find their nature?

Re: nature of turning points

You clearly didn't read Quacky's post. Notice that

\displaystyle \begin{align*} \frac{\cos{t} - \sin{t}}{\sin{t} - \cos{t}} &= \frac{-\left(\sin{t} - \cos{t}\right)}{\sin{t} - \cos{t}} \\ &= -1 \end{align*}

So the derivative is $-1$. You only have stationary points when the derivative is $0$. What does this tell you about the function?

Re: nature of turning points

The curve is a spiral and it does have turning points between 0 <= t <= 2pi. It is x = e^t sin t, y = e^t cos t.

Re: nature of turning points

The bottom line of dy/dx should have a plus symbol. I have finished this question now, thanks.
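A quick symbolic check of the thread's conclusion (illustrative, not from the original posts): for x = e^t sin t and y = e^t cos t, the chain rule gives dy/dx = (dy/dt)/(dx/dt), and the denominator indeed carries a plus sign:

# Verifying the corrected derivative with sympy.
from sympy import symbols, exp, sin, cos, diff, simplify, solve

t = symbols('t')
x = exp(t) * sin(t)
y = exp(t) * cos(t)
dydx = simplify(diff(y, t) / diff(x, t))
print(dydx)
# prints an expression equivalent to (cos t - sin t)/(sin t + cos t)
print(solve(cos(t) - sin(t), t))
# [pi/4] -- turning points occur at t = pi/4 + k*pi

With the corrected derivative, dy/dx is no longer constant, so the usual sign-change (nature-table) argument works at t = pi/4 and t = 5pi/4.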
{"url":"http://mathhelpforum.com/calculus/193788-nature-turning-points.html","timestamp":"2014-04-17T04:48:11Z","content_type":null,"content_length":"48166","record_id":"<urn:uuid:4a3d4cab-8b16-4c89-ba1a-4759c3f137f2>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Asymptotic Behavior for a Class of Nonclassical Parabolic Equations ISRN Applied Mathematics Volume 2013 (2013), Article ID 204270, 14 pages Research Article Asymptotic Behavior for a Class of Nonclassical Parabolic Equations School of Mathematics and Statistics, Northwest Normal University, Lanzhou 730070, China Received 18 May 2013; Accepted 18 June 2013 Academic Editors: Y.-K. Chang, X. Xue, and K.-V. Yuen Copyright © 2013 Yanjun Zhang and Qiaozhen Ma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This paper is devoted to the qualitative analysis of a class of nonclassical parabolic equations with critical nonlinearity, where and are two parameters. Firstly, we establish some uniform decay estimates for the solutions of the problem for , which are independent of the parameter . Secondly, some uniformly (with respect to ) asymptotic regularity about the solutions has been established for , which shows that the solutions are exponentially approaching a more regular, fixed subset uniformly (with respect to ). Finally, as an application of this regularity result, a family of finite dimensional exponential attractors has been constructed. Moreover, to characterize the relation with the reaction diffusion equation (), the upper semicontinuity, at , of the global attractors has been proved. 1. Introduction We study the long-time behavior of the following class of nonclassical parabolic equations: where is a bounded domain with smooth boundary , and are two parameters, the external force is time independent, and the nonlinearity satisfies some specified conditions later. When for the fixed constant , equation is a usual reaction-diffusion equation, and its asymptotic behavior has been studied extensively in terms of attractors by many authors; see [1–5]. For each fixed , equation is a nonclassical reaction-diffusion equation, which arises as models to describe physical phenomena such as non-Newtonian flow, soil mechanics, heat conduction; see [6–8] and references therein. Aifantis in [6] provided a quite, general approach for obtaining these equations. The asymptotic behavior of the solutions for this equation has been studied by many authors; see [9–16]. For the fixed constant , any , and the long-time behavior of the solutions of has been considered by some researchers; see [10, 13]. In [10] the author proved the existence of a class of attractors in with initial data and the upper semicontinuity of attractors in under subcritical assumptions and in the case of . In [13] similar results have been shown when and . In this paper, inspired by the ideas in [17, 18] and motivated by the dynamical results in [19–22], we study the uniform (with respect to the parameter ) qualitative analysis (a priori estimates) for the solutions of the nonclassical parabolic equations and then give some information about the relation between the solutions of and those of . Our main difficulty comes from the critical nonlinearity and the uniformness with respect to . This paper is organized as follows. In Section 2, we introduce basic notations and state our main results. In Section 3, we recall some abstract results that we will use later. In Section 4, we present several dissipative estimates about the solution of when , which hold uniformly with respect to . The main results are proved for in Section 5. 
Moreover, in Section 6, as an application, we construct a finite dimensional exponential attractor and prove the upper semicontinuity of the global attractor obtained in Section 5. 2. Main Results Before presenting our main results, we first state the basic mathematical assumptions for considering the long-time behaviors of the nonclassical parabolic equations and then introduce some notations that we will use throughout this paper.(i) with and satisfies the following conditions: where is a positive constant and is the first eigenvalue of on . The number is called the critical exponent. is not compact in this case, and this is one of the essential difficulties in studying the asymptotic regularity.(ii)Assumption on the parameters and . From the work in [18, 19], we know that a very large damping has the effect of freezing the system, if the damping acts only on the velocity , and this prevents the squeezing of the component . Therefore, the most dissipative situation occurs in between, that is, for a certain damping , which depends on the other coefficient of the equation. Therefore, in our frame, we choose such that as in order to obtain the uniformly (with respect to ) asymptotic regularity about the solutions of .(iii) with domain , and consider the family of Hilbert space , with the standard inner products and norms, respectively, In particular, and mean the inner product and norm, respectively.(iv), with the usual norm In particular, we denote and .(v)For each , we define () as and define as Then is a Banach Space for every . The global well-posedness of solutions and its asymptotic behavior for have been studied extensively under assumptions (1)-(2) by many authors in [9–14] and references therein in fact note that for each fixed . The main results of this paper are the following asymptotic regularity. Theorem 1. Under assumptions (1), (2), and , there exist a positive constant , a bounded (in ) subset , and a continuous increasing function such that, for any bounded (in) subset , where , and are all independent of , and is the semigroup generated by in . This result says that asymptotically, for each , the solutions are exponentially approaching a more regular fixed subset uniformly (with respect to ) for . Moreover, it implies the following results.(1)For each , has a global attractor in , and (2)Based on Theorem 1, applying the abstract result devised in [23, 24], for each we can prove the existence of a finite dimensional exponential attractor in . Moreover, our attraction is uniform (with respect to ) under the -norm (not only with the -norm); see Lemma 19.(3)Since the global attractor , it also implies that the fractal dimension of the global attractor is finite. Moreover, in line with Theorem 1, we prove the upper semicontinuity of at ; see Lemma 20. For the proof of Theorem 1, the main difficulty comes from the critical nonlinearity and the uniformness with respect to . Hereafter, we will also use the following notation: denote by the space of continuous increasing functions and by the space of continuous decreasing functions such that . Moreover, , , and are the generic constants, and , are generic functions, which are all independent of ; otherwise we will point out clearly. 3. Preliminaries In this section, we recall some results used in the main part of the paper. The first result comes from [17], which will be used to prove the asymptotic regularity for the case . Lemma 2 (see [17]). Let and be two Banach spaces and a -semigroup on with a bounded absorbing set . 
For every , assume that there exist two solution operators on and on satisfying the following properties.(i)For any two vectors and satisfying , (ii)There exists such that (iii)There are and such that Then, there exist positive constants , , and such that where . Next, we recall a criterion for the upper semicontinuity of attractors. Lemma 3 (see [25, 26]). Let be a family of semigroups defined on the Banach space , and for each , let have a global attractor . Assume further that is a nonisolated point of and that there exist , , and a compact set such that Then the global attractors are upper semicontinuous on at ; that is, Lemma 4 (see [27]). Let be an absolutely continuous positive function on , which satisfies for some the differential inequality for almost every , where and are functions on such that for some and , and for some . Then for some and For the proof, we refer the reader to [27, Lemma]. A standard Gronwall-type lemma will also be needed. Lemma 5. Let be an absolutely continuous positive function on , which satisfies for some the differential inequality for some and some . Then, 4. Uniformly Decaying Estimates in In this section, we always assume that (1), (2), and such that as hold and only belongs to , so all results in this section certainly hold for the case . The main purpose of this section is to deduce some dissipative estimates about the semigroups associated with in . Here, using the method in [19, 20, 22] for a strongly damped wave equation and a semilinear second order evolution equation, we will show that the radius of the absorbing set of associated with in can be chosen to be independent of . Lemma 6. There exists a positive constant , which depends only on , , and coefficients of (1)-(2), satisfying that for any and any bounded (in ) subset , there is a (which depends only on the bound of ) such that where both and are independent of . Proof. Throughout the proof, the generic constants are independent of . For clarity, we separate the proof into three claims. Claim1. There exists an which depends on , , (but independent of and ) such that where depends on but not on . Multiplying by , we have By virtue of (2), we conclude that there exists , such that At the same time, by the Hölder inequality, we get Substituting (26) and (27) into (25) and noticing as , we obtain where and is a small positive constant such that . And then applying Lemma 4 to above inequality, it follows that where , . Then, Claim 1 follows from (29) immediately. Claim2. There exists an which depends on , ,and (but is independent of and ) such that where is given in Claim 1. Noting (25) and taking in (26), it yields Then, for any , integrating (31) over and using Claim 1, we can complete this claim immediately. Claim3. Multiplying by , we find furthermore, Then, from assumptions (1)-(2), Claim 1, and using Hölder inequality, there holds On the other hand, from Claim 2 we know that for each there is a time such that where depends on . When , for each , integrating (33) over and applying (34)–(36), we obtain that Now, taking (is independent of ), we can complete our proof. Remark 7. Observing that above process of proof, we can also deduce that, for any and any , where is independent of and . Moreover, if is bounded in , then we can obtain for some constant which depends on ,. Indeed, from the fact that there is a constant such that for any , (39) can be obtained just by repeating the proof of Lemma 6 and taking in (35) since is bounded in . 
On the other hand, from the proof of Claim 3 as follows, we can get further estimates about Lemma 8. There exists a positive constant such that for any and any bounded (in ) subset , where , , is the time given in Claim 1, and only depends on but is independent of and . Proof. By differentiation of , we can obtain the following equation: Multiplying (42) by , we have When , using Lemma 6, there holds So, we obtain Therefore, as , for , integrating (45) over and substituting (40), we can complete our proof at once. For later applications, we present some Hölder continuity of in . Lemma 9. For any bounded subset , there exists a constant which depends only on and such that Proof. Let and be the solutions of corresponding to the initial data and. Then the difference satisfies with initial data . For (46), multiplying (48) by , we have where we used (38). Then, when applying Gronwall lemma, we can obtain (46). For (47), when , multiplying (48) by and combining with Lemma 8, we have Hence, by (47) we complete the proof. Hereafter, we denote the uniformly (with respect to ) bounded absorbing set obtained in Lemma 6 as , that is, and denote the time by such that Lemmas 6 and 8 hold for ; that is, holds for any and all . Moreover, similar to Remark 7, noting now that is bounded in , we have 5. Proof of the Main Results Throughout this section, we always assume that (1), (2), and hold for . 5.1. Decomposition of the Equation For the nonlinear function satisfying (1)-(2), from [12, 17, 19, 22] for our situation we know that allows the following decomposition , where and satisfy Now, decomposing the solution into the sum for any and any , where and are the solutions of the following equations: Applying the general results in [9, 12, 14], we know that both (59) and (60) are global well-posed in , and also forms a semigroup. Moreover, as in Section 4, we can deduce a similar estimate for in , and so . There exist constants ( is given in Lemma 6) and such that for any and any , 5.2. The First A Priori Estimate We begin with the decay estimates for the solution of (59). Lemma 10. There exists a constant and such that where both and are independent of . Proof. Multiplying (59) by , we have By means of (55), it follows that . Therefore, there exists such that for all and any . As a result, we multiply (59) by and obtain Then integrating with (55), (61), (62), and (65), we conclude Thus, using the following Lemma 11 with (67), allows us to complete our proof by taking and some increasing function . Lemma 11. Let be a continuous semigroup on the Banach space , satisfying Its proof is obvious and we omit it here. The next estimate is about the solution of (60). Lemma 12. For every (given) and any , there is a positive constant which only depends on ,and such that the solutions of (60) satisfy where both are independent of , and . Proof. Multiplying (60) by and integrating over , Then the proof is completely similar to that in [12,Lemma], so, we omit it. Based on Lemmas 10 and 12, following the idea in Zelik [21], we can now decompose as follows. Lemma 13. Let be the solution of corresponding to the initial data . Then, for any , we can decompose as where and satisfy the following estimates: with the constants and depending on , and , but both independent of . Proof. The proof is completely similar to that of [12, Lemma] and [22, Lemma], since the estimates in Lemmas 10 and 12 hold uniformly with respect to . 
Note that in the above decomposition in Lemma 13, we can require further that satisfies the following: there is a constant which depends only on , such that 5.3. The Second A Priori Estimate The main purpose of this subsection is to deduce some uniformly asymptotic (with respect to and ) the a priori estimates about the solution of . Lemma 14. There exists positive constants , , and such that for each , there is a subset satisfying and the exponential attraction where all , and are independent of , and denotes the Hausdorff semidistance with respect to the -norm. Proof. It is convenient to separate our proof into three steps. We emphasize, especially, that all the generic constants in the proof are independent of . Step1. We first claim that (recall ): , and such that for each , there is a subset satisfying and the exponential attraction We will apply Lemma 2 with and (note that for any ). From (54), we can write For any and , satisfying , we decompose the solution of as , where which uniquely solves the following equations, respectively: with and , and is the solution of (59) corresponding to the initial data . For (80), from (54),(56),(78), and Lemmas 10 and 12, we can directly calculate that where , is given in Lemma 10. Multiplying by (80), we have Furthermore, using the similar estimates of Lemma 6, we get where is a small positive constant such that for all . And then applying Lemma 5 to above inequality, there holds For (81), since then Using Hölder inequality we get where we used (53), (62), and Lemmas 10 and 12. Hence, multiplying by (83), we have Furthermore, we have where is a small positive constant given in (84). Then, using Lemma 5 we obtain Therefore, combining (85) and (91), we can verify that all the conditions of Lemma 2 are satisfied for the cases , , and . Moreover, since there is a (independent of ) such that for any and the constants in our estimates are all independent of ; consequently, , , and are all independent of , and then we can deduce our claim. Step2. We claim that there exists a constant which depends only on such that Multiplying by , we only need to note the following: First, since , we have and then while where we used (73). Moreover, since , we have and then where is given in Lemma 13. Hence, substituting the above estimates into (93), applying the Poincaré inequality we have Then using the Gronwall inequality and integrating over (from Lemma 12), we obtain Taking (in Lemma 13) small enough such that , we have Substituting above (100) and (102) into (99), we get that for all Step3. Based on Step 1 and Step2, applying the attraction transitivity lemma given in [28, Theorem] and noticing the Holder continuity Lemma 9, we can prove our lemma by performing a standard bootstrap argument, whose proof is now simple since Step 1 makes the nonlinear term become subcritical to some extent. 5.4. Proof of Theorem 1 Lemma 14 has shown some asymptotic regularities; however, the radius of depends on and the distances only under the -norm. To prove Theorem 1, we first give two lemmas as preliminary. Lemma 15. There exsits a constant such that for any bounded (in ) subset , there exsits such that Proof. Multiplying by , we find Noting , from Lemma 6, yields hence, we obtain where is a small, positive constant. Similarly, with using Lemma 4 we finally complete the proof. Lemma 16. There exists a constant such that for any bounded (in ) subset , there is a such that Proof. From Lemma 15, we only need to estimate that the bound of is independent of . 
Applying Lemma 15 again, we have Taking which may provide that and , integrating (109) on , and from Lemma 15, when we yield Hence, multiplying by , we can complete our proof by applying the uniform Gronwall lemma. Now, we are ready to prove Theorem 1. Proof of Theorem 1. Set where the constant comes from Lemma 16. From Lemmas 16 and 14, we know that there is a such that (recall that is given in (78)) for all and any . On the other hand, note that such that Then, from Lemma 9, there exists which depends only on and (so only on , ) such that Therefore, from Lemma 14, we have Hence, noting that , and are all fixed, we can complete the proof by taking and applying Lemma 11. 6. Applications of Theorem 1 As for the applications of Theorem 1, in this subsection, we consider the existence of finite dimensional exponential attractors and the upper semicontinuity of global attractors for problem under assumptions (1), (2), and . 6.1. A Priori Estimates For the subset defined in (113), and from Lemmas 6 and 8 we know that there is a such that where . Now, for each , define as follows: where is the time given in Lemma 16 corresponding to . Then, for each we have
{"url":"http://www.hindawi.com/journals/isrn.applied.mathematics/2013/204270/","timestamp":"2014-04-18T23:51:28Z","content_type":null,"content_length":"1048747","record_id":"<urn:uuid:7dce6364-e635-488e-834c-17d22ea1e332>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
Skyscraper Puzzle online on flash, 23 levels! Play Now: Math Games, Logic Games, Board Games, Puzzle Games with high scores. IQ Flash - Our games will make you Brilliant. Skyscraper Puzzle online on flash, 23 levels! In a skyscraper puzzle, you have to fill each square with an integer from 1 to N where N is the size of the puzzle (the size of the grid). • Every square contains a skyscraper. • Complete the grid such that every row and column contains the numbers 1 to N. • Every row/column contains each number exactly once. No number may appear twice in any row or column. • The clues around the grid tell you how many skyscrapers you can see. They indicate the number of buildings which you would see from that direction. • You can't see a shorter skyscraper behind a taller one. For example, consider the row 1, 4, 5, 2, 3. From the left, buildings 1, 4, and 5 are visible. So, the clue would be 3. From the right, buildings 3 and 5 are visible giving a clue of 2. Here is a completed puzzle 4x4: Game control: • Click on an appropriate square to set height of the skyscraper. • Click Store to save current position. • Click Restore to restore last saved position. • Click Check to checking the correctness of your choice. Click here when you think that you have solved the puzzle. • Click the red arrows to increase/decrease the level of the puzzle. • Click the cyan button in upper left window to clear current position. • Click the cyan button around the grid to clear appropriate row/column.
{"url":"http://www.iqflash.com/skyscraper-puzzle.shtml","timestamp":"2014-04-21T12:20:37Z","content_type":null,"content_length":"5932","record_id":"<urn:uuid:b1a23455-6f39-4a3a-853f-4292b12a05af>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: arXiv:math.GT/9712255v21Mar1999 Annals of Mathematics, 149 (1999), 497­510 Scharlemann's manifold is standard By Selman Akbulut* Dedicated to Robion Kirby on the occasion of his 60th In his 1974 thesis, Martin Scharlemann constructed a fake homotopy equivalence from a closed smooth manifold f : Q S3 × S1#S2 × S2, and asked the question whether or not the manifold Q itself is diffeomorphic to S3 × S1#S2 × S2. Here we answer this question affirmatively. In [Sc] Scharlemann showed that if 3 is the Poincar´e homology 3-sphere, by surgering the 4-manifold × S1 , along a loop in × 1 × S1 generating the fundamental group of , one obtains a closed smooth manifold Q and homotopy equivalence: f : Q - S3 × S1
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/348/0067998.html","timestamp":"2014-04-18T09:12:46Z","content_type":null,"content_length":"7837","record_id":"<urn:uuid:22b43893-fefe-4eee-ab39-c5ffe5587068>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
A. Higher order correlation functions The two-point correlation function is not a unique descriptor of clustering, it is merely the first of an infinite hierarchy of such descriptors describing the galaxy distribution of galaxies taken N at a time. Two quite different distributions can have the same two-point correlation function. In particular, the fact that a point distribution generated by any random walk (e.g., as a Lévy flight as proposed by Mandlebrot (1975) has the correct two-point correlation function does not mean much unless other statistical measures of clustering are tested. The present day galaxy distribution is manifestly not a Gaussian random process: there is, for example, no symmetry about the mean density. This fact alone tells us that there is more to galaxy clustering than the two-point correlation function. So what kind of descriptors should we look for? Generalizations of the two-point functions to 3-, 4- and higher order functions are certainly possible, but they are difficult to calculate and not particularly edifying. However, they do the job of providing some of the needed extra information and through such constructs as the BBGKY hierarchy ^9 they do relate to the underlying physics of the clustering process. We shall describe the observed scaling of the 3-point correlation function below. One alternative is to go for different clustering models: anything but correlation functions. These may have the virtue of providing immediate gratification in terms of visualization of the process, but they are often difficult to relate to any kind of dynamical process. If we knew all higher order correlation functions we would have a complete description of the galaxy clustering process. However, calculating an estimate of a two point function from a sample of N galaxies requires taking all pairs from the sample of N, while calculating a three point functions requires taking all triples from N. The amount of computation escalates rapidly and restrictions have to be imposed on what is actually being calculated. Nevertheless, calculating restricted N-point functions may be useful: these functions may be related to one another and have interesting scale dependence. Gaztañaga (1992) has calculated restricted N -point functions and showed that these have power law behavior over the range of scales where they can be determined. ^9 The BBGKY hierarchy, (after Bogolyubov, Born, Green, Kirkwood and Yvon), is an infinite chain of equations adapted from plasma physics (Ichimaru, 1992) to describe self-gravitating non-linear clustering (see for example Fall and Severne (1976), Peebles (1980), and Saslaw (2000).) Back.
{"url":"http://ned.ipac.caltech.edu/level5/March04/Jones/Jones6.html","timestamp":"2014-04-17T12:41:57Z","content_type":null,"content_length":"4572","record_id":"<urn:uuid:13c5c9b7-bda6-4818-a56c-9b3c4ed462f4>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
A PPENDIX B U NITS , M EASUREMENTS AND C ON - M easurements are comparisons. The standard used for the comparison is called a unit. any different systems of units have been used throughout the world. Unit systems are standards; they always confer a lot of power to the organisation in charge of them, as can be seen most clearly in the computer industry; in the past the same applied to measure- ment units. To avoid misuse by authoritarian institutions, to eliminate at the same time all problems with differing, changing and irreproducible standards, and – this is not a joke – to simplify tax collection, already in the 18th century scientists, politicians and economists e e have agreed on a set of units. It is called the Syst` me International d’Unit´ s, abbreviated SI, and is defined by an international treaty, the ‘Convention du M` tre’. The units are main- e e e tained by an international organisation, the ‘Conf´ rence G´ n´ rale des Poids et Mesures’, and its daughter organisations, the ‘Commission Internationale des Poids et Mesures’ and the ‘Bureau International des Poids et Mesures’, which all originated in the times just before the French revolution. Ref. 975 All SI units are built from seven base units whose official definitions, translated from French into English, are the following, together with the date of their formulation: ‘The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.’ (1967) ∗ ‘The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second.’ (1983) ‘The kilogram is the unit of mass; it is equal to the mass of the international prototype of the kilogram.’ (1901) ∗ ‘The ampere is that constant current which, if maintained in two straight parallel con- ductors of infinite length, of negligible circular cross-section, and placed 1 metre apart in vacuum, would produce between these conductors a force equal to 2·10−7 newton per metre of length.’ (1948) ‘The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermo- dynamic temperature of the triple point of water.’ (1967) ∗ ‘The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12.’ (1971) ∗ ‘The candela is the luminous intensity, in a given direction, of a source that emits mono- chromatic radiation of frequency 540 · 1012 hertz and has a radiant intensity in that direction Appendix B Units, Measurements and Constants 931 of (1/683) watt per steradian.’ (1979) ∗ Note that both time and length units are defined as certain properties of a standard ex- ample of motion, namely light. This is an additional example making the point that the observation of motion as the fundamental type of change is a prerequisite for the defini- tion and construction of time and space. By the way, the proposal of using light was made already in 1827 by Jacques Babinet. ∗ From these basic units, all other units are defined by multiplication and division. In this way, all SI units have the following properties: They form a system with state-of-the-art precision; all units are defined in such a way that the precision of their definition is higher than the precision of commonly used meas- urements. Moreover, the precision of the definitions are regularly improved. 
The present relative uncertainty of the definition of the second is around 10−14 , for the metre about 10−10 , for the ampere 10−7 , for the kilogram about 10−9 , for the kelvin 10−6, for the mole less than 10−6 and for the candela 10−3 . They form an absolute system; all units are defined in such a way that they can be repro- duced in every suitably equipped laboratory, independently, and with high precision. This avoids as much as possible any misuse by the standard setting organisation. (At present, the kilogram, still defined with help of an artefact, is the last exception to this requirement; ex- tensive research is under way to eliminate this artefact from the definition – an international race that will take a few more years. A definition can be based only on two ways: counting particles or fixing h. The former can be achieved in crystals, the latter using any formula where h appears, such as the de Broglie wavelength, Josephson junctions, etc.) They form a practical system: base units are adapted to daily life quantities. Frequently used units have standard names and abbreviations. The complete list includes the seven base units, the derived, the supplementary and the admitted units: The derived units with special names, in their official English spelling, i.e. without capital letters and accents, are: name abbreviation & definition name abbreviation & definition hertz Hz = 1/s newton N = kg m/s2 pascal Pa = N/m2 = kg/m s2 joule J = Nm = kg m2 /s2 watt W = kg m2 /s3 coulomb C = As volt V = kg m2 /As3 farad F = As/V = A2 s4 /kg m2 ohm Ω = V/A = kg m2 /A2 s3 siemens S = 1/Ω weber Wb = Vs = kg m2/As2 tesla T = Wb/m2 = kg/As2 = kg/Cs henry H = Vs/A = kg m2 /A2 s2 degree Celsius ∗ ◦C ∗ The international prototype of the kilogram is a platinum–iridium cylinder kept at the BIPM in S` vres, in Ref. 976 France. For more details on the levels of the caesium atom, consult a book on atomic physics. The Celsius scale of temperature θ is defined as: θ /◦ C = T /K − 273.15 ; note the small difference with the number appearing in the definition of the kelvin. When the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles. In its definition, it is understood that the carbon 12 atoms are unbound, at rest and in their ground state. The frequency of the light in the definition of the candela corresponds to 555.5 nm, i.e. green colour, and is the wavelength for which the eye is most sensitive. ∗ Jacques Babinet (1794–1874), French physicist who published important work in optics. Motion Mountain www.motionmountain.net Copyright c Christoph Schiller November 1997 – September 2003 932 Appendix B Units, Measurements and Constants name abbreviation & definition name abbreviation & definition lumen lm = cd sr lux lx = lm/m2 = cd sr/m2 becquerel Bq = 1/s gray Gy = J/kg = m2 /s2 sievert Sv = J/kg = m2 /s2 katal kat = mol/s We note that in all definitions of units, the kilogram only appears to the powers of 1, 0 and -1. The final explanation for this fact appeared only recently. P. 896 The radian (rad) and the steradian (sr) are supplementary SI units for angle, defined as the ratio of arc length and radius, and for solid angle, defined as the ratio of the subtended area and the square of the radius, respectively. The admitted non-SI units are minute, hour, day (for time), degree 1◦ = π /180 rad, minute 1 = π /10 800 rad, second 1 = π /648 000 rad (for angles), litre and tonne. 
All other units are to be avoided. All SI units are made more practical by the introduction of standard names and abbreviations for the powers of ten, the so-called prefixes: ∗
power name abbr. | power name abbr.
10^1 deca da | 10^-1 deci d
10^2 hecto h | 10^-2 centi c
10^3 kilo k | 10^-3 milli m
10^6 Mega M | 10^-6 micro µ
10^9 Giga G | 10^-9 nano n
10^12 Tera T | 10^-12 pico p
10^15 Peta P | 10^-15 femto f
10^18 Exa E | 10^-18 atto a
10^21 Zetta Z | 10^-21 zepto z
10^24 Yotta Y | 10^-24 yocto y
unofficial (Ref. 977):
10^27 Xenta X | 10^-27 xenno x
10^30 Wekta W | 10^-30 weko w
10^33 Vendekta V | 10^-33 vendeko v
10^36 Udekta U | 10^-36 udeko u
SI units form a complete system; they cover in a systematic way the complete set of observables of physics. Moreover, they fix the units of measurement for physics and for all other sciences as well. They form a universal system; they can be used in trade, in industry, in commerce, at home, in education and in research. They could even be used by other civilisations, if they existed. They form a coherent system; the product or quotient of two SI units is also an SI unit. This means that in principle, the same abbreviation 'SI' could be used for every SI unit.
∗ Some of these names are invented (yocto to sound similar to Latin octo 'eight', zepto to sound similar to Latin septem, yotta and zetta to resemble them, exa and peta to sound like the Greek words for six and five, the unofficial ones to sound similar to the Greek words for nine, ten, eleven and twelve), some are from Danish/Norwegian (atto from atten 'eighteen', femto from femten 'fifteen'), some are from Latin (from mille 'thousand', from centum 'hundred', from decem 'ten', from nanus 'dwarf'), some are from Italian (from piccolo 'small'), some are Greek (micro is from μικρός 'small', deca/deka from δέκα 'ten', hecto from ἑκατόν 'hundred', kilo from χίλιοι 'thousand', mega from μέγας 'large', giga from γίγας 'giant', tera from τέρας 'monster'). Challenge 1299 e Translate: I was caught in such a traffic jam that I needed a microcentury for a picoparsec and that my car's fuel consumption was two tenths of a square millimetre.
The SI units are not the only possible set that fulfils all these requirements, but they form the only existing system doing so. ∗ We remind that since every measurement is a comparison with a standard, any measurement requires matter to realise the standard (yes, even for the speed standard) and radiation to achieve the comparison. Our concept of measurement thus assumes that matter and radiation exist and can be clearly separated from each other. P. 850
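Because every SI unit reduces to a product of powers of the seven base units, both the coherence property and the prefix system are easy to check mechanically. Here is a minimal Python sketch of that bookkeeping; the unit exponents are transcribed from the derived-unit table above, and the prefix helper covers only a subset of the full prefix table for brevity (the sketch is an illustration, not part of the original text):

import math

# Each unit is a dict of exponents over the SI base units,
# transcribed from the derived-unit table (e.g. V = kg m^2 / (A s^3)).
units = {
    'Hz': {'s': -1},
    'N':  {'kg': 1, 'm': 1, 's': -2},
    'J':  {'kg': 1, 'm': 2, 's': -2},
    'W':  {'kg': 1, 'm': 2, 's': -3},
    'V':  {'kg': 1, 'm': 2, 'A': -1, 's': -3},
    'A':  {'A': 1},
}

def product(u, v):
    # Exponents of a product of units; coherence says this is again an SI unit.
    out = dict(u)
    for b, e in v.items():
        out[b] = out.get(b, 0) + e
    return {b: e for b, e in out.items() if e != 0}

assert product(units['V'], units['A']) == units['W']   # volt x ampere = watt

PREFIXES = {9: 'G', 6: 'M', 3: 'k', 0: '', -3: 'm', -6: 'µ', -9: 'n'}

def with_prefix(value, unit):
    # Nearest thousands-power prefix from the table above.
    exp = min(max(int(math.floor(math.log10(abs(value)) / 3)) * 3, -9), 9)
    return f'{value / 10**exp:g} {PREFIXES[exp]}{unit}'

print(with_prefix(2.0e9, 'J'))    # '2 GJ'
print(with_prefix(0.0326, 'm'))   # '32.6 mm'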
Planck's natural units
Since the exact form of many equations depends on the used system of units, theoretical physicists often use unit systems optimised for producing simple equations. In microscopic physics, the system of Planck's natural units is frequently used. They are automatically introduced by setting $c = 1$, $\hbar = 1$, $G = 1$, $k = 1$, $\varepsilon_o = 1/4\pi$ and $\mu_o = 4\pi$ in equations written in SI units. Planck units are thus defined from combinations of fundamental constants; those corresponding to the fundamental SI units are given in the table. ∗∗ The table is also useful for converting equations written in natural units back to SI units: every quantity $X$ is substituted by $X/X_{\rm Pl}$.
Table 69 Planck's natural units
Name | definition | value
Basic units
the Planck length | $l_{\rm Pl} = \sqrt{\hbar G/c^3}$ | = 1.616 0(12) · 10^-35 m
the Planck time | $t_{\rm Pl} = \sqrt{\hbar G/c^5}$ | = 5.390 6(40) · 10^-44 s
the Planck mass | $m_{\rm Pl} = \sqrt{\hbar c/G}$ | = 21.767(16) µg
the Planck current | $I_{\rm Pl} = \sqrt{4\pi\varepsilon_o c^6/G}$ | = 3.479 3(22) · 10^25 A
the Planck temperature | $T_{\rm Pl} = \sqrt{\hbar c^5/G k^2}$ | = 1.417 1(91) · 10^32 K
Trivial units
the Planck velocity | $v_{\rm Pl} = c$ | = 0.3 Gm/s
the Planck angular momentum | $L_{\rm Pl} = \hbar$ | = 1.1 · 10^-34 Js
the Planck action | $Sa_{\rm Pl} = \hbar$ | = 1.1 · 10^-34 Js
∗ Most non-SI units still in use in the world are of Roman origin: the mile comes from 'milia passum' (used to be one thousand strides of about 1480 mm each; today a nautical mile, after having been defined as a minute of arc, is exactly 1852 m), inch comes from 'uncia/onzia' (a twelfth – now of a foot); pound (from pondere 'to weigh') is used as a translation of 'libra' – balance – which is the origin of its abbreviation lb; even the habit of counting in dozens instead of tens is Roman in origin. These and all other similarly funny units – like the system in which all units start with 'f' and which uses furlong/fortnight as unit for velocity – are now officially defined as multiples of SI units.
∗∗ The natural units $x_{\rm Pl}$ given here are those commonly used today, i.e. those defined using the constant $\hbar$, and not, as Planck originally did, by using the constant $h = 2\pi\hbar$. A similar, additional freedom of choice arises for the electromagnetic units, which can be defined with other factors than $4\pi\varepsilon_o$ in the expressions; for example, using $4\pi\varepsilon_o\alpha$, with the fine structure constant $\alpha$, gives $q_{\rm Pl} = e$. For the explanation of the numbers between brackets, the standard deviations, see page 939.
Name | definition | value
the Planck entropy | $Se_{\rm Pl} = k$ | = 13.8 yJ/K
Composed units
the Planck mass density | $\rho_{\rm Pl} = c^5/G^2\hbar$ | = 5.2 · 10^96 kg/m^3
the Planck energy | $E_{\rm Pl} = \sqrt{\hbar c^5/G}$ | = 2.0 GJ = 1.2 · 10^28 eV
the Planck momentum | $p_{\rm Pl} = \sqrt{\hbar c^3/G}$ | = 6.5 Ns
the Planck force | $F_{\rm Pl} = c^4/G$ | = 1.2 · 10^44 N
the Planck power | $P_{\rm Pl} = c^5/G$ | = 3.6 · 10^52 W
the Planck acceleration | $a_{\rm Pl} = \sqrt{c^7/\hbar G}$ | = 5.6 · 10^51 m/s^2
the Planck frequency | $f_{\rm Pl} = \sqrt{c^5/\hbar G}$ | = 1.9 · 10^43 Hz
the Planck electric charge | $q_{\rm Pl} = \sqrt{4\pi\varepsilon_o c\hbar}$ | = 1.9 aC = 11.7 e
the Planck voltage | $U_{\rm Pl} = \sqrt{c^4/4\pi\varepsilon_o G}$ | = 1.0 · 10^27 V
the Planck resistance | $R_{\rm Pl} = 1/4\pi\varepsilon_o c$ | = 30.0 Ω
the Planck capacitance | $C_{\rm Pl} = 4\pi\varepsilon_o\sqrt{\hbar G/c^3}$ | = 1.8 · 10^-45 F
the Planck inductance | $L_{\rm Pl} = (1/4\pi\varepsilon_o)\sqrt{\hbar G/c^7}$ | = 1.6 · 10^-42 H
the Planck electric field | $E_{\rm Pl} = \sqrt{c^7/4\pi\varepsilon_o\hbar G^2}$ | = 6.5 · 10^61 V/m
the Planck magnetic flux density | $B_{\rm Pl} = \sqrt{c^5/4\pi\varepsilon_o\hbar G^2}$ | = 2.2 · 10^53 T
The natural units are important for another reason: whenever a quantity is sloppily called 'infinitely small (or large)', the correct expression is 'as small (or large) as the corresponding Planck unit'. As explained in special relativity, general relativity and quantum theory (P. 768, the third part), this substitution is correct because almost all Planck units provide, within a factor of the order 1, the extreme value for the corresponding observable. Unfortunately, these factors have not entered the mainstream yet; if $G$ is substituted by $4G$, $\hbar$ by $\hbar/2$ and $4\pi\varepsilon_o$ by $8\pi\varepsilon_o\alpha$ in all formulae, the exact extremal value for each observable in nature is obtained. These extremal values are the true natural units. Exceeding extremal values is possible only for extensive quantities, i.e. for those quantities for which many-particle systems can exceed single-particle limits, such as mass or electrical resistance.
Other unit systems
In fundamental theoretical physics another system is also common. One aim of research being the calculation of the strength of all interactions, setting the gravitational constant G to unity, as is done when using Planck units, makes this aim more difficult to express in equations. Therefore one often only sets $c = \hbar = k = 1$ and $\mu_o = 1/\varepsilon_o = 4\pi$, ∗ leaving only the gravitational constant G in the equations. In this system, only one fundamental unit exists, but its choice is still free. Often a standard length is chosen as fundamental unit, length being the archetype of a measured quantity. The most important physical observables are then related by
$[l] = 1/[E] = [t] = [C] = [L]$,
$1/[l] = [E] = [m] = [p] = [a] = [f] = [I] = [U] = [T]$,
$[l]^2 = 1/[E]^2 = [G] = [P] = 1/[B] = 1/[E_{\rm el.}]$ and
$1 = [v] = [q] = [e] = [R] = [S_{\rm action}] = [S_{\rm entropy}] = \hbar = c = k = [\alpha]$,
with the usual convention to write $[x]$ for the unit of quantity $x$. Using the same unit for speed and electric resistance is not to everybody's taste, however, and therefore electricians do not use this system.
∗ Other definitions for the proportionality constants in electrodynamics lead to the Gaussian unit system often used in theoretical calculations, the Heaviside–Lorentz unit system, the electrostatic unit system and the electromagnetic unit system, among others. For more details, see the standard text by John David Jackson, Classical Electrodynamics, 3rd edition, Wiley.
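The entries of Table 69 can be re-derived in a few lines. The following Python sketch recomputes the basic Planck units from the constants quoted in Table 70 further below; it is only a consistency check on the table, not part of the original text:

from math import sqrt, pi

c    = 299792458.0          # m/s
hbar = 1.054571596e-34      # J s
G    = 6.673e-11            # N m^2 / kg^2
k    = 1.3806503e-23        # J/K
eps0 = 8.854187817e-12      # F/m

l_Pl = sqrt(hbar * G / c**3)              # ~1.616e-35 m
t_Pl = sqrt(hbar * G / c**5)              # ~5.39e-44 s
m_Pl = sqrt(hbar * c / G)                 # ~2.18e-8 kg, i.e. 21.8 µg
T_Pl = sqrt(hbar * c**5 / (G * k**2))     # ~1.42e32 K
I_Pl = sqrt(4 * pi * eps0 * c**6 / G)     # ~3.48e25 A

print(l_Pl, t_Pl, m_Pl, T_Pl, I_Pl)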
∗ In many situations, in order to get an impression of the energies needed to observe the effect under study, a standard energy is chosen as fundamental unit. In particle physics the common energy unit is the electron Volt (eV), defined as the kinetic energy acquired by an electron when accelerated by an electrical potential difference of 1 Volt (‘proton Volt’ would be a better name). Therefore one has 1 eV=1.6 · 10−19 J, or roughly 1 eV ≈ 1 6 aJ (666) which is easily remembered. The simplification c = h = 1 yields G = 6.9 · 10−57 eV−2 and allows to use the unit eV also for mass, momentum, temperature, frequency, time and length, with the respective correspondences 1 eV = 1.8 · 10−36 kg = 5.4 · 10−28 Ns = 242 THz Challenge 1301 e = 11.6 kK and 1 eV−1 = 4.1 fs = 1.2 Ñm. To get some feeling for the unit eV, the following relations are useful. Room temperature, usually taken as 20 ◦ C or 293 K, corresponds to a kinetic energy per particle of 0.025 eV or 4.0 zJ. The highest particle energy measured so far is a cosmic ray of energy of 3 · 1020 eV or 48 J. Down here on the earth, an accelerator with an energy of about 105 GeV or 17 nJ for Ref. 978 electrons and antielectrons has been built, and one with an energy of 10 TeV or 1.6 ÑJ for protons will be built soon. Both are owned by CERN in Geneva and have a circumference of 27 km. The lowest temperature measured up to now is 280 pK, in a system of Rhodium nuclei in- side a special cooling system. The interior of that cryostat possibly is the coolest point in the Ref. 979 whole universe. At the same time, the kinetic energy per particle corresponding to that tem- perature is also the smallest ever measured; it corresponds to 24 feV or 3.8 vJ=3.8 · 10−33 J. For isolated particles, the record seems to be for neutrons: kinetic energies as low as 10−7 eV have been achieved, corresponding to De Broglie wavelengths of 60 nm. ∗ The web page http://www.chemie.fu-berlin.de/chemistry/general/units-en.html allows to convert various units into each other. In general relativity still another system is sometimes used, in which the Schwarzschild radius defined as rs = 2Gm/c2 is used to measure masses, by setting c = G = 1. In this case, in opposition to above, mass and length have the same dimension, and h has dimension of an area. Motion Mountain www.motionmountain.net Copyright c Christoph Schiller November 1997 – September 2003 Appendix B Units, Measurements and Constants 937 Here are a few facts making the concept of unit more vivid. A gray is the amount of radioactivity that deposes 1 J on 1 kg of matter. A sievert is a unit adjusted to human scale, where the different types of human tissues are weighted with a factor describing the effectiveness of radiation deposition. Four to five sievert are a lethal dose to humans. In comparison, the natural radioactivity present inside human bodies leads to a dose of 0.2 mSv per year. An average X-ray image is an irradiation of 1 mSv; a CAT Ref. 980 scan 8 mSv. Are you confused by the candela? The definition simply says that 683 cd = 683 lm/sr correspond to 1 W/sr. The candela is thus a unit for light power per angle, except that it is corrected for the eye’s sensitivity: the candela measures only visible power per angle. Similarly, 683 lm = 683 cd · sr correspond to 1 W, i.e. both the lumen and the watt measure power, or energy flux, except that the lumen measures only the visible part of the power. In English quantity names, the change is expressed by substituting ‘radiant’ by ‘luminous’; e.g. 
the Watt measures radiant flux, whereas the lumen measure luminous flux. The factor 683 is historical. A usual candle indeed emits a luminous intensity of about a candela. Therefore, at night, a candle can be seen up to a distance of one or two dozen kilo- Challenge 1302 e metres. A 100 W incandescent light bulb produces 1700 lm and the brightest light emitting diodes about 5 lm. The irradiance of sunlight is about 1300 W/m2 on a sunny day; the illuminance is 120 klm/m2 = 120 klux or 170 W/m2 . The numbers show that most energy radiated from the sun to the earth is outside the visible spectrum. On a glacier, near the sea shore, on the top of mountains, or under particular weather condition the brightness can reach 150 klux. Lamps used during surgical operations usually produce 120 klux; humans need at least 100 lux for reading, but water paintings are des- troyed by more than 100 lux, oil paintings by more than 200 lux. The full moon produces 0.1 lux; the eyes lose lose their ability to distinguish colours somewhere between 0.1 lux and 0.01 lux. The human body itself shines with about 1 plux, a value too small to be detected with the eye, but easily measured with apparatuses. The origin is unclear. The highest achieved light intensities are in excess of 1018 W/m2 , more than 15 orders of magnitude higher than the intensity of sunlight, and are achieved by tight focusing of pulsed lasers. The electric fields in such light pulses is of the same order of the field inside Ref. 981 atoms; such a beam ionizes all matter it encounters. The Planck length is roughly the de Broglie wavelength λB = h/mv of a man walk- Ref. 982 ing comfortably (m = 80 kg, v = 0.5 m/s); this motion is therefore aptly called the ‘Planck The Planck mass is equal to the mass of about 1019 protons. This is roughly the mass of a human embryo at about ten days of age. The second does not correspond to 1/86 400th of the day any more (it did so in the year 1900); the earth now takes about 86 400.002 s for a rotation, so that regularly the International Earth Rotation Service introduces a leap second to ensure that the sun is at Motion Mountain www.motionmountain.net Copyright c Christoph Schiller November 1997 – September 2003 938 Appendix B Units, Measurements and Constants the highest point in the sky at 12.00 o’clock sharp. ∗ The time so defined is called Universal Time Coordinate. The velocity of rotation of the earth also changes irregularly from day to day due to the weather; the average rotation speed even changes from winter to summer due to the change in polar ice caps and in addition that average decreases over time, due to the friction produced by the tides. The rate of insertion of leap seconds is therefore faster than every 500 days, and not completely constant in time. The most precisely measured quantities in nature are the frequency of certain milli- second pulsars, ∗∗ the frequency of certain narrow atomic transitions and the Rydberg con- stant of atomic hydrogen, which can all be measured as exactly as the second is defined. At present, this gives about 14 digits of precision. The most precise clock ever built, using microwaves, had a stability of 10−16 during a running time of 500 s. For longer time periods, the record in 1997 was about 10−15 ; but the Ref. 983 area of 10−17 seems within technological reach. The precision of clocks is limited for short Ref. 984 measuring times by noise and for long measuring times by drifts, i.e. by systematic effects. 
The region of highest stability depends on the clock type and usually lies between 1 ms for optical clocks and 5000 s for masers. Pulsars are the only clock for which this region is not known yet; it lies at more than 20 years, which is the time elapsed since their discovery. The shortest times measured are the life times of certain ‘elementary’ particles; in par- ticular, the D meson was measured to live less than 10−22 s. Such times are measured in Ref. 985 a bubble chamber, where the track is photographed. Can you estimate how long the track is? (Watch out – if your result cannot be observed with an optical microscope, you made a Challenge 1303 n mistake in your calculation). The longest measured times are the lifetimes of certain radioisotopes, over 1015 years, and the lower limit of certain proton decays, over 1032 years. These times are thus much larger than the age of the universe, estimated to be fourteen thousand million years. Ref. 986 The least precisely measured fundamental quantities are the gravitational constant G and the strong coupling constant αs . Other, even less precisely known quantities, are the age of the universe and its density (see the astrophysical table below). P. 942 The precision of mass measurements of solids is limited by such simple effects as the adsorption of water on the weight. Can you estimate what a monolayer of water does on a weight of 1 kg? Challenge 1304 n Variations of quantities are often much easier to measure than their values. For ex- ample, in gravitational wave detectors, the sensitivity achieved in 1992 was ∆l/l = 3 · 10−19 for lengths of the order of 1 m. In other words, for a block of about a cubic metre of metal it Ref. 987 is possible to measure length changes about 3000 times smaller than a proton radius. These set-ups are now being superseded by ring interferometers. Ring interferometers measur- ing frequency differences of 10−21 have already been built; they are still being improved towards higher values. Ref. 988 ∗ Their web site at http://hpiers.obspm.fr gives more information on the details of these insertions, as does http://maia.usno.navy.mil, one of the few useful military web sites. See also http://www.bipm.fr, the site of the ∗∗ An overview of this fascinating work is given by J .H . T A Y L O R , Pulsar timing and relativistic gravity, Philosophical Transactions of the Royal Society, London A 341, pp. –, . Motion Mountain www.motionmountain.net Copyright c Christoph Schiller November 1997 – September 2003 Appendix B Units, Measurements and Constants 939 The swedish astronomer Anders Celsius (1701–1744) originally set the freezing point at 100 degrees and the boiling point of water at 0 degrees. Then the numbers were switched Ref. 989 to get today’s scale, with a small detail though. With the official definition of the Kelvin and the degree Celsius, at the standard pressure of 1013.25 Pa, water boils at 99.974 ◦ C. Can Challenge 1305 n you explain why it is not 100 ◦ C any more? The size of SI units is adapted to humans: heartbeat, human size, human weight, human temperature, human substance, etc. In a somewhat unexpected way they realise the saying by Protagoras, 25 centuries ago: ‘Man is the measure of all things.’ The table of SI prefixes covers seventy-two measurement decades. How many additional prefixes will be needed? Even an extended list will include only a small part of the infinite e e e range of decades. 
Will the Conférence Générale des Poids et Mesures have to go on and on, defining an infinite number of SI prefixes? (Challenge 1306 n)
It is well-known that the French philosopher Voltaire, after meeting Newton, publicised the now famous story that the connection between the fall of objects and the motion of the moon was discovered by Newton when he saw an apple falling from a tree. More than a century later, just before the French revolution, a committee of scientists decided to take as unit of force precisely the force exerted by gravity on a standard apple, and to name it after the English scientist. After extensive study, it was found that the mass of the standard apple was 101.9716 g; its weight was called 1 newton. Since then, in the museum in Sèvres near Paris, visitors can admire the standard metre, the standard kilogram and the standard apple. ∗
Precision and accuracy of measurements
As explained on page 199, precision measures how well a result is reproduced when the measurement is repeated; accuracy is the degree to which a measurement corresponds to the actual value. Lack of precision is due to accidental or random errors; they are best measured by the standard deviation, usually abbreviated $\sigma$; it is defined through
$$\sigma^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2 \qquad (667)$$
where $\bar{x}$ is the average of the measurements $x_i$. (Can you imagine why $n-1$ is used in the formula instead of $n$? Challenge 1307 n) By the way, for a Gaussian distribution, $2.35\,\sigma$ is the full width at half maximum. (A short numerical illustration of this formula follows at the end of this subsection.)
Lack of accuracy is due to systematic errors; usually they can only be estimated. This estimate is often added to the random errors to produce a total experimental error, sometimes also called total uncertainty. (Ref. 991)
The following tables give the values of the most important physical constants and particle properties in SI units and in a few other common units, as published in the standard references (Ref. 992). The values are the world average of the best measurements up to December 1998. As usual, experimental errors, including both random and estimated systematic errors, are expressed by giving the one standard deviation uncertainty in the last digits; e.g. 0.31(6) means – roughly speaking – 0.31 ± 0.06. In fact, behind each of the numbers in the following tables there is a long story which would be worth telling, but for which there is not enough room here. ∗
What are the limits to accuracy and precision? First of all, there is no way, even in principle, to measure a quantity $x$ to a precision higher than about 61 digits, because $\Delta x/x > l_{\rm Pl}/d_{\rm horizon} = 10^{-61}$. In the third part of our text (P. 802), studies of clocks and metre bars will further reduce this theoretical limit. But it is not difficult to deduce more stringent practical limits.
∗ To be clear, this is a joke; no standard apple exists. In contrast to the apple story it is not a joke, however, that owners of several apple trees in Britain and in the US claim descent, by rerooting, from the original tree under which Newton had his insight (Ref. 990). DNA tests have even been performed to decide if all these derive from the same tree, with the result that the tree at MIT, in contrast to the British ones, is a fake – of course.
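As promised above, here is the numerical illustration. The following Python sketch implements formula (667) directly and also decodes the value(uncertainty) notation just described; the sample 'measurements' are invented for the demonstration:

from math import sqrt

def sample_std(xs):
    # Formula (667): the standard deviation with the n-1 divisor.
    n = len(xs)
    mean = sum(xs) / n
    return sqrt(sum((x - mean)**2 for x in xs) / (n - 1))

readings = [0.29, 0.33, 0.31, 0.37, 0.25]      # made-up repeated readings
s = sample_std(readings)
print(f'sigma = {s:.4f}, Gaussian FWHM = 2.35 sigma = {2.35*s:.4f}')

def parse_uncertainty(text):
    # '0.31(6)' -> (0.31, 0.06): parenthesised digits hit the last decimals.
    mantissa, _, rest = text.partition('(')
    decimals = len(mantissa.partition('.')[2])
    return float(mantissa), int(rest.rstrip(')')) * 10**(-decimals)

print(parse_uncertainty('0.31(6)'))        # (0.31, 0.06)
print(parse_uncertainty('6.673(10)'))      # (6.673, 0.01), times 1e-11 for G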
No reasonable machine can measure quantities with a higher precision than measuring the diameter of the earth within the smallest length ever measured, about 10−19 m; that makes about 26 digits. Using a more realistic limit of a 1000 m sized machine implies a limit of 22 digits. If, as predicted above, time measurements really achieve 17 digits of precision, then they are nearing the practical limit, because apart from size, there is an additional practical restriction: cost. Indeed, an additional digit in measurement precision means often an additional digit in equipment cost. Basic physical constants In principle, all experimental measurements of matter properties, such as colour, density, Ref. 992 or elastic properties, can be predicted using the values of the following constants, using them in quantum theory calculations. Specifically, this is possible using the equations of the standard model of high energy physics. P. ?? Table 70 Basic physical constants Quantity name value in SI units uncertainty vacuum speed of lighta c 299 792 458 m/s 0 vacuum number of space-time dimensions 3+1 down to 10−19 m, up to 1026 m vacuum permeabilitya µo 4π · 10−7 H/m 0 = 1.256 637 061 435 917 295 385 ... ÑH/m vacuum permittivitya εo = 1/µo c2 8.854 187 817 620 ... pF/m 0 Planck constant h 6.626 068 76(52) · 10−34 Js 7.8 · 10−8 reduced Planck constant h ¯ 1.054 571 596(82) · 10−34 Js 7.8 · 10−8 positron charge e 0.160 217 646 2(63) aC 3.9 · 10−8 Boltzmann constant k 1.380 650 3(24) · 10 −23 J/K 1.7 · 10−6 gravitational constant G 6.673(10) · 10 −11 Nm2 /kg2 1.5 · 10−3 gravitational coupling constant κ = 8π G/c4 2.076(3) · 10 −43 s2 /kg m 1.5 · 10−3 fine structure constant,b e2 α = 4πεo hc ¯ 1/137.035 999 76(50) 3.7 · 10−9 e.m. coupling constant = gem(m2 c2 ) e = 0.007 297 352 533(27) 3.7 · 10−9 ∗ Some of them can be found in the text by N .W. W I S E , The Values of Precision, Princeton University Press, . The field of high precision measurements, from which the results on these pages stem, is a very special world. A beautiful introduction to it is Near Zero: Frontiers of Physics, edited by J .D . F A I R B A N K S , B .S . D E A V E R , C .W. E V E R I T T & P .F . M I C H A E L S O N , Freeman, . Motion Mountain www.motionmountain.net Copyright c Christoph Schiller November 1997 – September 2003 Appendix B Units, Measurements and Constants 941 Quantity name value in SI units uncertainty Fermi coupling constant,b GF /(¯ c)3 h 1.166 39(1) · 10−5 GeV−2 8.6 · 10−6 weak coupling constant αw (MZ ) = g2 /4π w 1/30.1(3) weak mixing angle sin2 θW (MS) 0.231 24(24) 1.0 · 10−3 weak mixing angle sin2 θW (on shell) 0.2224(19) 8.7 · 10−3 = 1 − (mW /mZ )2 strong coupling constantb αs (MZ) = g2 s/4π 0.118(3) 25 · 10−3 a. Defining constant. b. All coupling constants depend on the four-momentum transfer, as explained in the section on P. ?? renormalisation. Fine structure constant is the traditional name for the electromagnetic coupling constant gem in the case of a four momentum transfer of Q2 = m2 c2 , which is the smallest one possible. At higher momentum transfers it has larger values, e.g. gem (Q2 = MWc2 ) ≈ 1/128. The strong coupling constant has higher values at lower momentum transfers; e.g. one has αs (34 GeV) = Why do all these constants have the values they have? The answer depends on the con- stant. For any constant having a unit, such as the quantum of action h, the numerical value has no intrinsic meaning. It is 1.054 · 10 −34 Js because of the SI definition of the joule and the second. 
However, the question why the value of a constant with units is not larger or smaller al- ways requires to understand the origin of some dimensionless number. For example, h, G ¯ and c are not smaller or larger because the everyday world, in basic units, is of the dimen- sions we observe. The same happens if we ask about the size of atoms, people, trees and stars, about the duration of molecular and atomic processes, or about the mass of nuclei and mountains. Understanding the values of all dimensionless constants is thus the key to understanding nature. The basic constants yield the following useful high-precision observations. Table 71 Derived physical constants Quantity name value in SI units uncertainty Vacuum wave resistance Zo = µo /εo 376.730 313 461 77... Ω 0 Avogadro’s number NA 6.022 141 99(47) · 1023 7.9 · 10−8 Rydberg constant a R∞ = me cα 2 /2h 10 973 731.568 549(83) m−1 7.6 · 10−12 mag. flux quantum ϕo = h/2e 2.067 833 636(81) pWb 3.9 · 10−8 Josephson freq. ratio 2e/h 483.597 898(19) THz/V 3.9 · 10−8 von Klitzing constant h/e2 = µo c/2α 25 812.807 572(95) Ω 3.7 · 10−9 Bohr magneton µB = e¯ /2me h 9.274 008 99(37) · 10−24 J/T 4.0 · 10−8 classical electron re = e2 /4πεo me c2 2.817 940 285(31) fm 1.1 · 10−8 Compton wavelength λc = h/me c 2.426 310 215(18) pm 7.3 · 10−9 of the electron λc = h/me c = re /α ¯ ¯ 0.386 159 264 2(28) pm 7.3 · 10−9 Bohr radius a a∞ = re /α 2 52.917 720 83(19) pm 3.7 · 10−9 cyclotron frequency f c /B = e/2π me 27.992 4925(11) GHz/T 4.0 · 10−8 Motion Mountain www.motionmountain.net Copyright c Christoph Schiller November 1997 – September 2003 942 Appendix B Units, Measurements and Constants Quantity name value in SI units uncertainty of the electron nuclear magneton µN = e¯ /2mp h 5.050 783 17(20) · 10−27 J/T 4.0 · 10−8 proton electron mass ratio mp /me 1 836.152 667 5(39) 2.1 · 10−9 Stefan–Boltzmann constant σ = π 2 k4 /60¯ 3 c2 h 5.670 400(40) · 10−8 W/m2 K4 7.0 · 10−6 Wien displacement law constant b = λmax T 2.897 7686(51) mmK 1.7 · 10−6 bits to entropy conv. const. 1023 bit = 0.956 994 5(17) J/K TNT energy content 3.7 to 4.0 MJ/kg=4 · 103 m2 /s2 a. For infinite mass of the nucleus. Some properties of the universe as a whole are listed in the following. Table 72 Astrophysical constants Quantity name value gravitational constant G 6.672 59(85) · 10−11 m3 /kg s2 cosmological constant Λ ca. 1 · 10−52 m−2 tropical year 1900 a a 31 556 925.974 7 s tropical year 1994 a 31 556 925.2 s mean sidereal day d 23h 56 4.090 53 astronomical unit b AU 149 597 870.691(30) km light year al 9.460 528 173 ... Pm parsec pc 30.856 775 806 Pm = 3.261 634 al age of the universe c to > 3.5(4) · 1017 s or > 11.5(1.5) · 109 a (from matter, via galaxies and stars, using quantum theory: early 1997 results) age of the universe c to 4.32(7) · 1017 s = 13.7(2) · 109 a (from space-time, via expansion, using general relativity) universe’s horizon’s dist. c do = 3cto 5.2(1.4) · 1026 m = 13.8(4.5) Gpc universe’s topology unknown number of space dimensions 3 Hubble parameter c Ho 2.2(1.0) · 10−18 s−1 = 0.7(3) · 10−10 a−1 = ho · 100 km/sMpc = ho · 1.0227 · 10−10 a−1 reduced Hubble par. c ho 0.59 < ho < 0.7 critical density ρc = 3Ho /8π G 2 h2 · 1.878 82(24) · 10−26 kg/m3 of the universe density parameter c ΩMo = ρo /ρc ca. 0.3 luminous matter density ca. 
2 · 10−28 kg/m3 stars in the universe ns 1022±1 baryons in the universe nb 1081±1 baryon mass mb 1.7 · 10−27 kg baryon number density 1 to 6 /m3 photons in the universe nγ 1089 photon energy density ργ = π 2 k4 /15To4 4.6 · 10−31 kg/m3 photon number density 400 /cm3 (To /2.7 K)3, at present 410.89/cm3 background temperature d To 2.726(5) K Motion Mountain www.motionmountain.net Copyright c Christoph Schiller November 1997 – September 2003 Appendix B Units, Measurements and Constants 943 Quantity name value Planck length lPl = hG/c3 ¯ 1.62 · 10−35 m Planck time tPl = hG/c5 ¯ 5.39 · 10−44 s Planck mass mPl = hc/G ¯ 21.8 Ñg instants in history c to /tPl 8.7(2.8) · 1060 space-time points No = (Ro /lPl )3 · 10244±1 inside the horizon c (to /tPl ) mass inside horizon M 1054±1 kg a. Defining constant, from vernal equinox to vernal equinox; it was once used to define the second. (Remember: π seconds is a nanocentury.) The value for 1990 is about 0.7 s less, corresponding to Challenge 1308 e a slowdown of roughly −0.2 ms/a. (Why?) There is even an empirical formula available for the Ref. 993 change of the length of the year over time. b. Average distance earth–sun. The truly amazing precision of 30 m results from time averages of signals sent from Viking orbiters and Mars landers taken over a period of over twenty years. c. The index o indicates present day values. d. The radiation originated when the universe was between 105 to 106 years old and about 3000 K hot; the fluctuations ∆To which lead to galaxy formation are today of the size of 16 ± 4 ÑK = P. 322 6(2) · 10−6 To . Attention: in the third part of this text it is shown that many constants in Table 72 are not physically sensible quantities. They have to be taken with lots of grains of salt. The more specific constants given in the following table are all sensible though. Table 73 Astronomical constants Quantity name value earth’s mass M 5.972 23(8) · 1024 kg earth’s gravitational length l = 2GM/c2 8.870(1) mm earth radius, equatorial a Req 6378.1367(1) km earth’s radius, polar a Rp 6356.7517(1) km equator–pole distance a 10 001.966 km (average) earth’s flattening a e 1/298.25231(1) earth’s av. density ρ 5.5 Mg/m3 moon’s radius Rmv 1738 km in direction of earth moon’s radius Rmh 17.. km in other two directions moon’s mass Mm 7.35 · 1022 kg moon’s mean distance b dm 384 401 km moon’s perigeon typically 363 Mm, hist. minimum 359 861 km moon’s apogeon typically 404 Mm, hist. maximum 406 720 km moon’s angular size c avg. 0.5181◦ = 31.08 , min. 0.49◦, max. 0.55◦ moon’s av. density ρ 3.3 Mg/m3 sun’s mass M 1.988 43(3) · 1030 kg sun’s grav. length l = 2GM /c2 2.953 250 08 km sun’s luminosity L 384.6 YW solar radius, equatorial R 695.98(7) Mm Motion Mountain www.motionmountain.net Copyright c Christoph Schiller November 1997 – September 2003 944 Appendix B Units, Measurements and Constants Quantity name value sun’s angular size 0.53◦ average; minimum on 4th of July (aphelion) 1888 , maximum on 4th of January (perihelion) 1952 suns’s av. density ρ 1.4 Mg/m3 sun’s distance, average AU 149 597 870.691(30) km solar velocity v g 220(20) km/s around centre of galaxy solar velocity v b 370.6(5) km/s against cosmic background distance to galaxy centre 8.0(5) kpc = 26.1(1.6) kal most distant galaxy 0140+326RD1 12.2 · 109 al = 1.2 · 1026 m, red-shift 5.34 a. The shape of the earth is described most precisely with the World Geodetic System. The last edition dates from 1984. 
For an extensive presentation of its background and its details, see the http://www.eurocontrol.be/projects/eatchip/wgs84/start.html web site. The International Geodesic Union has refined the data in 2000. The radii and the flattening given here are those for the ‘mean tide system’. They differ from those of the ‘zero tide system’ and other systems by about 0.7 m. The details are a science by its own. b. Measured centre to centre. To know the precise position of the moon at a given date, see the http://www.fourmilab.ch/earthview/moon-ap-per.html site, whereas for the planets see http://www.fourmilab.ch/solar/solar.html as well as the other pages on this site. c. Angles are defined as follows: 1 degree = 1◦ = π /180 rad, 1 (first) minute = 1 = 1◦ /60, 1 second (minute) = 1 = 1 /60. The ancient units ‘third minute’ and ‘fourth minute’, each 1/60th of the preceding, are not accepted any more. (‘Minute’ originally means ‘very small’, as it still does in modern English.) Useful numbers π 3.14159 26535 89793 23846 26433 83279 50288 41971 69399 375105 e 2.71828 18284 59045 23536 02874 71352 66249 77572 47093 699959 γ 0.57721 56649 01532 86060 65120 90082 40243 10421 59335 939923 Ref. 994 ln2 0.69314 71805 59945 30941 72321 21458 17656 80755 00134 360255 √ 2.30258 50929 94045 68401 79914 54684 36420 76011 01488 628772 10 3.16227 76601 68379 33199 88935 44432 71853 37195 55139 325216 If the number π were normal, i.e. if all digits and digit combinations would appear with the same probability, then every text written or to be written, as well as every word spoken or to be spoken, can be found coded in its sequence. The property of normality has not yet been proven, even though it is suspected to be true. What is the significance? Is all wisdom encoded in the simple circle? No. The property is nothing special, as it also applies to the number 0.123456789101112131415161718192021... and many others. Can you specify a few? Challenge 1309 n By the way, in the graph of the exponential function ex , the point (0, 1) is the only one with two rational coordinates. If you imagine to paint in blue all points on the plane with two rational coordinates, the plane would look quite bluish. Nevertheless, the graph goes only through one of these points and manages to avoid all the others. Motion Mountain www.motionmountain.net Copyright c Christoph Schiller November 1997 – September 2003 Appendix B Units, Measurements and Constants 945 e e 975 Le Syst` me International d’Unit´ s, Bureau International des Poids et Mesures, Pavillon de Breteuil, Parc de Saint Cloud, 92 310 S` vres, France. All new developments concern- ing SI units are published in the journal Metrologia, edited by the same body. Showing the slow pace of an old institution, the BIPM was on the internet only in 1998; it is now reachable on its simple site at http://www.bipm.fr. The site of its British equivalent, http://www.npl.co.uk/npl/reference/si units.html, is much better; it gives many other details as well as the English version of the SI unit definitions. Cited on page 930. 976 The bible in the field of time measurement are the two volumes by J. V A N I E R & C . A U D O I N , The Quantum Physics of Atomic Frequency Standards, Adam Hilge, . A popular account is T O N Y J O N E S , Splitting the Second, Institute of Physics Publishing, . The site http://opdaf1.obspm.fr/www/lexique.html gives a glossary of terms used in the field. On length measurements, see ... On mass and atomic mass measurements, see page 235. 
On electric current measurements, see ... On precision temperature measurements, see page 203. Cited on page 931. 977 The unofficial prefixes have been originally proposed in the 1990s by Jeff K. Aronson, professor at the University of Oxford, and might come into general usage. Cited on page 932. 978 David J. B I R D & al., Evidence for correlated changes in the spectrum and composition of cosmic rays at extremely high energies, Physical Review Letters 71, pp. –, . Cited on page 936. 979 Pertti J. H A K O N E N & al., Nuclear antiferromagnetism in Rhodium metal at positive and neg- ative nanokelvin temperature, Physical Review Letters 70, pp. –, . See also his article in the Scientific American, January . Cited on page 936. 980 G. C H A R P A K & R .L. G A R W I N , The DARI, Europhysics News 33, pp. –, Janu- ary/February . Cited on page 937. 981 See e.g. K. C O D L I N G & L.J. F R A S I N S K I , Coulomb explosion of simple molecules in intense laser fields, Contemporary Physics 35, pp. –, . Cited on page 937. 982 A. Z E I L I N G E R , The Planck stroll, American Journal of Physics 58, p. , . Cited on page 937. 983 The most precise clock ever built is ... Cited on page 938. 984 J. B E R G Q U I S T , editor, Proceedings of the Fifth Symposium on Frequency Standards and Met- rology, World Scientific, . Cited on page 938. 985 About short lifetime measurements, see e.g. the paper on D particle lifetime ... Cited on page 986 About the long life of tantalum 180, see D. B E L I C & al., Photoactivation of 180 Tam and its implications for the nucleosynthesis of nature’s rarest naturally occurring isotope, Physical Review Letters 83, pp. –, 20 December . Cited on page 938. 987 About the detection of gravitational waves, see ... Cited on page 938. 988 See the clear and extensive paper by G.E. S T E D M A N , Ring laser tests of fundamental physics and geophysics, Reports on Progress of Physics 60, pp. –, . Cited on page 938. 989 Following a private communication by Richard Rusby, this is the value of 1997, whereas it was estimated as 99.975 ◦ C in 1989, as reported by G A R E T H J O N E S & R I C H A R D R U S B Y , Offi- cial: water boils at 99.975 ◦ C, Physics World, pp. –, September 1989, and R .L. R U S B Y , Ironing out the standard scale, Nature 338, p. , March . For more on temperature measurements, see page 203. Cited on page 939. Motion Mountain www.motionmountain.net Copyright c Christoph Schiller November 1997 – September 2003 946 Appendix B Units, Measurements and Constants 990 See Newton’s apples fall from grace, New Scientist, p. , 6 September . More details can be found in R .G. K E E S I N G , The history of Newton’s apple tree, Contemporary Physics 39, pp. –, . Cited on page 939. 991 The various concepts are even the topic of a separate international standard, ISO 5725, with the title Accuracy and precision of measurement methods and results. A good introduction is the book with the locomotive hanging out the window as title picture, namely J O H N R . T A Y L O R , An Introduction to Error Analysis: the Study of Uncertainties in Physical Measurements, 2nd edition, University Science Books, Sausalito, . Cited on page 939. 992 P.J. M O H R & B .N. T A Y L O R , Reviews of Modern Physics 59, p. , . 
This is the set of constants resulting from an international adjustment and recommended for international use by the Committee on Data for Science and Technology (CODATA), a body in the International Council of Scientific Unions, which regroups the International Union of Pure and Applied Physics (IUPAP), the International Union of Pure and Applied Chemistry (IUPAC) and many more. The IUPAC has a horrible web site at http://chemistry.rsc.org/rsc/iupac.htm. Cited on pages 939 and 940.
993 The details are given in the well-known astronomical reference, P. Kenneth Seidelmann, Explanatory Supplement to the Astronomical Almanac. Cited on page 943.
994 For information about the number π, as well as about other constants, the web address http://www.cecm.sfu.ca/pi/pi.html provides lots of data and references. It also has a link to the pretty overview paper on http://www.astro.virginia.edu/˜eww6n/math/Pi.html and to many other sites on the topic. Simple formulae for π are
$$\pi + 3 = \sum_{n=1}^{\infty} \frac{n\,2^n}{\binom{2n}{n}} \qquad (668)$$
or the beautiful formula discovered in 1996 by Bailey, Borwein and Plouffe
$$\pi = \sum_{n=0}^{\infty} \frac{1}{16^n}\left(\frac{4}{8n+1} - \frac{2}{8n+4} - \frac{1}{8n+5} - \frac{1}{8n+6}\right). \qquad (669)$$
The site also explains the newly discovered methods to calculate specific binary digits of π without having to calculate all the preceding ones. By the way, the number of (consecutive) digits known in 1999 was over 1.2 million million, as told in Science News 162, 14 December. They pass all tests for a random string of numbers, as the http://www.ast.univie.ac.at/˜wasi/PI/pi normal.html web site explains. However, this property, called normality, has never been proven; it is the biggest open question about π. It is possible that the theory of chaotic dynamics will lead to a solution of this puzzle in the coming years. Another method to calculate π and other constants was discovered and published by David V. Chudnovsky & Gregory V. Chudnovsky, The computation of classical constants, Proc. Natl. Acad. Sci. USA, volume 86. The Chudnovsky brothers built a supercomputer in Gregory's apartment for about 70 000 Euro, and for many years held the record for the largest number of digits for π. They have battled for decades with Kanada Yasumasa, who holds the record in 2000, calculated on an industrial supercomputer. New formulae to calculate π are still irregularly discovered. For the calculation of Euler's constant γ see also D.W. DeTemple, A quicker convergence to Euler's constant, The Mathematical Intelligencer, May. Note that little is known about properties of numbers; e.g. it is still not known whether π + e is a rational number or not! (It is believed that it is not. Challenge 1310 r) Do you want to become a mathematician? (Challenge 1311 n) Cited on page 944.
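The BBP series (669) converges fast enough to verify with very little effort, gaining roughly one hexadecimal digit per term. A short Python check using exact rational arithmetic (an added illustration, not from the reference itself):

from fractions import Fraction

def bbp_partial(N):
    s = Fraction(0)
    for n in range(N):
        s += Fraction(1, 16**n) * (Fraction(4, 8*n + 1) - Fraction(2, 8*n + 4)
                                   - Fraction(1, 8*n + 5) - Fraction(1, 8*n + 6))
    return s

for N in (1, 2, 5, 10):
    print(N, float(bbp_partial(N)))
# N=1 already gives 3.1333...; N=10 agrees with pi to about 12 decimal places.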
{"url":"http://www.docstoc.com/docs/12392831/CB-UNITS","timestamp":"2014-04-16T20:50:39Z","content_type":null,"content_length":"125122","record_id":"<urn:uuid:f65a3b2d-ca34-4c13-956b-f8089f60adff>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Econometrics by Simulation
CLT is a powerful property

* Stata has a wonderfully effective simulate function that allows users to easily simulate data and analysis in a very rapid fashion.
* The only drawback is that when you run it, it will replace the data in memory with the simulated data automatically.
* This is not a big problem if you stick a preserve in front of your simulate command.
* However, you may want to run sequential simulates and keep the data from all of the simulations together, rather than only temporarily accessed.
* Fortunately we can accomplish this task by writing a small program.

cap program drop msim
program define msim
  * gettoken will split the arguments fed into msim into those before the colon and those after.
  * I really like this feature of Stata!
  gettoken before after : 0 , parse(":")
  * Now strip the colon. The 1 is important since we want to remove only the first colon.
  local simulation = subinstr("`after'", ":", "", 1)
  * The argument in `before' is used as an extension for the names of the variables created by the simulate command.
  * First let's save the current data set.
  * Generate an id that we will later use to merge in more data.
  cap gen id = _n
  * Save the current data to a temporary location.
  tempfile tempsave
  save `tempsave'
  * Now run the simulation, which wipes out the current data.
  `simulation'
  * Rename all of the variables to have an extension equal to the first argument.
  foreach v of varlist * {
    cap rename `v' `v'`before'
  }
  * Now we need to generate the id to merge on.
  cap gen id = _n
  merge 1:1 id using `tempsave'
  * Get rid of the _merge variable generated by the above command.
  drop _merge
end

* Let's write a nice little program that we would like to simulate.
cap program drop simCLT
program define simCLT
  clear
  * `1' is the first argument of the program simCLT.
  set obs `1'
  * Let's say we would like to see how many observations we need for the central limit theorem (CLT) to make the means of a Bernoulli distribution look normal. Remember, so long as the mean and variance are defined, the central limit theorem will generally force any random distribution of means to approximate a normal distribution as the number of observations gets large.
  gen x = rbinomial(1,.25)
  sum x
end

* So let's see first how the simulate command works initially.
simulate, rep(200): simCLT 100
* The simulate command will automatically save the returns from the sum command as variables (at least in version 12).
hist mean, kden
* The mean is looking good but not normal.
* Now normally what we would need to do is run simulate again with a different argument.
* But instead let's try our new command!
msim 100: simulate, rep(200): simCLT 100
msim 200: simulate, rep(200): simCLT 200
* Looks good!
msim 400: simulate, rep(200): simCLT 400
msim 1000: simulate, rep(200): simCLT 1000
msim 10000: simulate, rep(200): simCLT 10000
msim 100000: simulate, rep(200): simCLT 100000
msim 1000000: simulate, rep(200): simCLT 1000000
* The next two commands can take a little while.
msim 10000000: simulate, rep(200): simCLT 10000000
msim 100000000: simulate, rep(200): simCLT 100000000

sum mean*
* We can see that the standard deviations are getting smaller with a larger sample size.
* How are the histograms looking?
foreach v in 100 200 400 1000 10000 100000 1000000 10000000 {
  hist mean`v', name(s`v', replace) nodraw title(`v') kden
}
graph combine s100 s200 s400 s1000 s10000 s100000 s1000000 s10000000 ///
  , title("CLT between 100 and 10,000,000 observations")
* We can see that the distribution of means approximates the normal distribution as the number of draws in each sample gets large.
* This is one of the fundamental findings of statistics and pretty awesome if you think about it.
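The same experiment is easy to reproduce outside Stata as a sanity check. A minimal Python/NumPy sketch (not from the original post) mirroring simCLT, with Bernoulli(0.25) draws and 200 replications per sample size:

import numpy as np

rng = np.random.default_rng(0)
for n in (100, 1000, 10_000, 100_000):
    means = rng.binomial(1, 0.25, size=(200, n)).mean(axis=1)
    print(n, round(float(means.mean()), 4), round(float(means.std(ddof=1)), 5))
# The spread of the sample means shrinks like 1/sqrt(n) and their
# histogram looks increasingly normal -- the CLT at work.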
{"url":"http://www.econometricsbysimulation.com/2013/03/msimulate-command.html","timestamp":"2014-04-20T18:23:44Z","content_type":null,"content_length":"188365","record_id":"<urn:uuid:6223dea8-20c7-49c9-8b54-a3049f873a0c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Discontinuous (proof required)
November 9th 2009, 07:12 AM
$f(x) = x^2$ for all rational $x$, and $f(x) = -x$ for all irrational $x$.
Prove that it is continuous at $x=0$ and discontinuous at every other point. I've proved the continuous part (with epsilon-delta language), but couldn't do the discontinuous part. Tried contradiction, but am getting nothing out of it. If anyone can do it rigorously, it would be highly appreciated.
November 9th 2009, 10:59 AM
Note that whatever $\delta>0$ you choose, the interval $]c-\delta,c+\delta[$ about a point $c$ will always include both rational and irrational numbers. Now if you pick $\varepsilon<\vert c^2+c\vert$ then there is no $\delta>0$ such that $\vert f(x)-f(c)\vert<\varepsilon$ whenever $0<\vert c - x \vert <\delta$. Can you finish this?
November 9th 2009, 11:16 AM
Still not that clear. I understand the first paragraph, but the second paragraph is elusive. Sorry, I'm not that good at it, so am needing help. You have said there is no $\delta>0$, but that is the thing we need to show rigorously, isn't it, which will solve the problem completely. So can you help in showing rigorously that such a $\delta$ doesn't exist? Can you clarify a bit more? That would be great. Thanks for your help.
November 9th 2009, 12:50 PM
The first step is "knowing" what you are going to prove. If you are not sure whether something is true or not, you can't prove it. So if the function were continuous, then there would have to exist a delta for every epsilon. You know that no matter what delta you choose, there will be both rational and irrational numbers there. So if you have a rational number $c$ and an irrational one $b$, then $f(c)=c^2$ and $f(b)=-b$. If delta is very small then $c$ is very close to $b$, so they are almost the same number. So what happens if you pick an epsilon that is smaller than the difference between $f(c)$ and $f(b)$?
That is the basic outline. I'll show you some steps. Assume that the function is continuous at some $c$ with $c^2+c\neq 0$, i.e. $c\notin\{0,-1\}$. (Note that at $c=-1$ the two branches agree, since $(-1)^2 = 1 = -(-1)$, so $f$ is in fact continuous there too, just as at $0$.) Then for every $\varepsilon>0$ there exists a $\delta>0$ such that $\vert f(x)-f(c)\vert<\varepsilon$ if $0<\vert x - c \vert<\delta$. Now let $0<\varepsilon_1<\lvert\frac{c^2+c}{2}\rvert$; then there exists a $\delta_1>0$ corresponding to that epsilon. Now you can pick a rational number $a$ and an irrational number $b$ such that $0<\vert a-c\vert<\delta_1$ and $0<\vert b-c\vert<\delta_1$, and whose absolute value is greater than that of $c$. Then $\vert f(a)-f(b)\vert=\vert a^2+b\vert>\vert c^2+c\vert>2\varepsilon_1$; but the triangle inequality gives $\vert f(a)-f(b)\vert\le\vert f(a)-f(c)\vert+\vert f(c)-f(b)\vert<2\varepsilon_1$, a clear contradiction.
Now this is probably not the most elegant solution, but you get the idea? Now it is your turn to write something like this, or not like this. You don't have to use proof by contradiction. Just think about the problem: if epsilon is small enough then there can be no delta.
Hope that helps. November 9th 2009, 12:55 PM thanks a lot. this has made it much clear and now i already get it.
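For anyone wanting to see the discontinuity numerically before writing the proof, here is a small Python sketch (not from the thread). Floating point cannot test rationality, so rational/irrational membership is known by construction: $c+1/n$ is rational and $c+\sqrt{2}/n$ is irrational when $c$ is rational:

from math import sqrt

c = 1.0
for n in (10, 100, 1000, 10000):
    a = c + 1/n          # rational point near c:   f(a) = a^2
    b = c + sqrt(2)/n    # irrational point near c: f(b) = -b
    print(n, a*a, -b)
# f(a) -> 1 = c^2 while f(b) -> -1 = -c: the two limits differ,
# so f cannot be continuous at c = 1.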
{"url":"http://mathhelpforum.com/differential-geometry/113440-discontinous-proof-required-print.html","timestamp":"2014-04-19T04:33:03Z","content_type":null,"content_length":"13418","record_id":"<urn:uuid:f96a2d20-c51c-488e-87b8-130325463ac8>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem with Distance Formula
September 18th 2009, 10:51 AM #1
Nov 2008
Problem with Distance Formula
Use the Pythagorean theorem to show that the given points are the vertices of a right triangle. Find the area, A, and the perimeter, P, of the triangle. Give your answers correct to 2 decimal places.
(-4, 3), (-8, -5), (2, -10)
A = ?
P = Around 39.10?
This is for something on webassign, so if you could round the answers to two correct decimal places that would be fantastic and mucho thanked. Also, is there a site on which you can make custom triangles? (Not something like Matlab, where it takes forever to get something going and you have to make an account.)
September 18th 2009, 11:35 AM #2
MHF Contributor Dec 2007 Ottawa, Canada
September 18th 2009, 03:12 PM #3
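The replies above did not survive in this copy of the thread, so here is a short worked check in Python (not from the thread itself). It suggests the guess of 39.10 for the perimeter is off:

from math import dist, isclose

A, B, C = (-4, 3), (-8, -5), (2, -10)
ab, bc, ca = dist(A, B), dist(B, C), dist(C, A)   # sqrt(80), sqrt(125), sqrt(205)

# Pythagorean check: the two shorter sides square-sum to the longest side,
# so the triangle is right-angled (the right angle sits at B).
s = sorted([ab, bc, ca])
print(isclose(s[0]**2 + s[1]**2, s[2]**2))        # True: 80 + 125 = 205

area = 0.5 * ab * bc            # the two legs meet at B
perimeter = ab + bc + ca
print(round(area, 2), round(perimeter, 2))        # 50.0 and 34.44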
{"url":"http://mathhelpforum.com/algebra/102981-problem-distance-forumula.html","timestamp":"2014-04-19T03:51:32Z","content_type":null,"content_length":"37025","record_id":"<urn:uuid:392112cb-8c6d-476f-8f6e-cdd6188af5a9>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Working with group cosets in MAGMA
When working with group cosets in Magma, is there a way of treating the cosets as subsets of the overlying group? Specifically, I have a group $G$ and subgroups $H$ and $K$. I wish to look at the intersection of a pair of cosets $Hh$ and $Kk$ for some $h,k\in G$, but am unable to perform such operations in Magma when they are considered as cosets.
magma gr.group-theory finite-groups
2 Answers
As far as I can see, the only way to do that directly with cosets $C1$ and $C2$ of $G$ is
{ x : x in G | x in C1 and x in C2 }
which looks very inefficient, because it is iterating over all of $G$. I would suggest first finding a right transversal $T$ of $H \cap K$ in $H$, and then searching through $T$ for an element $t \in T$ with $thk^{-1} \in K$. If you find such a $t$, then the intersection is the coset $(H \cap K)th$, and otherwise it is empty.
Thanks for that. I was using a similar kind of approach and looping through the cosets of the two groups. – dward1996 Apr 3 '12 at 8:45
Well, this is trivial in GAP. Here is an example:
gap> G:=SymmetricGroup(7);;
gap> H:=Stabilizer(G,1);;
gap> K:=SylowSubgroup(G,2);;
gap> c1:=RightCoset(H,(1,2));;
gap> c2:=RightCoset(K,(1,2,3));;
gap> Intersection(c1,c2);
[ (1,2,3), (1,2,3)(5,6), (1,2,3,4), (1,2,3,4)(5,6) ]
By the way, GAP is free, unlike Magma...
Thanks. I was trying to avoid using GAP for now, as I'm only just getting the hang of Magma, but it appears that it is worth investing the time to learn the language. – dward1996 Apr 3 '12 at 8:45
IMHO, the GAP language is easier to use; in particular it's much less strict in the ways it handles types of data. And you can read (and modify, if needed) the GAP source code if you're really stuck :-) – Dima Pasechnik Apr 3 '12 at 17:06
Just out of interest, are there any other good sources for learning GAP other than the GAP manuals (gap-system.org/Doc/manuals.html)? – dward1996 Apr 4 '12 at 8:38
gap-system.org/Doc/Learning/learning.html contains quite a few links to tutorials, examples, etc. gap-system.org/Doc/forumarchive.html (GAP Forum) is a mailing list you can subscribe to. – Dima Pasechnik Apr 4 '12 at 14:09
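To make the transversal method from the first answer concrete without either CAS, here is a self-contained Python sketch (all names are mine, not Magma or GAP API). Permutations are tuples of images on 0..n-1 and a*b means "apply a, then b"; the algorithm itself works in any finite group:

def mul(a, b):                     # a*b: apply a first, then b
    return tuple(b[a[i]] for i in range(len(a)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def closure(gens):                 # brute-force <gens>; small groups only
    elems = {tuple(range(len(gens[0])))}
    changed = True
    while changed:
        changed = False
        for a in list(elems):
            for g in gens:
                x = mul(a, g)
                if x not in elems:
                    elems.add(x)
                    changed = True
    return elems

def coset_intersection(H, K, h, k):
    # Hh meet Kk is empty or a single right coset of H meet K.
    HK = H & K
    transversal, seen = [], set()   # right transversal of HK in H
    for u in H:
        cos = frozenset(mul(s, u) for s in HK)
        if cos not in seen:
            seen.add(cos)
            transversal.append(u)
    k_inv = inverse(k)
    for t in transversal:
        if mul(mul(t, h), k_inv) in K:          # t h k^-1 in K ?
            th = mul(t, h)
            return {mul(s, th) for s in HK}     # (H meet K) t h
    return set()

# Example in S4: H = stabiliser of the point 0, K = the Klein four-group.
H = closure([(0, 2, 1, 3), (0, 1, 3, 2)])       # <(1 2), (2 3)>
K = closure([(1, 0, 3, 2), (2, 3, 0, 1)])       # <(0 1)(2 3), (0 2)(1 3)>
h, k = (1, 0, 2, 3), (1, 2, 0, 3)               # (0 1) and (0 1 2)
print(coset_intersection(H, K, h, k))           # {(1, 2, 0, 3)}, i.e. {(0 1 2)}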
{"url":"http://mathoverflow.net/questions/92891/working-with-group-cosets-in-magma?answertab=oldest","timestamp":"2014-04-18T01:12:14Z","content_type":null,"content_length":"59245","record_id":"<urn:uuid:13c65203-673b-46bc-b271-25cee27d307e>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
An EM Simulator for MEMS and Real Life MMICs Product Feature An EM Simulator for MEMS and Real Life MMICs MEM Research Spoltore, Pescara, Italy Generally speaking, there are two large families of commercially available electromagnetic (EM) solvers. The first one is a general-purpose set, covering the analysis of arbitrarily shaped structures and using finite elements or differences, either in the frequency- or time-domains, or some variations of these principles. The second one, based on the method of moments (MoM), and generally developed in a spectral domain (SD) formulation, includes programs expressly designed for the efficient analysis of printed planar structures. While the first solver family is in principle able to address virtually any problem, it is faced with severe difficulties whenever the structure under analysis has a critical aspect-ratio, that is, the ratio between the largest and smallest dimensions involved. This condition is often encountered when addressing planar multilayer structures. Such limitation stems from the need of discretizing, that is, dividing in sub-domains (mesh), the whole space surrounding the structure. Smaller objects impose a constraint on the sub-domain dimensions. This is true even when graded meshes are allowed. While the second family solver lacks such generality, it provides instead unpaired performance when addressing planar structures - according to the underlying technique, only surface electric currents over infinitely thin conductors (or magnetic currents over gaps) have to be subdivided. Information about the surrounding space is embedded within an operator known as the Green function. The Green function is by definition an operator linking an arbitrary current distribution to the electric and magnetic fields. While obtaining such an operator requires additional preprocessing, the efficiency and accuracy are excellent, as the number of unknowns in the final system is greatly reduced. A drawback is that only infinitely thin conductors are normally handled (hence they are indicated as 2.5D methods), even though this limitation is partially circumvented by some devices. Fig. 1 The shunt RF MEM switch reported by Pacheco, et al. Real Life Problems The two software families seem to cover the entire set of EM problems. However, monolithic microwave integrated circuits (MMIC) and microelectromechanical systems (MEMS) are generally a challenge for both families, lying between the two classes of problems each of them addresses. They are on the one hand not completely planar, in the sense that conductor (and dielectric) thickness is often of the same order of magnitude as other dimensions, while on the other hand, having definite critical aspect-ratios. A couple of examples will clarify the scenario. A coplanar waveguide (CPW) on a modulation-doped substrate (MOD), such as the ones involved in field effect transistors (MOD-FET), may be engineered on a bulk substrate several hundred microns thick. At the same time the structure may support a thin so-called two-dimensional electron gas (2DEG) that acts as a lossy layer with thickness in the order of 0.1 micron, while conductors may be a few microns spaced and thick. More generally, MMICs feature complex multilayer structures - hence critical aspect-ratios - while not being reasonably planar. An equally interesting example is that of a capacitive MEM shunt switch. 
An equally interesting example is that of a capacitive MEM shunt switch. In its simpler configuration, it is basically a CPW with a bridge, so that an electrostatic potential may snap it down to create a high shunt capacitance able to reflect an incoming RF signal. In order to avoid a static short circuit in this condition, a passivation dielectric area, with thickness ranging around 0.1 micron, is grown under the bridge. At the same time conductors may be a few microns thick.

MEMS are attracting more and more attention, as witnessed by the number of technical papers dedicated to them and the investment in MEMS R&D. According to MEMS Clearing House (www.memsnet.org): "...MEMS is destined to become a hallmark 21st-century manufacturing technology with numerous and diverse applications having a dramatic impact on everything from aerospace technology to biotechnology." They appear to be a virtual panacea in high frequency design, providing significant reductions in power consumption and signal loss, as needed in order to shrink devices. In spite of this, advanced RF simulation and modeling tools for MEMS design are still needed.

Fig. 2 An EM3DS model of the MEM switch.

EM3DS Foundations

In this context EM3DS is a novel EM simulator, conceived to address those problems lying to some extent between the capabilities of the two aforementioned families. Its underlying principle, the generalized transverse resonance-diffraction (GTRD) approach, was developed to address the EM modeling of active linear devices. Generally speaking, its EM engine may be classified as a 3D MoM approach specifically developed to take advantage of the peculiarities of multilayer quasi-planar structures. EM3DS uses volume currents to model conductors and dielectric discontinuities,^1 rather than surface currents. Using volume currents involves an additional computational load with respect to traditional 2.5D MoM simulators, but several analytical techniques, including an asymptotic accelerator, are implemented so as to reduce the required resources to those normally required by 2.5D approaches.

Figure 1 shows the shunt RF MEM switch reported by Pacheco, et al.,^2 while Figure 2 shows its model in EM3DS. Results are shown in Figure 3 for the transmitting position (switch unactuated) and the reflecting state, including the comparison with the measurements.^2 Figure 4 shows the current distributions over the underlying CPW at 1 and 40 GHz, where the effect of the high bridge capacitance is visually evident.

Fig. 3 Calculated and measured S-parameters for the (a) transmitting and (b) reflecting states.

Performances and Features

This kind of agreement is typical for EM3DS in MEMS analysis. The simulation takes only a few minutes on a desktop computer. The same analysis could require hours, if not days, to be performed by a space-discretizing technique. The best performance of EM3DS is obtained for 3D structures involving complex substrates, those structures known to be a challenge to available commercial software packages. Its frequency-domain formulation ensures reliable results even for strongly resonant structures that usually prove to be very difficult for time-domain approaches. At the same time a new algorithm, an asymptotic estimator, guarantees reduced computational time for broadband simulations.

EM3DS is designed to run on a PC under the Microsoft Windows platform, with an advanced and professional graphical user interface (GUI). A key feature is that the user interface guarantees simple understanding of, and access to, the program capabilities.
Developed "ab initio" as an object-oriented tool, the time required to properly use EM3DS is usually very short. Fig. 4 Current distributed on the switch CPW at (a) 1 and (b) 40 GHz. The structure to be modelled is entered by using a highly intuitive editor - the user defines a dielectric stack by selecting a set of layers with desired properties. Each dielectric layer is accessed by the editor. The user selects a layer and enters the items that it should contain. The material constituting each item may be arbitrarly defined, allowing thick lossy conductors as well as dielectric discontinuities to be modeled. In the previous MEM example, a dielectric brick is defined in order to model the silicon nitride region preventing the bridge from sticking over the CPW in its actuated position. The object thickness is the same of the embedding layer - every object, unlike in 2.5D approaches, has its finite thickness. A 3D view, one that the user can rotate, zoom and handle as desired, is updated in real-time. A set of tools are also available in order to simplify entering complex structures (circular spirals, rings and circles, for example). EM3DS also includes GDSII and DXF translators allowing the simulator to reuse the design of other sources, or simply using external editors. There is no grid constraining the user design. This feature eases matching real dimensions of the structure being imported. Once the desired structure is completed, the user has only to select a set of excitation points (ports) and the frequency range, and then to press a Go button. S-parameters, as well as a number of additional measurements (Z and Y parameters, quality factor and characteristic parameters of the feeding lines), are available. Parameters may be updated in real-time, and the user can see results during computation. Raw and calibrated results are always available. A powerful post-processor enables the simulator to display network parameters in multiple charts, either rectangular or Smith plots. Network parameters may be exported and imported as Touchstone text files, while each view may be copied into the clipboard so as to be used by other applications, as well as exported in standard graphical formats (BMP, EMF). At the end of a simulation, the volume current distribution may be displayed, animated in time or frequency, and animations may be saved in standard formats (AVI, GIF). An intuitive Data Browser allows simple access to the multiple graphs and external data. Post-processing includes a quick Spice-model extractor for electrically short devices, a linear circuit solver allowing lumped and distributed elements to be connected to the EM simulation, and TRL numerical de-embedding for rectangular waveguide structures. Fig. 5 A double MEM switch with tuning line. Some Examples Figure 5 shows one more MEM example - a double MEM capacitive switch.^3 It includes two membranes and a high impedance line selected in order to lower reflections over the band of the switch. Figure 6 shows a comparison between EM3DS results and experimental ones in the transmitting position. These simulations take advantage of several features of EM3DS, such as the ability to deal with conductors having finite thickness and losses, and to handle dielectric blocks, along with an effective built-in de-embedding algorithm. Fig. 6 S-parameter results for the double MEM switch in the transmitting position. The capabilities of EM3DS may also be successfully used to address other quasi-planar and planar structures. 
The capabilities of EM3DS may also be successfully used to address other quasi-planar and planar structures. Figure 7 depicts a three-pole low pass filter obtained using a photonic band-gap (PBG) structure.^4 It is an interesting structure since it involves slots in the ground plane, a microstrip structure and radiation losses. The comparison with the experimental results is reported in Figure 8, showing a satisfactory agreement even in this case.

Fig. 7 A low pass PBG filter.

Coplanar waveguides with very thick conductors (compared to the interelectrode spacing), as encountered in electro-optical modulators, are a natural test-bed for EM3DS. In this case, neglecting the conductor thickness as in 2.5D approaches results in errors ranging near 50 percent (or more). On the other hand, using space-discretizing tools requires large computational time, owing to the complex substrates involved. As a final note, due to its 3D nature and to a new calibration technique, EM3DS is also able to handle a class of two-port waveguide components.

Fig. 8 S-parameters for the low pass PBG filter.

A new, low cost software package, implementing cutting-edge theory and addressing the 3D EM analysis of structures involving complex substrates, as usually encountered in common MMIC design, has been presented. Additional details about EM3DS may be found at the company's Web site at www.memresearch.com, where a complete version (including manuals and examples) may be downloaded and activated for a 60-day trial period. Moreover, a completely free companion of EM3DS, Free EM3DS (no time limitation but functionally limited), is also available.

References

1. M. Farina and T. Rozzi, "A 3D Integral Equation-based Approach to Analysis of Real Life MMICs: Application to Microelectromechanical Systems," IEEE Transactions on Microwave Theory and Techniques, Vol. 49, No. 12, December 2001, pp. 2235-2240.

2. S.P. Pacheco, L.P.B. Katehi and C.T.C. Nguyen, "Design of Low Actuation Voltage RF MEMS Switch," IEEE 2000 International Microwave Symposium Proceedings, Boston, MA, June 11-16, 2000.

3. J.B. Muldavin and G.M. Rebeiz, "High Isolation CPW MEMS Shunt Switches - Part II: Design," IEEE Transactions on Microwave Theory and Techniques, Vol. 48, No. 6, June 2000, pp. 1053-1056.

4. D. Ahn, et al., "A Design of the Low Pass Filter Using the Novel Microstrip Defected Ground Structure," IEEE Transactions on Microwave Theory and Techniques, Vol. 49, No. 1, January 2001.

MEM Research, Spoltore, Pescara, Italy +39 347 8279009.
{"url":"http://www.microwavejournal.com/articles/3441-an-em-simulator-for-mems-and-real-life-mmics","timestamp":"2014-04-18T03:05:40Z","content_type":null,"content_length":"65276","record_id":"<urn:uuid:d0089220-f509-4b50-af31-9dd235a0bc31>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
Structure and properties of the algebra of partially transposed operators

Seminar Room 1, Newton Institute

We consider the structure of the algebra of n-fold tensor product operators, partially transposed on the last term. Using purely algebraic methods we show that this algebra is semisimple and then, considering its regular representation, we derive basic properties of the algebra. In particular we describe all irreducible representations of the algebra of partially transposed operators. It appears that there are two kinds of irreducible representations of the algebra. The first one is strictly connected with the representations of the group S(n-1) induced by irreducible representations of the group S(n-2). The second kind is structurally connected with irreducible representations of the group S(n-1).
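For orientation, the algebra in question is usually written as follows; the notation is the one standard in the literature on partially transposed permutation operators, not the abstract's own. Let \(V_\sigma\), \(\sigma \in S_n\), denote the operator permuting the factors of \((\mathbb{C}^d)^{\otimes n}\); then

\[
\mathcal{A}_n^{t_n}(d) \;=\; \operatorname{span}_{\mathbb{C}}\{\, V_\sigma^{\,t_n} : \sigma \in S_n \,\},
\qquad
(A_1 \otimes \cdots \otimes A_n)^{t_n} \;=\; A_1 \otimes \cdots \otimes A_{n-1} \otimes A_n^{T},
\]

i.e. one takes the group algebra of \(S_n\) in its natural tensor representation and transposes every operator on the last factor only.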
{"url":"http://www.newton.ac.uk/programmes/MQI/seminars/2013102214001.html","timestamp":"2014-04-19T13:14:36Z","content_type":null,"content_length":"6162","record_id":"<urn:uuid:0119857b-8894-468c-8c9d-019ab01664fc>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Analysis
Tom M. Apostol

It provides a transition from elementary calculus to advanced courses in real and complex function theory and introduces the reader to some of the abstract thinking that pervades modern analysis.

Review: Mathematical Analysis (user review by Ross Lund, Goodreads)

Probably the only mathematical text I have ever read cover-to-cover. A very good introduction to Analysis as long as you already have some maths background. Naturally as a pure maths text it is very dry, with some positively arid sections, even for one interested in learning the subject.

Contents

- The Real and Complex Number Systems (p. 1)
- Some Basic Notions of Set Theory (p. 32)
- Elements of Point Set Topology (p. 47)
- 15 other sections not shown
{"url":"http://books.google.com/books?id=Le5QAAAAMAAJ&q=subinterval&dq=related:ISBN0070006571&source=gbs_word_cloud_r&cad=6","timestamp":"2014-04-17T06:55:52Z","content_type":null,"content_length":"127405","record_id":"<urn:uuid:78cbdb3a-8d75-4ec1-b8f1-824c416fc510>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
Fuzzy Preferences for Multi-Criteria Negotiation

Bartel Van de Walle and Peyman Faratin

We are interested in how cooperation can arise in types of environments, such as open systems, where little or nothing is known about the other agents. We view the negotiation problem as a strategic and communication-rich process between different local preference/decision models. This contrasts with the classical cooperative game-theoretic (axiomatic) view of the negotiation process as a centralized and linear optimization problem. Although unconcerned with the processes of negotiation, such axiomatic models of negotiation (in particular those of the mechanism design tradition) have assumed optimality can be achieved through the design of normative rules of interaction that incent agents to act rationally (von Neumann and Morgenstern 1944), (Rosenschein and Zlotkin 1994), (Binmore 1990), (Shehory and Kraus 1995), (Sandholm 1999). Likewise, in operations research the focus is the design of optimal solution algorithms based on mathematical programming techniques (Kraus 1997), (Ehtamo, Ketteunen, and Hamalainen 2001), (Heiskanen 1999), (Teich et al. 1996). In both cases optimality is achieved because: a) the geometry of the solution set is assumed to be described by a closed and convex set (therefore there is a bounded number of solution points), b) the objective functions of the individuals (the utility functions) are concave and differentiable, and c) some global information (such as the actual utility of the agents or the utility gradient increase vector (Ehtamo, Ketteunen, and Hamalainen 2001)) is stored by a (hypothetical) centralized mediator that acts to direct problem solvers towards Pareto-optimality. However, although analytically elegant, such optimality cannot be guaranteed in decentralized autonomous agent systems operating in open environments where information is sparse. Indeed, lack of information or knowledge often leads to inefficiencies and suboptimality in the negotiated outcome.
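For concreteness, the canonical axiomatic benchmark in the cooperative tradition contrasted above is Nash's bargaining solution (standard background, not part of the abstract itself): for two agents with utilities \(u_1, u_2\), feasible set \(\mathcal{F}\) and disagreement point \((d_1, d_2)\), it selects

\[
\max_{(u_1, u_2) \in \mathcal{F}} \;(u_1 - d_1)(u_2 - d_2),
\]

exactly the kind of centralized, convexity-dependent optimization the authors set against decentralized, information-sparse negotiation.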
{"url":"http://aaai.org/Library/Symposia/Fall/2001/fs01-03-014.php","timestamp":"2014-04-16T16:15:08Z","content_type":null,"content_length":"3835","record_id":"<urn:uuid:3d6ea32e-61cf-4173-aca1-5ed885bae153>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: June 1997

RE: problems with precision

 • To: mathgroup at smc.vnet.net
 • Subject: [mg7673] RE: [mg7646] problems with precision
 • From: Ersek_Ted%PAX1A at mr.nawcad.navy.mil
 • Date: Thu, 26 Jun 1997 01:36:57 -0400 (EDT)
 • Sender: owner-wri-mathgroup at wolfram.com

A French student wrote:

|We had to calculate with high accuracy (more than 20 digits) the solution
|of a 2 dimensional non-linear system. The problem to solve was very easy.
|But Maple and Mathematica agreed only on the 15 first digits. I used the
|function SetPrecision. Is there another function to set calculus precision?
|Here is the listing of my routine in Mathematica :
|, {p, 1, 20, 1}]

In the above, FindRoot used MachinePrecision (the default precision). Then SetPrecision was used to pad the binary form of the solution with zeros so that it has 20 decimal digits of precision. Note: after tacking on a bunch of binary zeros we don't get a bunch of decimal zeros. What we did is like weighing a mass in chem lab on a garden-variety scale and saying it weighs 34.1500000000000000002367 grams.

Now how do we get the desired answer? First, try looking at the FindRoot options.

In[1]:= Options[FindRoot]

Out[1]= {AccuracyGoal->Automatic, Compiled->True, DampingFactor->1,
         Jacobian->Automatic, MaxIterations->15,

Now let's learn about WorkingPrecision.

In[2]:= ?WorkingPrecision

Out[2]= WorkingPrecision is an option for various numerical operations which
        specifies how many digits of precision should be maintained in
        internal computations.

The line below will give the right solution for one of the cases.

In[3]:= soln = FindRoot[
          {x + Cos[y] * Sinh[x] == 0, y + Sin[y] * Cosh[x] == 0},
          {x, Abs[Cos[-35 Pi/4]] Sqrt[Pi + (Pi/4) / Sin[5 Pi/4]^2 - 1]},
          {y, 5/4 Pi},
          WorkingPrecision -> 25]

Out[3]= {x -> 2.250728611601860542840015, y -> 4.21239223049066060098377}

But be careful: you may need to use more than 20 digits of WorkingPrecision to get a solution with 20 digits of Precision. Why? Because the Precision of f[x] is less than the Precision of x when the magnitude of f'[x] is large.

The Options command is a very handy feature, especially for some of the commands. For example try the following:

In[4]:= Options[Plot]

Ted Ersek
ersek_ted%pax1a at mr.nawcad.navy.mil
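For reference, the precision warning above follows from a standard first-order error-propagation argument, stated here in general form rather than quoted from the post: if \(x\) carries an uncertainty \(\delta x\), then

\[
\delta f \approx |f'(x)|\,\delta x
\quad\Longrightarrow\quad
\operatorname{Precision}[f(x)] \;\approx\; \operatorname{Precision}[x] \;-\; \log_{10}\!\left|\frac{x\,f'(x)}{f(x)}\right|,
\]

so whenever the condition number \(|x f'(x)/f(x)|\) is large, several working digits are consumed before the requested precision of the result is reached.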
{"url":"http://forums.wolfram.com/mathgroup/archive/1997/Jun/msg00100.html","timestamp":"2014-04-17T21:49:14Z","content_type":null,"content_length":"36430","record_id":"<urn:uuid:baf00d12-33a8-42f4-8a3c-d3b0d7eccbc4>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
Ubuntuland & The Dream Valley

The emulator can be found at genymotion.com. I should preface this post by stating that I have only used the Genymotion emulator for around twenty minutes, but it has already impressed me to the point where I would consider using it as my main app testing device (I currently use a hardware device).

I have a few problems with the official Android emulator:

• Hard to enable x86 emulation on Linux
• Difficult to spoof GPS coordinates
• Not very user friendly

In the short period of testing Genymotion, all of these points were resolved. Installation was literally less than five minutes, the emulator ran buttery smooth, and there were lots of configuration options for testing and running apps. It even comes with images that include the official Google apps, so you can essentially use the emulator like a regular hardware device.

Compass and Ruler (C.a.R.) is a dynamic geometry software package developed by Rene Grothmann since 1989. You can find a history of C.a.R. in the author's wiki. You'll find a lot of information about this software on the official site of C.a.R. Rene Grothmann, a mathematics teacher at the Catholic University of Eichstätt (Germany), built powerful and reliable algorithms into his software to manage geometric objects and the relations between them: this makes it possible to create very complex geometric constructions. For the graphical interface, Rene also did very good work on the interaction between geometric objects and the user, which also makes it possible to use C.a.R. with young pupils in primary school.

1.- The rEFInd Boot Manager for computers based on the Extensible Firmware Interface (EFI) and Unified EFI (UEFI).

This page describes rEFInd, my fork of the rEFIt boot manager for computers based on the Extensible Firmware Interface (EFI) and Unified EFI (UEFI). Like rEFIt, rEFInd is a boot manager, meaning that it presents a menu of options to the user when the computer first starts up, as shown below. rEFInd is not a boot loader, which is a program that loads an OS kernel and hands off control to it.

2.- Bugsx a program to display and evolve biomorphs.

Bugsx draws the biomorphs based on parametric plots of Fourier sine and cosine series and lets you play with them using the genetic algorithm. A paper describing the theoretical background of bugsx is included in the source-only package. What is bugsx? bugsx runs under MIT's X11 window system. It was written under UNIX but should be easily portable. It is a program which draws the

3.- Axiom is a general purpose Computer Algebra system.

This project is not part of the GNU Project. Axiom is a general purpose Computer Algebra system. It is useful for doing mathematics by computer and for research and development of mathematical algorithms. It defines a strongly typed, mathematically correct type hierarchy. It has a programming language and a built-in compiler. Axiom has been in development since 1973 and was sold as a

4.- Calc is an interactive calculator which provides for easy large numeric calculations.

Calc is an interactive calculator which provides for easy large numeric calculations, but which also can be easily programmed for difficult or long calculations. It can accept a command line argument, in which case it executes that single command and exits. Otherwise, it enters interactive mode. In this mode, it accepts commands one at a time, processes them, and displays the answers. In

5.- Top Tech Innovations of 2013 [Infographic].
Whenever the latest piece of time-saving technology is released we ask ourselves, "how did I ever get on without this?" Okay, maybe we could have lived our lives without the vibrating fork that lets you know when you're eating too fast, but you get the idea. The iPad was a lifesaver for many of us young people whose laptops and phones weren't enough. Just a short time after its debut release

6.- January 2014: 10 Most Popular Posts from Ubuntuland & The Dream Valley.

1.- Who Uses Torrents to get Free Music/Video? According to an info-graphic published by BitRebels a year ago, movies like Avatar, Batman - the Dark Knight, Transformers, Star Trek or Inception were just 5 out of 10 most downloaded movies of all time using torrents. The average user age who downloads free music or videos using torrents is somewhere around 22 years old and strongly believes in

7.- Who Uses Torrents to get Free Music/Video?

According to an info-graphic published by BitRebels a year ago, movies like Avatar, Batman - the Dark Knight, Transformers, Star Trek or Inception were just 5 out of 10 most downloaded movies of all time using torrents. The average user age who downloads free music or videos using torrents is somewhere around 22 years old and strongly believes in online freedom. 70% of online users find

8.- December 2013: 10 Most Popular Posts from Ubuntuland & The Dream Valley.

1.- Ubuntu Will Add Torrent Search to Embed Free Culture Into Us... A new scope set to be included in Ubuntu by default will allow users of The Pirate Bay to conduct BitTorrent searches directly from the Unity desktop. The tool's creator informs TorrentFreak that while there is still work to be done, the aim of the scope - which is endorsed by Canonical founder Mark Shuttleworth - is to embed Free

9.- Ubuntu Will Add Torrent Search to Embed Free Culture Into User Experience

A new scope set to be included in Ubuntu by default will allow users of The Pirate Bay to conduct BitTorrent searches directly from the Unity desktop. The tool's creator informs TorrentFreak that while there is still work to be done, the aim of the scope - which is endorsed by Canonical founder Mark Shuttleworth - is to embed Free Culture directly into the Ubuntu user experience. In early

10.- Django the web framework for perfectionists with deadlines.

Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design. Developed by a fast-moving online-news operation, Django was designed to handle two challenges: the intensive deadlines of a newsroom and the stringent requirements of the experienced Web developers who wrote it. It lets you build high-performing, elegant Web applications quickly.

This page describes rEFInd, my fork of the rEFIt boot manager for computers based on the Extensible Firmware Interface (EFI) and Unified EFI (UEFI). Like rEFIt, rEFInd is a boot manager, meaning that it presents a menu of options to the user when the computer first starts up, as shown below. rEFInd is not a boot loader, which is a program that loads an OS kernel and hands off control to it. (Since version 3.3.0, the Linux kernel has included a built-in boot loader, though, so this distinction is rather artificial these days, at least for Linux.) Many popular boot managers, such as the Grand Unified Bootloader (GRUB), are also boot loaders, which can blur the distinction in many users' minds.
All EFI-capable OSes include boot loaders, so this limitation isn't a problem. If you're using Linux, you should be aware that several EFI boot loaders are available, so choosing between them can be a challenge. In fact, the Linux kernel can function as an EFI boot loader for itself, which gives rEFInd characteristics similar to a boot loader for Linux. See my Web page on this topic for more information.

In theory, EFI implementations should provide boot managers. Unfortunately, in practice these boot managers are often so poor as to be useless. The worst I've personally encountered is on Gigabyte's Hybrid EFI, which provides you with no boot options whatsoever, beyond choosing the boot device (hard disk vs. optical disc, for instance). I've heard of others that are just as bad. For this reason, a good EFI boot manager - either standalone or as part of a boot loader - is a practical necessity for multi-booting on an EFI computer. That's where rEFInd comes into play.

I decided to fork the earlier rEFIt project because, although rEFIt is a useful program, it's got several important limitations, such as poor control over the boot loader detection process and an ability to display at most a handful of boot loader entries on its main screen. Christoph Pfisterer, rEFIt's author, stopped updating rEFIt with version 0.14, which was released in March of 2010. Since I forked rEFIt to rEFInd, Christoph has begun pointing rEFIt users to rEFInd as a successor project.

The rEFIt Web page has a distinct Mac bias, and the provided binaries work only on Macs because they're 32-/64-bit "fat" binaries, which Macs can handle but UEFI-based PCs can't. rEFIt can be recompiled to work on UEFI-based PCs, but prebuilt binaries for such systems are relatively rare. Although I do own a Mac Mini, my interest lies more on the side of standard PC hardware, and hence with UEFI. My development platform is Linux, and my installation instructions and binaries are much more platform-neutral. I'm aware that many Mac users will consider this a step backward, but I ask their indulgence; I only have so many hours a week to work on this project, and I prefer to devote my efforts to improvements that will benefit all rEFInd users, at least initially.

As already noted, rEFInd is a boot manager for EFI and UEFI computers. (I use "EFI" to refer to either version unless the distinction is important.) You're likely to benefit from it on computers that boot multiple OSes, such as two or more of Linux, Mac OS X, and Windows. You will not find rEFInd useful on older BIOS-based computers. Prior to mid-2011, few computers outside of Intel-based Macs used EFI; but starting in 2011, computer manufacturers began adopting UEFI in droves, so most computers bought since then use EFI. Even so, many modern PCs support both EFI-style booting and BIOS-style booting, the latter via a BIOS compatibility mode that's known as the Compatibility Support Module (CSM). Thus, you may be using BIOS-style booting on an EFI-based computer. If you're unsure which boot method your computer uses, check the first of the subsections, What's Your Boot Mode.

Subsequent sections of this document are on separate pages. Be aware that you probably don't need to read them all; just skip to the sections that interest you.

Note: I consider rEFInd to be beta-quality software! I'm discovering bugs (old and new) and fixing them every few days. That said, rEFInd is a usable program in its current form on many systems. If you have problems, feel free to drop me a line.
Bugsx draws the biomorphs based on parametric plots of Fourier sine and cosine series and lets you play with them using the genetic algorithm. A paper describing the theoretical background of bugsx is included in the source-only package.

What is bugsx?

bugsx runs under MIT's X11 window system. It was written under UNIX but should be easily portable. It is a program which draws biomorphs based on parametric plots of Fourier sine and cosine series and lets you play with them using the genetic algorithm. The original version, which ran under Suntools and XViews, was written by Joshua R. Smith sometime in 1990. See the 'credits' section for more details. For more information about the theoretical background of bugsx, consult Joshua R. Smith's paper, distributed with this program in gzip'ed PostScript format as bugs.ps.gz. You have to uncompress this file with 'gunzip' before you can print it. Gunzip should be available at an archive near you. The paper can also be gotten from:

Command line parameters

bugsx accepts the following parameters as command line options:

+rv                 reverse video (use to override xrdb entry)
+synchronous        synchronous mode (use to override xrdb entry)
-?                  help
-background <arg>   background color
-batch              run program in batch mode
-bg <arg>           same as -background
-bordercolor <arg>  border color
-borderwidth <arg>  border width
-cycle <arg>        re-initialize population after n batch turns
-display            display
-extend_print       show extended reproduction info while running
-fg <arg>           same as -foreground
-font <arg>         font
-foreground <arg>   foreground color (also file system bar color)
-geometry <arg>     geometry
-help               help
-iconic             iconic
-interval <arg>     interval used per turn
-mb                 show menu border
-name <arg>         run bugsx under this name
-nobreed            do not breed when running in batch mode
-number <arg>       number of biomorphs to draw (must be a square #)
-printpop           print the population when breeding
-rv                 reverse video
-seed <arg>         use this seed for random number generator
-segments <arg>     use this many segments to draw an organism
-showbreed          show breeding subpopulation when in batch mode
-showgenes          show a graphic representation of the genes
-synchronous        synchronous mode
-v                  verbose
-xrm                make no entry in resource database

bugsx recognizes the following X resources. Usually bugsx will search for resources under the program name, but you can override this with the -name flag. If you do not wish to use a specific application defaults file, you can execute xrdb -merge to merge your resource specifications into the X resource database. bugsx first checks the directory pointed at by the environment variable XAPPLRESDIR. If this doesn't yield any resource definitions it checks the APP_DEFAULTS_DIR. This is defined in your headers or in bugsx.h. If you want to change this you'll have to recompile bugsx.
background         universal background color
batch              run program in batch mode
batchbreed         do not breed when running in batch mode
borderColor        border color
borderWidth        border width
cycle              re-initialize population after n batch turns
display            display
extend_print       show extended reproduction info while running
font               font
foreground         universal foreground color
help               show help
mainWin.geometry   main window geometry
iconic             start program in iconic mode
interval           interval used per turn
minimize           minimize window size
menuborder         draw menu borders
name               run bugsx under this name
number             number of biomorphs to draw (must be a square #)
printpop           print the population when breeding
reverseVideo       reverse video
seed               use this seed for random number generator
segments           use this many segments to draw an organism
showbreed          show breeding subpopulation when in batch mode
showgenes          show a graphic representation of the genes
synchronous        synchronous mode
verbose            verbose mode

To install bugsx, just follow these instructions. Check that the multiverse repository is enabled. Inspect /etc/apt/sources.list using your favourite editor with sudo, which will ensure that you have the correct permissions:

sudo gedit /etc/apt/sources.list

Ensure that multiverse is included. After any changes you should run this command to update your system:

sudo apt-get update

You can now install the package like this:

sudo apt-get install bugsx

which will install bugsx and any other packages on which it depends.
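To make the "parametric plots of Fourier sine and cosine series" idea concrete, here is a minimal sketch of how a biomorph-like outline can be drawn from a random "gene" vector of Fourier coefficients. This is an illustration of the principle only, written in Python rather than C, and is not taken from the bugsx sources:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
n_terms = 6                    # number of Fourier terms ("genes")
a = rng.normal(size=n_terms)   # coefficients for the x(t) cosine series
b = rng.normal(size=n_terms)   # coefficients for the y(t) sine series

t = np.linspace(0, 2 * np.pi, 500)
k = np.arange(1, n_terms + 1)[:, None]

# Parametric curve built from sine and cosine series, one term per "gene"
x = (a[:, None] * np.cos(k * t)).sum(axis=0)
y = (b[:, None] * np.sin(k * t)).sum(axis=0)

plt.plot(x, y)
plt.axis("equal")
plt.title("A biomorph-like Fourier curve")
plt.show()
```

A genetic algorithm would then mutate and recombine the vectors a and b from one generation to the next, keeping the curves that the user (or a fitness function) prefers.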
Of course, when working in this domain, 1 is interpreted as the identity matrix and A^-1 would give the inverse of the matrix A, if it exists. Several operations can have the same name, and the types of both the arguments and the result are used to determine which operation is applied (cf. function overloading). Axiom comes with an extension language called SPAD. All the mathematical knowledge of Axiom is written in this language. The interpreter accepts roughly the same language. SPAD was further developed under the name A# and later Aldor. The latter can still be used as an alternative extension language. It is, however, distributed under a different license. Within the interpreter environment, Axiom uses type inference and a heuristic algorithm to make explicit type annotations mostly unnecessary. It features 'HyperDoc', an interactive browser-like help system, and can display two and three dimensional graphics, also providing interactive features like rotation and lighting. It also has a specialised interaction mode for Emacs, as well as a plugin for the TeXmacs editor. Pre-compiled binaries Axiom has been compiled to run on various platforms. This table contains links to various tar-gzipped version of files. In general you need to know the name of the file you download, usually something ending in .tgz (tar-gzip). You also need to know where the file gets untarred, this is referred to as (where) below. When you cd to the (where) location you should see the top level Makefile for Axiom, the changelog, etc. Axiom builds on various platforms and uses the convention that the last name in the AXIOM shell variable denotes the type of system. This is referred to as the SYSNAME. You need to know which SYSNAME you downloaded. To use one of these binaries just do: download the binary and untar it. cd axiom export AXIOM=`pwd`/mnt/SYSNAME <= replace SYSNAME with actual name export PATH=$AXIOM/bin:$PATH src bin Source code Axiom source code is maintained in a Gold and Silver version. The Gold version is the "released" version. Gold versions are released every two months. The Silver version is the current "bleeding edge" that contains changes which will be tested and released into Gold every two months. Unless you need a recent feature or bug fix, or are working as a developer, there is no reason to use Silver GOLD SOURCES The Gold (November 2008) release of Axiom is available. The source code tarball from November, 2008 is here wget http://axiom.axiom-developer.org/axiom-website/downloads/axiom-july2008-src.tgz tar -zxf axiom-july2008-src.tgz cd axiom export AXIOM=`pwd`/mnt/ (see table below) export PATH=$AXIOM/bin:$PATH You can clone the git repository from GitHub: git clone git://github.com/daly/axiom.git cd axiom export AXIOM=`pwd`/mnt/ (see table below) export PATH=$AXIOM/bin:$PATH Or you can download the sourcecode from savannah: cd axiom export AXIOM=`pwd`/mnt/ (see table below) export PATH=$AXIOM/bin:$PATH If you liked this article, subscribe to the feed by clicking the image below to keep informed about new contents of the blog: Calc is an interactive calculator which provides for easy large numeric calculations, but which also can be easily programmed for difficult or long calculations. It can accept a command line argument, in which case it executes that single command and exits. Otherwise, it enters interactive mode. In this mode, it accepts commands one at a time, processes them, and displays the answers. In the simplest case, commands are simply expressions which are evaluated. 
For example, the following line can be input: 3 * (4 + 1) and the calculator will print: All numbers are represented as fractions with arbitrarily large numerators and denominators which are always reduced to lowest terms. Real or exponential format numbers can be input and are converted to the equivalent fraction. Hex, binary, or octal numbers can be input by using numbers with leading '0x', '0b' or '0' characters. Complex numbers can be input using a trailing 'i', as in '2+3i'. Strings and characters are input by using single or double quotes. Commands are statements in a C-like language, where each input line is treated as the body of a procedure. Thus the command line can contain variable declarations, expressions, labels, conditional tests, and loops. Assignments to any variable name will automatically define that name as a global variable. The other important thing to know is that all non-assignment expressions which are evaluated are automatically printed. Thus, you can evaluate an expression's value by simply typing it in. Many useful built-in mathematical functions are available. Use the: help builtin command to list them. Useful calc links: If are interested in algorithms for finding very large primes: Some of the methods used by the Amdahl 6 to discover (what was at the time of discovery, the largest known non-Mersenne prime): (391581*2^216193-1) are contained in the calc script lucas.cal. This calc script is available as part of calc distribution. If are interested the product of curious primes: See the curious calc web page. Calc is free The calc program is free. It is distributed under the GNU Lesser General Public License. Calc is open software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation version 2.1 of the License. Many people have expressed their thanks the many people who wrote calc, tested calc, contributed code to calc, patched calc, and helped to improve it since it started back in 1984. We, the authors of calc wrote calc because they wanted to write it, because it was fun writing it, and because we wanted you to use and enjoy it. If you use and enjoy calc, and you want to show your appreciation for calc, you might consider doing some of the following: • Write and contribute some .cal resource files • Write and contribute some calc shell scripts • Work on one of the calc TODO items (type "help todo") Download & Install. Calc mirror sites will find the following special files: • calc-*.i686.rpm: installs calc executable & shared libraries • calc-devel-*.i686.rpm: installs headers, static executable & static libraries • calc-*.src.rpm: calc src as an rpm • calc-debuginfo-*.i686.rpm: non-stripped executable & libraries suitable for advanced debugging • *_IS_LATEST_STABLE: indicates the most recent stable version • *_IS_LATEST_UNSTABLE: indicates the most recent beta-test version • HOWTO.INSTALL: How to install from the calc gziped tarball in 4 easy steps • README: general download notes • CHANGES: changes made as of most recently built version • COPYING: calc GNU Lesser General Public License information • COPYING-LGPL: GNU Lesser General Public License • checksum.*: a checksum of the various calc tarballs If you liked this article, subscribe to the feed by clicking the image below to keep informed about new contents of the blog:
{"url":"http://ubuntulandforever.blogspot.com/","timestamp":"2014-04-17T18:22:32Z","content_type":null,"content_length":"216482","record_id":"<urn:uuid:37161b2a-3469-43af-aa37-59a079a23367>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
Every black hole may hold a hidden universe

We could be living inside a black hole. This head-spinning idea is one cosmologist's conclusion based on a modification of Einstein's equations of general relativity that changes our picture of what happens at the core of a black hole.

In an analysis of the motion of particles entering a black hole, published in March, Nikodem Poplawski of Indiana University in Bloomington showed that inside each black hole there could exist another universe (Physics Letters B, DOI: 10.1016/j.physletb.2010.03.029). "Maybe the huge black holes at the centre of the Milky Way and other galaxies are bridges to different universes," Poplawski says. If that is correct - and it's a big "if" - there is nothing to rule out our universe itself being inside a black hole.

In Einstein's general relativity (GR), the insides of black holes are "singularities" - regions where the density of matter reaches infinity. Whether the singularity is an actual point of infinite density or just a mathematical inadequacy of GR is unclear, as the equations of GR break down inside black holes. Either way, the modified version of Einstein's equations used by Poplawski does away with the singularity altogether.

For his analysis, Poplawski turned to a variant of GR called the Einstein-Cartan-Kibble-Sciama (ECKS) theory of gravity. Unlike Einstein's equations, ECKS gravity takes account of the spin or angular momentum of elementary particles. Including the spin of matter makes it possible to calculate a property of the geometry of space-time called torsion. When the density of matter reaches gargantuan proportions (more than about 10^50 kilograms per cubic metre) inside a black hole, torsion manifests itself as a force that counters gravity. This prevents matter compressing indefinitely to reach infinite density, so there is no singularity. Instead, says Poplawski, matter rebounds and starts expanding again.

Now, in what is sure to be a controversial study, Poplawski has applied these ideas to model the behaviour of space-time inside a black hole the instant it starts rebounding (arxiv.org/abs/1007.0587). The scenario resembles what happens when you compress a spring: Poplawski has calculated that gravity initially overcomes torsion's repulsive force and keeps compressing matter, but eventually the repulsive force gets so strong that the matter stops collapsing and rebounds.

Poplawski's calculations show that space-time inside the black hole expands to about 1.4 times its smallest size in as little as 10^-46 seconds. This staggeringly fast bounce-back, says Poplawski, could have been what led to the expanding universe we observe today.

How would we know if we are living inside a black hole? Well, a spinning black hole would have imparted some spin to the space-time inside it, and this should show up as a "preferred direction" in our universe, says Poplawski. Such a preferred direction would result in the violation of a property of space-time called Lorentz symmetry, which links space and time. It has been suggested that such a violation could be responsible for the observed oscillations of neutrinos from one type to another (Physical Review D, DOI: 10.1103/PhysRevD.74.105009).

If we are living inside a black hole, it would have imparted a 'special direction' to our universe.

Sadly, there is no point in us looking for other universes inside black holes. As you approach a black hole, the increasing gravitational field makes time tick slower and slower.
So, for an external observer, any new universe inside would form only after an infinite amount of time had elapsed.

This article originally appeared in New Scientist magazine.

Black hole image by Hikari Riku via PhotoBucket.
{"url":"http://io9.com/5596712/every-black-hole-may-hold-a-hidden-universe?tag=cosmology","timestamp":"2014-04-18T13:34:52Z","content_type":null,"content_length":"86469","record_id":"<urn:uuid:b896fad4-94dc-4e77-8626-66e0892fcfc8>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Conway's Game of Life in R with ggplot2 and animation

June 5, 2011
By John Ramey

In undergrad I had a computer science professor that piqued my interest in applied mathematics, beginning with Conway's Game of Life. At first, the Game of Life (not the board game) appears to be quite simple - perhaps, too simple - but it has been widely explored and is useful for modeling systems over time. It has been forever since I wrote my first version of this in C++, and I happily report that there will be no nonsense here.

The basic idea is to start with a grid of cells, where each cell is either a zero (dead) or a one (alive). We are interested in watching the population behavior over time to see if the population dies off, has some sort of equilibrium, etc. John Conway studied many possible ways to examine population behaviors and ultimately decided on the following rules, which we apply to each cell for the current tick (or generation).

1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
2. Any live cell with two or three live neighbours lives on to the next generation.
3. Any live cell with more than three live neighbours dies, as if by overcrowding.
4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

Although there are other versions of this in R, I decided to give it a shot myself. I am not going to provide a walkthrough of the code as I may normally do, but the code should be simple enough to understand for one proficient in R. It may have been unnecessary to implement this with the foreach package, but I wanted to get some more familiarity with foreach, so I did. The set of grids is stored as a list, where each element is a matrix of zeros and ones. Each matrix is then converted to an image with ggplot2, and the sequence of images is exported as an MP4 video with the animation package. Let me know if you improve on my code any. I'm always interested in learning how to do things better.

``` r
library('foreach')
library('ggplot2')
library('animation')
library('reshape2')

# Determines how many neighboring cells around the (j,k)th cell have living organisms.
# The conditionals are used to check if we are at a boundary of the grid.
how_many_neighbors <- function(grid, j, k) {
  size <- nrow(grid)
  count <- 0
  if(j > 1) {
    count <- count + grid[j-1, k]
    if (k > 1) count <- count + grid[j-1, k-1]
    if (k < size) count <- count + grid[j-1, k+1]
  }
  if(j < size) {
    count <- count + grid[j+1, k]
    if (k > 1) count <- count + grid[j+1, k-1]
    if (k < size) count <- count + grid[j+1, k+1]
  }
  if(k > 1) count <- count + grid[j, k-1]
  if(k < size) count <- count + grid[j, k+1]
  count
}

# Creates a list of matrices, each of which is an iteration of the Game of Life.
# Arguments
# size: the edge length of the square
# prob: a vector (of length 2) that generates cells with probability of death and life, respectively
# returns a list of grids (matrices)
game_of_life <- function(size = 10, num_reps = 50, prob = c(0.5, 0.5)) {
  grid <- list()
  grid[[1]] <- replicate(size, sample(c(0,1), size, replace = TRUE, prob = prob))
  dev_null <- foreach(i = seq_len(num_reps) + 1) %do% {
    grid[[i]] <- grid[[i-1]]
    foreach(j = seq_len(size)) %:%
      foreach(k = seq_len(size)) %do% {
        # Apply game rules.
        num_neighbors <- how_many_neighbors(grid[[i]], j, k)
        alive <- grid[[i]][j,k] == 1
        if(alive && num_neighbors <= 1) grid[[i]][j,k] <- 0
        if(alive && num_neighbors >= 4) grid[[i]][j,k] <- 0
        if(!alive && num_neighbors == 3) grid[[i]][j,k] <- 1
      }
  }
  grid
}

# Converts the current grid (matrix) to a ggplot2 image
grid_to_ggplot <- function(grid) {
  # Permutes the matrix so that melt labels this correctly.
  grid <- grid[seq.int(nrow(grid), 1), ]
  grid <- melt(grid)
  grid$value <- factor(ifelse(grid$value, "Alive", "Dead"))
  p <- ggplot(grid, aes(x = Var1, y = Var2, z = value, color = value))
  p <- p + geom_tile(aes(fill = value))
  p + scale_fill_manual(values = c("Dead" = "white", "Alive" = "black"))
}
```

As an example, I have created a 50-by-50 grid with a 10% chance that its initial values will be alive. The simulation has 500 iterations. You may add more, but this takes long enough already. Note that the default frame rate, which is controlled by interval, is 1 second. I set it to 0.05 to give a decent video.

``` r
set.seed(42)
game_grids <- game_of_life(size = 50, num_reps = 500, prob = c(0.1, 0.9))
grid_ggplot <- lapply(game_grids, grid_to_ggplot)
saveVideo(lapply(grid_ggplot, print), video.name = "animation.mp4", clean = TRUE, interval = 0.05)
```

I uploaded the resulting video to YouTube for your viewing pleasure.
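Since the author invites improvements, here is one common speed-up, sketched in Python/NumPy rather than R (our illustration, not the post's code): the per-cell neighbor loop can be replaced wholesale by a single 2-D convolution, and the same trick ports to R via any image-processing convolution routine.

```python
import numpy as np
from scipy.signal import convolve2d

def life_step(grid):
    """Advance one Game of Life generation; grid is a 2-D array of 0s and 1s."""
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    # Count the 8 neighbours of every cell at once; cells past the edge count as dead
    n = convolve2d(grid, kernel, mode="same", boundary="fill", fillvalue=0)
    # Conway's rules: survival with 2-3 neighbours, birth with exactly 3
    return (((grid == 1) & ((n == 2) | (n == 3))) |
            ((grid == 0) & (n == 3))).astype(int)
```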
{"url":"http://www.r-bloggers.com/conways-game-of-life-in-r-with-ggplot2-and-animation/","timestamp":"2014-04-18T00:28:58Z","content_type":null,"content_length":"44183","record_id":"<urn:uuid:d2ee3d8b-8b70-4329-bcfd-0449aef67f6b>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
Constructing A Golden Rectangle - Method One

Isn't it strange that the Golden Ratio came up in such unexpected places? Well, let's see if we can find out why. The Greeks were the first to call phi the Golden Ratio. They associated the number with perfection. It seems to be part of human nature or instinct for us to find things that contain the Golden Ratio naturally attractive - such as the "perfect" rectangle. Realizing this, designers have tried to incorporate the Golden Ratio into their designs so as to make them more pleasing to the eye. Doors, notebook paper, textbooks, etc. all seem more attractive if their sides have a ratio close to phi. Now, let's see if we can construct our own "perfect" rectangle.

You will need a piece of paper, a pencil, and a protractor to complete this activity.

We'll start by making a square, any square (just remember that all sides have to have the same length, and all angles have to measure 90 degrees!). Please note that everyone will have different size squares to begin with.

Now, let's divide the square in half (bisect it). Be sure to use your protractor to divide the base and to form another 90 degree angle. Notice that we have made two rectangles.

Now, draw in one of the diagonals of one of the rectangles. Measure the length of the diagonal and make a note of it.

Now extend the base of the square from the midpoint of the base by a distance equal to the length of the diagonal (the length of the diagonal should be equal to the distance from the midpoint of the OLD base to the edge of your NEW base).

Construct a new line perpendicular to the base at the end of our new line, and then connect to form a rectangle. Measure the length and the width of your rectangle. Now, find the ratio of the length to the width. Are you surprised by the result? The rectangle you have made is called a Golden Rectangle because it is "perfectly" proportional.
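Why this works, stated in symbols rather than in the original hands-on activity: if the square has side \(s\), the diagonal of the half-rectangle has length \(\sqrt{(s/2)^2 + s^2} = \tfrac{s\sqrt{5}}{2}\), so the new base measures \(\tfrac{s}{2} + \tfrac{s\sqrt{5}}{2}\), and

\[
\frac{\text{length}}{\text{width}} \;=\; \frac{\tfrac{s}{2} + \tfrac{s\sqrt{5}}{2}}{s} \;=\; \frac{1+\sqrt{5}}{2} \;=\; \varphi \approx 1.618.
\]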
{"url":"http://cuip.uchicago.edu/~dlnarain/golden/activity4.htm","timestamp":"2014-04-21T12:08:59Z","content_type":null,"content_length":"15543","record_id":"<urn:uuid:19a20f87-8f84-4161-ba67-3ad8aae975cd>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
What the Bull Giveth, the Bear Taketh Away

Butler|Philbrick|Gordillo & Associates
By Adam Butler, Mike Philbrick, Rodrigo Gordillo
April 16, 2013

Those who cannot remember the past are condemned to repeat it. - Santayana

The question of whether to commit new funds to stocks here is nuanced and complex, not least because it isn't obvious that traditional alternatives - bonds or cash - offer any better value. We are very near all-time low interest rates across most developed government bond markets, credit spreads are near all-time tights, and rates are negative out to 5 or more years in real terms. If these options are representative of the complete opportunity set, then one might be justified in apportioning some capital to equities, if only because it is difficult to identify which investment stinks most profoundly.

However, those who do choose to allocate to equities should be aware of where we are relative to other bull-bear cycles throughout history. We have rambled on about the poor prospects for equity returns over the next 10-20 years in many prior articles (see here for a full analysis, and here for a summary of research from other respected firms), but the true authority on stock market valuation is John Hussman. We would strongly encourage readers to investigate Dr. Hussman's Weekly Market Comments for all the gory details.

This article approaches the issue from a completely different direction than our other work and the work of Dr. Hussman. It is mostly constructed as a thought experiment that explores the logic of compounding, but the conclusion is troubling for those currently overweight U.S. equities.

For the purpose of the study below, we examined the S&P 500 price series from Shiller's publicly available database to understand the duration and magnitude of all bull and bear market periods in U.S. stocks since 1871. We defined a bear market as a drop in prices of at least 20% from any peak, and which lasted at least 3 months. Bull markets were then defined as a rise of at least 50% from the bottom of a bear market, over a period lasting at least 6 months.

Chart 1 and Table 1 describe every bull market since 1871 in the S&P, including duration and magnitude information. The lesson from this analysis is uninspiring for equity bulls, as we will see. The core hurdle is that the current bull market has (through end of February) already delivered 105% of gains, against the median 124% bull market run through history (using monthly data). Of course, this means that, should this bull market deliver an average surge, investors can hope for less than 20% more growth from this cycle. Further, given that the median bull market has historically lasted 50 months, and we are currently in our 49th bull month, we are about due for a wipeout.

Chart 1. Bull Markets since 1871
Source: Shiller (2013)

Table 1. Bull Markets since 1871 - Statistics
Source: Shiller (2013)

It's troubling enough that the current bull market has already delivered 85% of the gains, and lasted about as long, as the median historical bull market. More disconcerting still is the fact that, when the bear market comes, as Chart 2 and Table 2 demonstrate, it is likely to wipe out 38% of all prior gains. And this has profound mathematical implications for current equity investors.
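In symbols (our restatement of the arithmetic the article develops in prose below), the gain required to recover from a fractional loss \(L\) is

\[
g_{\text{required}} \;=\; \frac{1}{1-L} - 1,
\qquad
L = 0.38 \;\Rightarrow\; g_{\text{required}} = \frac{1}{0.62} - 1 \approx 61\%.
\]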
Bear Markets since 1871 - Statistics. Source: Shiller (2013)

Portfolio growth is governed by the mathematics of compounding, which means that, for example, a 100% gain is erased by a 50% loss, and a 50% loss requires a 100% gain to get back to even. Applying the same principles to where we are in the current bull/bear cycle is illuminating. If we assume that the next bear market will deliver losses in line with what we have experienced from bear markets through history, then at the bottom of the next bear market investors will have lost 38% of their portfolio value. The question is, how much must current investors expect stocks to gain before peaking to justify owning them here instead of waiting to purchase them in the next bear market?

The most unbiased estimate of the magnitude of the next bear market is the historical median of 38%. Using the math of compounding, we can determine that a 38% loss requires a 61% gain to break even [1 / (1 - 38%) ≈ 1.61]. Logically then, and by extension, investors who choose to hold stocks today must expect gains of at least 61% in order to rationalize their investment; otherwise they would eliminate the anxiety of riding the equity roller-coaster and simply invest in cash, waiting to pounce on stocks at equivalent or lower value at some point during the next bear market.

Note that this argument is not meant to justify any sort of typical 'market timing' approach; most of these are rubbish and very difficult to adhere to for a variety of emotional reasons. Rather, it is a compelling argument for investors to seek out truly different sources of returns, such as tactical alpha strategies, CTAs, or diversified risk strategies inclusive of a wide variety of assets.

Adam Butler and Mike Philbrick are Portfolio Managers with Butler|Philbrick|Gordillo & Associates at Macquarie Private Wealth in Toronto, Canada.

(c) Butler|Philbrick|Gordillo & Associates
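The break-even arithmetic generalizes cleanly. As a quick illustration (a minimal Python sketch, not from the original article), the function below computes the gain required to recover from any given drawdown, using the article's 38% median bear-market loss as input.

```python
def breakeven_gain(drawdown):
    """Fractional gain needed to recover from a fractional loss.

    A portfolio that falls by `drawdown` must grow by
    1 / (1 - drawdown) - 1 to return to its starting value.
    """
    return 1.0 / (1.0 - drawdown) - 1.0

# The article's historical median bear-market loss of 38%:
print(f"{breakeven_gain(0.38):.1%}")  # about 61.3%

# The classic symmetric example: a 50% loss needs a 100% gain.
print(f"{breakeven_gain(0.50):.1%}")  # 100.0%
```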
{"url":"http://www.advisorperspectives.com/commentaries/bp_041613.php","timestamp":"2014-04-20T10:51:58Z","content_type":null,"content_length":"62095","record_id":"<urn:uuid:9cc819d3-5612-48f6-a4ce-9665fbce74e5>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: mysureg with constraints

From: jpitblado@stata.com (Jeff Pitblado, StataCorp LP)
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: mysureg with constraints
Date: Tue, 20 Sep 2005 14:17:38 -0500

Federica Maiorano <federica_maiorano@yahoo.it> seems to be having some trouble with constraints:

> I am estimating a translog cost function + 1 input
> share cost equation using mysureg.
> When I attempt to estimate a constrained model, then I
> get an error message from mysureg. As an example, I
> copy below what happens with one constraint only.
> Please note that there do not seem to be any problems
> when the same commands are performed using sureg.
> I am obviously doing something wrong, but I cannot
> work out what it is. Any suggestions?
> -----------------------------------------------
> constraint define 1 [sl]_cons = [lnvc]lnpl
> mysureg (lnvc lnk lnpl lnpm lnaccess lncalls time k2
> pl2 pm2 acc2 call2 t2 kpl kpm kacc kcalls kt plpm
> placc plcalls plt pmacc pmcalls pmt acccalls acct
> callt) (sl lnpl lnpm lnaccess lncalls lnk time),
> constraint(1) cluster(idcomp)
> Fitting constant-only model:
> Constraints invalid:
> [lnpl] not found
> -----------------------------------------------

Aside from being an example estimation command in the Stata Press book "Maximum Likelihood Estimation with Stata, 2nd Edition", there does not seem to be a publicly available user-written estimation command called -mysureg-. So I'll assume that Federica is using -mysureg- introduced on pg. 238 (the ado-file for -mysureg- is given on pg. 286).

Federica discovered a bug in -ml model- that was fixed for Stata 9 in the ado-file update on 15sep2005. If Federica is using Stata 9, she should use -update- to get Stata up-to-date, and the problem will go away.

For Stata 8, Federica can change -mysureg- to parse the -constraints()- option as a "passthru", and add `constraints' back to the `mlopts' macro after -mysureg- fits the constant-only model. Here are the lines that will be affected (look for !!):

***** BEGIN: mysureg.ado
program mysureg, sortpreserve
        syntax anything(id="equations" equalok)   ///
                [if] [in] [fweight pweight] [,    ///
                noLOg                             /// -ml model- options
                Robust CLuster(varname)           ///
                noLRTEST svy noSVYadjust          ///
                Level(integer `c(level)')         /// -Replay- options
                corr                              ///
                CONSTraints(passthru)             /// !! NEW LINE
                *                                 /// -mlopts/svyopts- options
                ]

        mlopts mlopts, `options'
        local cns `constraints'                   // !! LINE MODIFIED

        `qui' di as txt _n "Fitting constant-only model:"
        ml model d2 mysuregc_d2 `eqns0'           ///
                nocnsnotes missing maximize
        local initopt continue search(off)
        if "`lrtest'" == "" {
                local lf0 lf0(`k_eq' `e(ll)')
        }
        local mlopts `"`mlopts' `constraints'"'   // !! NEW LINE
***** END: mysureg.ado

So, you ask: "What's going on? I want details."

As originally developed, -mysureg- passes constraints to each call of -ml model-, even when fitting the constant-only model. Under version 8.1 (and 8.2 for Stata 8), -ml model- does not use -makecns- to tolerantly construct the constraint matrix; it stops with an error message instead of just noting when there was a problem with a constraint and moving on. Thus -ml model- exits with an error if any of the constraints involve predictors that are not present in the constant-only model fit. As mentioned above, this bug was fixed in Stata 9.
{"url":"http://www.stata.com/statalist/archive/2005-09/msg00658.html","timestamp":"2014-04-19T00:02:49Z","content_type":null,"content_length":"7973","record_id":"<urn:uuid:0a6101b2-1656-4edb-b0dc-62a4cebc4088>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
Most Cited Computational Geometry Articles
The most cited articles published since 2009, extracted from:

Volume 42, Issue 4, May 2009, Pages 342-351
Nekrich, Y.
In this paper we describe space-efficient data structures for the two-dimensional range searching problem. We present a dynamic linear space data structure that supports orthogonal range reporting queries in O(log n + k log n) time, where k is the size of the answer. Our data structure also supports emptiness and one-reporting queries in O(log n) time and thus achieves optimal time and space for this type of queries. In the case of integer point coordinates, we describe static and randomized dynamic linear space data structures that support range reporting, emptiness and one-reporting queries in sub-logarithmic time. These are the first linear space data structures for these problems that achieve sub-logarithmic query time. We also present a dynamic linear space data structure for range counting queries with O((log n / log log n)^2) time and a dynamic O(n log n / log log n) space data structure for semigroup range queries with O((log n / log log n)^2) query time. © 2008 Elsevier B.V.

Volume 43, Issues 6-7, August 2010, Pages 601-610
Bringmann, K. | Friedrich, T.
We consider the computation of the volume of the union of high-dimensional geometric objects. While showing that this problem is #P-hard already for very simple bodies, we give a fast FPRAS for all objects where one can (1) test whether a given point lies inside the object, (2) sample a point uniformly, and (3) calculate the volume of the object in polynomial time. It suffices to be able to answer all three questions approximately. We show that this holds for a large class of objects. It implies that Klee's measure problem can be approximated efficiently even though it is #P-hard and hence cannot be solved exactly in polynomial time in the number of dimensions unless P = NP. Our algorithm also allows to efficiently approximate the volume of the union of convex bodies given by weak membership oracles. For the analogous problem of the intersection of high-dimensional geometric objects we prove #P-hardness for boxes and show that there is no multiplicative polynomial-time 2^(d^(1-ε))-approximation for certain boxes unless NP = BPP, but give a simple additive polynomial-time ε-approximation. © 2010 Elsevier B.V.

Volume 42, Issue 1, January 2009, Pages 60-80
Bose, P. | Hurtado, F.
We review results concerning edge flips in planar graphs concentrating mainly on various aspects of the following problem: Given two different planar graphs of the same size, how many edge flips are necessary and sufficient to transform one graph into another? We overview both the combinatorial perspective (where only a combinatorial embedding of the graph is specified) and the geometric perspective (where the graph is embedded in the plane, vertices are points and edges are straight-line segments). We highlight the similarities and differences of the two settings, describe many extensions and generalizations, highlight algorithmic issues, outline several applications and mention open problems. © 2008 Elsevier B.V.

Volume 42, Issues 6-7, August 2009, Pages 606-616
Doraiswamy, H. | Natarajan, V.
The Reeb graph tracks topology changes in level sets of a scalar function and finds applications in scientific visualization and geometric modeling. We describe an algorithm that constructs the Reeb graph of a Morse function defined on a 3-manifold. Our algorithm maintains connected components of the two dimensional level sets as a dynamic graph and constructs the Reeb graph in O(n log n + n log g (log log g)^3) time, where n is the number of triangles in the tetrahedral mesh representing the 3-manifold and g is the maximum genus over all level sets of the function. We extend this algorithm to construct Reeb graphs of d-manifolds in O(n log n (log log n)^3) time, where n is the number of triangles in the simplicial complex that represents the d-manifold. Our result is a significant improvement over the previously known O(n^2) algorithm. Finally, we present experimental results of our implementation and demonstrate that our algorithm for 3-manifolds performs efficiently in practice. © 2008 Elsevier B.V. All rights reserved.

Volume 43, Issue 1, January 2010, Pages 42-58
Carr, H. | Snoeyink, J. | Van De Panne, M.
The contour tree is an abstraction of a scalar field that encodes the nesting relationships of isosurfaces. We show how to use the contour tree to represent individual contours of a scalar field, how to simplify both the contour tree and the topology of the scalar field, how to compute and store geometric properties for all possible contours in the contour tree, and how to use the simplified contour tree as an interface for exploratory visualization. © 2009 Elsevier B.V. All rights reserved.

Volume 43, Issue 3, April 2010, Pages 234-242
Löffler, M. | Snoeyink, J.
An assumption of nearly all algorithms in computational geometry is that the input points are given precisely, so it is interesting to ask what is the value of imprecise information about points. We show how to preprocess a set of n disjoint unit disks in the plane in O(n log n) time so that if one point per disk is specified with precise coordinates, the Delaunay triangulation can be computed in linear time. From the Delaunay, one can obtain the Gabriel graph and a Euclidean minimum spanning tree; it is interesting to note the roles that these two structures play in our algorithm to quickly compute the Delaunay. © 2009 Elsevier B.V.

Volume 43, Issue 8, October 2010, Pages 663-677
Batista, V.H.F. | Millman, D.L. | Pion, S. | Singler, J.
Computers with multiple processor cores using shared memory are now ubiquitous. In this paper, we present several parallel geometric algorithms that specifically target this environment, with the goal of exploiting the additional computing power. The algorithms we describe are (a) 2-/3-dimensional spatial sorting of points, as is typically used for preprocessing before using incremental algorithms, (b) d-dimensional axis-aligned box intersection computation, and finally (c) 3D bulk insertion of points into Delaunay triangulations, which can be used for mesh generation algorithms, or simply for constructing 3D Delaunay triangulations. For the latter, we introduce as a foundational element the design of a container data structure that both provides concurrent addition and removal operations and is compact in memory. This makes it especially well-suited for storing large dynamic graphs such as Delaunay triangulations. We show experimental results for these algorithms, using our implementations based on the Computational Geometry Algorithms Library (CGAL). This work is a step towards what we hope will become a parallel mode for CGAL, where algorithms automatically use the available parallel resources without requiring significant user intervention. © 2010 Elsevier B.V. All rights reserved.

Volume 42, Issues 6-7, August 2009, Pages 617-626
Aichholzer, O. | Bereg, S. | Dumitrescu, A. | García, A. | Huemer, C. | Hurtado, F. | Kano, M. | Márquez, A. | Rappaport, D. | Smorodinsky, S. | Souvaine, D. | Urrutia, J. | Wood, D.R.
This paper studies non-crossing geometric perfect matchings. Two such perfect matchings are compatible if they have the same vertex set and their union is also non-crossing. Our first result states that for any two perfect matchings M and M′ of the same set of n points, for some k ∈ O(log n), there is a sequence of perfect matchings M = M_0, M_1, ⋯, M_k = M′, such that each M_i is compatible with M_{i+1}. This improves the previous best bound of k ≤ n - 2. We then study the conjecture: every perfect matching with an even number of edges has an edge-disjoint compatible perfect matching. We introduce a sequence of stronger conjectures that imply this conjecture, and prove the strongest of these conjectures in the case of perfect matchings that consist of vertical and horizontal segments. Finally, we prove that every perfect matching with n edges has an edge-disjoint compatible matching with approximately 4n/5 edges. © 2009 Elsevier B.V. All rights reserved.

Volume 43, Issue 3, April 2010, Pages 257-278
Berberich, E. | Kerber, M. | Sagraloff, M.
We present a method to compute the exact topology of a real algebraic surface S, implicitly given by a polynomial f ∈ Q[x, y, z] of arbitrary total degree N. Additionally, our analysis provides geometric information as it supports the computation of arbitrary precise samples of S including critical points. We compute a stratification Ω_S of S into O(N^5) non-singular cells, including the complete adjacency information between these cells. This is done by a projection approach. We construct a special planar arrangement A_S with fewer cells than a cad in the projection plane. Furthermore, our approach applies numerical and combinatorial methods to minimize costly symbolic computations. The algorithm handles all sorts of degeneracies without transforming the surface into a generic position. Based on Ω_S we also compute a simplicial complex which is isotopic to S. A complete C++ implementation of the stratification algorithm is presented. It shows good performance for many well-known examples from algebraic geometry. © 2009 Elsevier B.V.

Volume 42, Issue 3, April 2009, Pages 196-206
Cheng, H.-L. | Shi, X.
Quality surface meshes for molecular models are desirable in the studies of protein shapes and functionalities. However, there is still no robust software that is capable of generating such meshes with good quality. In this paper, we present a Delaunay-based surface triangulation algorithm generating quality surface meshes for the molecular skin model. We expand the restricted union of balls along the surface and generate an ε-sampling of the skin surface incrementally. At the same time, a quality surface mesh is extracted from the Delaunay triangulation of the sample points. The algorithm supports robust and efficient implementation and guarantees the mesh quality and topology as well. Our results facilitate molecular visualization and have made a contribution towards generating quality volumetric tetrahedral meshes for the macromolecules.

Volume 43, Issue 4, May 2010, Pages 419-433
Löffler, M. | Van Kreveld, M.
Imprecision of input data is one of the main obstacles that prevent geometric algorithms from being used in practice. We model an imprecise point by a region in which the point must lie. Given a set of imprecise points, we study computing the largest and smallest possible values of various basic geometric measures on point sets, such as the diameter, width, closest pair, smallest enclosing circle, and smallest enclosing bounding box. We give efficient algorithms for most of these problems, and identify the hardness of others. © 2009 Elsevier B.V.

Volume 44, Issue 3, April 2011, Pages 148-159
Yu, C.-C. | Hon, W.-K. | Wang, B.-F.
Let P be a set of n points that lie on an n×n grid. The well-known orthogonal range reporting problem is to preprocess P so that for any query rectangle R, we can report all points in P ∩ R efficiently. In many applications driven by the information retrieval or the bioinformatics communities, we do not need all the points in P, but need only just the point that has the smallest y-coordinate; this motivates the study of a variation called the orthogonal range successor problem. If space is the major concern, the best-known result is by Mäkinen and Navarro, which requires an optimal index space of n + o(n) words and supports each query in O(log n) time. In contrast, if query time is the major concern, the best-known result is by Crochemore et al., which supports each query in O(1) time with O(n^(1+ε)) index space. In this paper, we first propose another optimal-space index with a faster O(log n / log log n) query time. The improvement stems from the design of an index with O(1) query time when the points are restricted to lie on a narrow grid, and the subsequent application of the wavelet tree technique to support the desired query. Based on the proposed index, we directly obtain improved results for the successive indexing problem and the position-restricted pattern matching problem in the literature. We next propose an O(n^(1+ε))-word index that supports each query in O(1) time. When compared with the result by Crochemore et al., our scheme is conceptually simpler and easier for construction. In addition, our scheme can be easily extended to work for high-dimensional cases. © 2010 Elsevier B.V. All rights reserved.

Volume 43, Issue 3, April 2010, Pages 312-328
Been, K. | Nöllenburg, M. | Poon, S.-H. | Wolff, A.
Map labeling encounters unique issues in the context of dynamic maps with continuous zooming and panning - an application with increasing practical importance. In consistent dynamic map labeling, distracting behavior such as popping and jumping is avoided. We use a model for consistent dynamic labeling in which a label is represented by a 3d-solid, with scale as the third dimension. Each solid can be truncated to a single scale interval, called its active range, corresponding to the scales at which the label will be selected. The active range optimization (ARO) problem is to select active ranges so that no two truncated solids intersect and the sum of the heights of the active ranges is maximized. Simple ARO is a variant in which the active ranges are restricted so that a label is never deselected when zooming in. We investigate both the general and simple variants, for 1d- as well as 2d-maps. Different label shapes define different ARO variants. We show that 2d-ARO and general 1d-ARO are NP-complete, even for quite simple shapes. We solve simple 1d-ARO optimally with dynamic programming, and present a toolbox of algorithms that yield constant-factor approximations for a number of 1d- and 2d-variants. © 2009 Elsevier B.V.

Volume 42, Issue 1, January 2009, Pages 1-19
Nguyen, H. | Burkardt, J. | Gunzburger, M. | Ju, L. | Saka, Y.
Mesh generation in regions in Euclidean space is a central task in computational science, and especially for commonly used numerical methods for the solution of partial differential equations, e.g., finite element and finite volume methods. We focus on the uniform Delaunay triangulation of planar regions and, in particular, on how one selects the positions of the vertices of the triangulation. We discuss a recently developed method, based on the centroidal Voronoi tessellation (CVT) concept, for effecting such triangulations and present two algorithms, including one new one, for CVT-based grid generation. We also compare several methods, including CVT-based methods, for triangulating planar domains. To this end, we define several quantitative measures of the quality of uniform grids. We then generate triangulations of several planar regions, including some having complexities that are representative of what one may encounter in practice. We subject the resulting grids to visual and quantitative comparisons and conclude that all the methods considered produce high-quality uniform grids and that the CVT-based grids are at least as good as any of the others. © 2008 Elsevier

Volume 42, Issues 6-7, August 2009, Pages 704-721
Estrella-Balderrama, A. | Fowler, J.J. | Kobourov, S.G.
Consider a graph G with vertex set V in which each of the n vertices is assigned a number from the set {1, ..., k} for some positive integer k. This assignment φ is a labeling if all k numbers are used. If φ does not assign adjacent vertices the same label, then φ forms a leveling that partitions V into k levels. If G has a planar drawing in which the y-coordinate of all vertices match their labels and edges are drawn strictly y-monotone, then G is level planar. In this paper, we consider the class of level trees that are level planar regardless of their labeling. We call such trees unlabeled level planar (ULP). Our contributions are three-fold. First, we describe which trees are ULP and provide linear-time level planar drawing algorithms for any labeling. Second, we characterize ULP trees in terms of forbidden subtrees so that any other tree must contain a subtree homeomorphic to one of these. Third, we provide a linear-time recognition algorithm for ULP trees. © 2009 Elsevier B.V. All rights reserved.

Volume 42, Issue 2, February 2009, Pages 127-133
Pach, J. | Tóth, G.
Let m(k) denote the smallest positive integer m such that any m-fold covering of the plane with axis-parallel unit squares splits into at least k coverings. J. Pach [J. Pach, Covering the plane with convex polygons, Discrete and Computational Geometry 1 (1986) 73-81] showed that m(k) exists and gave an exponential upper bound. We show that m(k) = O(k^2), and generalize this result to translates of any centrally symmetric convex polygon in the place of squares. From the other direction, we know only that m(k) ≥ ⌊4k/3⌋ - 1. © 2008 Elsevier B.V.

Volume 43, Issue 4, May 2010, Pages 434-444
Da Fonseca, G.D. | Mount, D.M.
Range searching is a well known problem in the area of geometric data structures. We consider this problem in the context of approximation, where an approximation parameter ε > 0 is provided. Most prior work on this problem has focused on the case of relative errors, where each range shape R is bounded, and points within distance ε·diam(R) of the range's boundary may or may not be included. We consider a different approximation model, called the absolute model, in which points within distance ε of the range's boundary may or may not be included, regardless of the diameter of the range. We consider range spaces consisting of halfspaces, Euclidean balls, simplices, axis-aligned rectangles, and general convex bodies. We consider a variety of problem formulations, including range searching under general commutative semigroups, idempotent semigroups, groups, and range emptiness. We show how idempotence can be used to improve not only approximate, but also exact halfspace range searching. Our data structures are much simpler than both their exact and relative model counterparts, and so are amenable to efficient implementation. © 2009 Elsevier B.V.

Volume 42, Issue 9, November 2009, Pages 842-851
Mukkamala, P. | Szegedy, M.
We show that every connected cubic graph can be drawn in the plane with straight-line edges using only four distinct slopes and disconnected cubic graphs with five distinct slopes.

Volume 43, Issue 3, April 2010, Pages 243-250
Chan, T.M.
Given n axis-parallel boxes in a fixed dimension d ≥ 3, how efficiently can we compute the volume of the union? This standard problem in computational geometry, commonly referred to as Klee's measure problem, can be solved in time O(n^(d/2) log n) by an algorithm of Overmars and Yap (FOCS 1988). We give the first (albeit small) improvement: our new algorithm runs in time n^(d/2) 2^(O(log* n)), where log* denotes the iterated logarithm. For the related problem of computing the depth in an arrangement of n boxes, we further improve the time bound to near O(n^(d/2) / log^(d/2-1) n), ignoring log log n factors. Other applications and lower-bound possibilities are discussed. The ideas behind the improved algorithms are simple. © 2009 Elsevier B.V.

Volume 42, Issue 2, February 2009, Pages 109-118
Üngör, A.
We introduce a new type of Steiner points, called off-centers, as an alternative to circumcenters, to improve the quality of Delaunay triangulations in two dimensions. We propose a new Delaunay refinement algorithm based on iterative insertion of off-centers. We show that this new algorithm has the same quality and size optimality guarantees of the best known refinement algorithms. In practice, however, the new algorithm inserts fewer Steiner points, runs faster, and generates smaller triangulations than the best previous algorithms. Performance improvements are significant especially when user-specified minimum angle is large, e.g., when the smallest angle in the output triangulation is 30°, the number of Steiner points is reduced by about 40%, while the mesh size is down by about 30%. As a result of its shown benefits, the algorithm described here has already replaced the well-known circumcenter insertion algorithm of Ruppert and has been the default quality triangulation method in the popular meshing software Triangle. © 2008 Elsevier B.V.

Volume 44, Issue 9, November 2011, Pages 465-476
Buchin, K. | Buchin, M. | Van Kreveld, M. | Luo, J.
A natural time-dependent similarity measure for two trajectories is their average distance at corresponding times. We give algorithms for computing the most similar subtrajectories under this measure, assuming the two trajectories are given as two polygonal, possibly self-intersecting lines with time stamps. For the case when a minimum duration of the subtrajectories is specified and the subtrajectories must start at corresponding times, we give a linear-time algorithm. The algorithm is based on a result of independent interest: We present a linear-time algorithm to find, for a piece-wise monotone function, an interval of at least a given length that has minimum average value. In the case that the subtrajectories may start at non-corresponding times, it appears difficult to give exact algorithms, even if the duration of the subtrajectories is fixed. For this case, we give (1+ε)-approximation algorithms, for both fixed duration and when only a minimum duration is specified. © 2011 Elsevier B.V. All rights reserved.

Volume 42, Issue 9, November 2009, Pages 885-902
Klein, R. | Langetepe, E. | Nilforoushan, Z.
Abstract Voronoi diagrams [R. Klein, Concrete and Abstract Voronoi Diagrams, Lecture Notes in Computer Science, vol. 400, Springer-Verlag, 1987] were designed as a unifying concept that should include as many concrete types of diagrams as possible. To ensure that abstract Voronoi diagrams, built from given sets of bisecting curves, are finite graphs, it was required that any two bisecting curves intersect only finitely often; this axiom was a cornerstone of the theory. In [A.G. Corbalan, M. Mazon, T. Recio, Geometry of bisectors for strictly convex distance functions, International Journal of Computational Geometry and Applications 6 (1) (1996) 45-58], Corbalan et al. gave an example of a smooth convex distance function whose bisectors have infinitely many intersections, so that it was not covered by the existing AVD theory. In this paper we give a new axiomatic foundation of abstract Voronoi diagrams that works without the finite intersection property. © 2009 Elsevier B.V. All rights reserved.

Volume 42, Issue 4, May 2009, Pages 269-288
Egeblad, J. | Nielsen, B.K. | Brazil, M.
We present an efficient solution method for packing d-dimensional polytopes within the bounds of a polytope container. The central geometric operation of the method is an exact one-dimensional translation of a given polytope to a position which minimizes its volume of overlap with all other polytopes. We give a detailed description and a proof of a simple algorithm for this operation in which one only needs to know the set of (d-1)-dimensional facets in each polytope. Handling non-convex polytopes or even interior holes is a natural part of this algorithm. The translation algorithm is used as part of a local search heuristic and a meta-heuristic technique, guided local search, is used to escape local minima. Additional details are given for the three-dimensional case and results are reported for the problem of packing polyhedra in a rectangular parallelepiped. Utilization of container space is improved by an average of more than 14 percentage points compared to previous methods. The translation algorithm can also be used to solve the problem of maximizing the volume of intersection of two polytopes given a fixed translation direction. For two polytopes with complexity O(n) and O(m) and a fixed dimension, the running time is O(nm log(nm)) for both the minimization and maximization variants of the translation algorithm. © 2008 Elsevier B.V.

Volume 42, Issues 6-7, August 2009, Pages 664-676
Di Giacomo, E. | Didimo, W. | Liotta, G. | Meijer, H. | Wismath, S.K.
Given a graph G with n vertices and a set S of n points in the plane, a point-set embedding of G on S is a planar drawing such that each vertex of G is mapped to a distinct point of S. A geometric point-set embedding is a point-set embedding with no edge bends. This paper studies the following problem: The input is a set S of n points, a planar graph G with n vertices, and a geometric point-set embedding of a subgraph G′ ⊆ G on a subset of S. The desired output is a point-set embedding of G on S that includes the given partial drawing of G′. We concentrate on trees and show how to compute the output in O(n^2 log n) time in a real-RAM model and with at most n-k edges with at most 1+2⌈k/2⌉ bends, where k is the number of vertices of the given subdrawing. We also prove that there are instances of the problem which require at least k-3 bends on n-k edges. © 2009 Elsevier B.V. All rights reserved.

Volume 44, Issue 2, February 2011, Pages 121-127
Bose, P. | Devroye, L. | Löffler, M. | Snoeyink, J. | Verma, V.
Consider the Delaunay triangulation T of a set P of points in the plane as a Euclidean graph, in which the weight of every edge is its length. It has long been conjectured that the stretch factor in T of any pair p, p′ ∈ P, which is the ratio of the length of the shortest path from p to p′ in T over the Euclidean distance ∥pp′∥, can be at most π/2 ≈ 1.5708. In this paper, we show how to construct point sets in convex position with stretch factor > 1.5810 and in general position with stretch factor > 1.5846. Furthermore, we show that a sufficiently large set of points drawn independently from any distribution will in the limit approach the worst-case stretch factor for that distribution. © 2010 Elsevier B.V. All rights reserved.
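The stretch-factor conjecture in the last abstract is easy to probe empirically. The following sketch is an illustration added here, not code from the paper; it assumes NumPy and SciPy are available. It estimates the stretch factor of the Delaunay triangulation of a random point set by comparing shortest-path distances along triangulation edges with straight-line distances.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((50, 2))          # 50 random points in the unit square
n = len(pts)

# Collect the edge set of the Delaunay triangulation from its triangles.
edges = set()
for a, b, c in Delaunay(pts).simplices:
    edges |= {tuple(sorted((a, b))), tuple(sorted((b, c))), tuple(sorted((a, c)))}

# Weighted adjacency matrix: each edge is weighted by its Euclidean length.
rows, cols, weights = [], [], []
for i, j in edges:
    d = float(np.linalg.norm(pts[i] - pts[j]))
    rows += [i, j]; cols += [j, i]; weights += [d, d]
graph = csr_matrix((weights, (rows, cols)), shape=(n, n))

# All-pairs shortest paths along triangulation edges.
sp = shortest_path(graph, directed=False)

# Stretch factor: worst ratio of graph distance to straight-line distance.
eucl = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
off_diag = ~np.eye(n, dtype=bool)
print("stretch factor:", (sp[off_diag] / eucl[off_diag]).max())
```

For random points the observed ratio typically lands well below the conjectured bound of about 1.5708; the papers above construct special point sets that push past it.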
{"url":"http://www.journals.elsevier.com/computational-geometry/most-cited-articles/","timestamp":"2014-04-17T09:45:09Z","content_type":null,"content_length":"92819","record_id":"<urn:uuid:a058a392-ff27-427d-8ed8-5f13c5cfa3e1>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
Combinatorics Question

6 people each own a desk each. The company owner wants to swap them round, so at most one of them stays where they were before. How many arrangements are possible? I've tried 264, which was the apparent sum of the arrangements where everybody changes position plus the arrangements with one in the same place. Can anyone help?

Re: Combinatorics Question
You need to know about derangements. Scroll down to the counting formula. Let $D(n)$ be the number of derangements on $n$ objects. The answer to your question is $D(6)+\binom{6}{1}D(5)$.
Last edited by Plato; November 15th 2013 at 03:13 PM.

Re: Combinatorics Question
Does this factor in the part of the question that states there can still be one person in the same place as before?

Re: Combinatorics Question
I get 283, is that correct?

Re: Combinatorics Question
Did you take note of my edit? Look at this. Then at this. Now apply the above formula.

Re: Combinatorics Question
So, it's 6*44+265!

Re: Combinatorics Question
I believe that overcounts. I think this gives the correct solution: Start with all permutations: 6!. Subtract from it the number of permutations where at least two people remain in their original positions: to count this, choose two people to remain in their original position, then permute the remaining four. However, this overcounts. If the permutation of the remaining four people fixes any other person, then I am counting that permutation an additional time. So, the correct number is $\binom{6}{2}4!-k$ where $k$ is the number of permutations that fixes at least three people. To count that, choose three people and permute the remaining three. Again, this overcounts by the number of permutations that fix at least four people. In other words, we need to apply inclusion/exclusion: $6!-\binom{6}{2}4!+\binom{6}{3}3!-\binom{6}{4}2!+\binom{6}{5}1! = 720 - 360 + 60 - 30 + 6 = 396$

Re: Combinatorics Question
No, it does not overcount. In the OP it wants at most one to stay put. $D(6)$ counts the number of ways that none stay put. $\binom{6}{1}D(5)$ counts the number of ways that exactly one stays put. Now I did not check the poster's calculations. But the idea is correct.

Re: Combinatorics Question
My mistake. I haven't worked with derangements since I was an undergrad, and the inclusion/exclusion argument I set up was not quite right. You are correct. I was trying to give the idea of how to count it if they had not yet learned how to count derangements. But, that wikipedia entry gives a good explanation of how to arrive at the recursive formula.

Re: Combinatorics Question
The actual closed formula is interesting. Using inclusion/exclusion: $D(N) = \sum_{k = 0}^{N} (-1)^k \binom{N}{k} (N - k)! = N! \sum_{k = 0}^{N} \frac{(-1)^k}{k!} \approx \frac{N!}{e}$
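To double-check the counting in the thread, here is a short brute-force verification in Python (added for illustration; it is not part of the original thread). It enumerates all 6! seatings, counts those with at most one fixed point, and compares the total with $D(6)+\binom{6}{1}D(5)$.

```python
from itertools import permutations
from math import comb, factorial

def fixed_points(p):
    """Number of people left at their original desk under permutation p."""
    return sum(1 for i, x in enumerate(p) if i == x)

# Brute force: seatings of 6 people with at most one person unmoved.
brute = sum(1 for p in permutations(range(6)) if fixed_points(p) <= 1)

def D(n):
    """Derangement number via the inclusion-exclusion formula."""
    return sum((-1) ** k * comb(n, k) * factorial(n - k) for k in range(n + 1))

print(brute)                     # 529
print(D(6) + comb(6, 1) * D(5))  # 265 + 6 * 44 = 529
```

Both counts come out to 529, matching the derangement-based formula given in the thread.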
{"url":"http://mathhelpforum.com/statistics/224304-combinatorics-question.html","timestamp":"2014-04-18T12:02:23Z","content_type":null,"content_length":"68704","record_id":"<urn:uuid:1f5909f7-148a-46d7-a8e4-3e3af0f31ca7>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Solution of Generalized Geometric Programming

- IEEE Transactions on Computer-Aided Design, 2001. Cited by 51 (10 self).
We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (op-amps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result the amplifier design problem can be expressed as a special form of optimization problem called geometric programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs, or globally optimal trade-offs among competing performance measures such as power, open-loop gain, and bandwidth. Our method therefore yields completely automated synthesis of (globally) optimal CMOS amplifiers, directly from specifications. In this paper we apply this method to a specific, widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute globally optimal trade-off curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to synthesize robust designs, i.e., designs guaranteed to meet the specifications for a ...

- Global Optimization: From Theory to Implementation, Nonconvex Optimization and Its Application Series, 2006.

- In Proc. IEEE International Conference on Computer Aided Design, 1999. Cited by 13 (0 self).
ABSTRACT: In this paper, an algorithm for simultaneous logic restructuring and placement is presented. This algorithm first constructs a set of super-cells along the critical paths and then generates the set of non-inferior re-mapping solutions for each super-cell. The best mapping and placement solutions for all super-cells are obtained by solving a generalized geometric programming (GGP) problem. The process of identifying and optimizing the critical paths is iterated until timing closure is achieved. Experimental results on a set of MCNC benchmarks demonstrate the effectiveness of our algorithm.

- Engng, 1997. Cited by 12 (3 self).
A deterministic global optimization algorithm is proposed for locating the global minimum of generalized geometric (signomial) problems (GGP). By utilizing an exponential variable transformation the initial nonconvex problem (GGP) is reduced to a (DC) programming problem where both the constraints and the objective are decomposed into the difference of two convex functions. A convex relaxation of problem (DC) is then obtained based on the linear lower bounding of the concave parts of the objective function and constraints inside some box region. The proposed branch and bound type algorithm attains finite ε-convergence to the global minimum through the successive refinement of a convex relaxation of the feasible region and/or of the objective function and the subsequent solution of a series of nonlinear convex optimization problems. The efficiency of the proposed approach is enhanced by eliminating variables through monotonicity analysis, by maintaining tightly bound variables ...

- Cited by 1 (0 self).
Geometric programming is an optimization technique originally developed for solving a class of nonlinear optimization problems found in engineering and design. Previous applications have generally been small scale and highly nonlinear. In a geometric programming problem all the constraints as well as the objective function are posynomials. The Gaussian channel is a communications model in which messages are represented by vectors in R^n. The transmitter has a finite set of messages available and these messages are transmitted over a noisy channel to a receiver. The received message equals the sent vector perturbed by a Gaussian noise vector. The receiver must decide which of the messages was sent. We use groups of orthogonal matrices to generate the message set and the result is called a group code. The problem of finding the best code generated by a particular group is called the initial vector problem. Previous attempts to find a general solution to this problem have been unsuccessful, although it has been solved in several special cases. We write this problem as a nonlinear programming problem, ...

- This paper considers the problem of certifying the reliability of a software system that can be decomposed into a finite number of modules. It uses a Markovian model for the transfer of control between modules in order to develop the system reliability expression in terms of the module reliabilities. A test procedure is considered in which only the individual modules are tested and the system is certified if, and only if, no failures are observed. The minimum number of tests required of each module is determined such that the probability of certifying a system whose reliability falls below a specified value R_0 is less than a specified small fraction b. This sample size determination problem is formulated as a two-stage mathematical program and an algorithm is developed for solving this problem. Two examples from the literature are considered to demonstrate the procedure.
Keywords: Software reliability; Modular Tests; Sample Size Determination; Mathematical Programming
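The entries above all lean on the same structural fact: in a geometric program the objective and constraints are posynomials, which become convex under a log transformation of the variables. The toy problem below is our own illustration of that setup, not code from any of the cited papers; it assumes the CVXPY library, which handles geometric programs via solve(gp=True).

```python
import cvxpy as cp

# A toy geometric program: maximize the rectangle area x*y
# (equivalently, minimize the monomial 1/(x*y)) subject to a
# posynomial constraint on x + y.
x = cp.Variable(pos=True)
y = cp.Variable(pos=True)

objective = cp.Minimize(1 / (x * y))  # monomial objective
constraints = [x + y <= 4]            # posynomial <= constant

prob = cp.Problem(objective, constraints)
prob.solve(gp=True)                   # solve in log-log convex (GP) mode

print(x.value, y.value)               # approximately 2.0 and 2.0
```

The solver returns x = y = 2, as expected: for a fixed sum, the product x*y is maximized when the factors are equal.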
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1642617","timestamp":"2014-04-17T19:18:50Z","content_type":null,"content_length":"26739","record_id":"<urn:uuid:e82995b4-e8a3-440e-b04e-8c68ce351554>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
Bulletin 2004-2005 General Information Undergraduate Studies Graduate Studies Course Descriptions A Prabhakar Rao., Professor*, Chairperson Ph.D., University of California, Berkeley Charles Chui, Distinguished Professor* Ph.D., University of Wisconsin Raymond Balbes, Professor Emeritus* Ph.D., University of California, Los Angeles William Connett, Professor Emeritus* Ph.D., University of Chicago Richard Friedlander, Professor*, Associate Chairperson Ph.D., University of California, Los Angeles Deborah Tepper Haimo, Professor Emerita* Ph.D., Harvard University Wayne L. McDaniel, Professor Emeritus* Ph.D., Saint Louis University Stephen Selesnick, Professor Emeritus* Ph.D., University of London Jerrold Siegel, Professor* Ph.D., Cornell University Grant V. Welland, Professor Emeritus* Ph.D., Purdue University Sanjiv K. Bhatia, Associate Professor* Ph.D., University of Nebraska-Lincoln Haiyan Cai, Associate Professor* Ph.D., University of Maryland Uday K. Chakraborty, Associate Professor* Ph.D., Jadavpur University Ronald Dotzel, Associate Professor* Ph.D., Rutgers University Cezary Janikow, Associate Professor* Ph.D., University of North Carolina at Chapel Hill Qingtang Jiang, Associate Professor Ph.D., Peking University Kyungho Oh, Associate Professor* Ph.D., Purdue University Frederick Wilke, Associate Professor Emeritus* Ph.D., University of Missouri-Columbia Shiying Zhao, Associate Professor* Ph.D., University of South Carolina Galina N. Piatnitskaia., Affiliate Associate Professor Ph.D., Moscow Physical-Technical Institute Wenjie He, Assistant Professor* Ph.D., University of Georgia Hyung Woo Kang, Assistant Professor Ph.D. KAIST Martin Pelikan, Assistant Professor Ph.D., University of Illinois at Urbana-Champaign Donald E. Gayou, Affiliate Assistant Professor Ph.D., Iowa State University John Antognoli, Senior Lecturer; Coordinator of Evening Program M.A., University of Missouri-St. Louis Monica L. Brown, Lecturer M.S., Southern Illinois University, Edwardsville Aarti Dahiya, Lecturer M.S., University of Missouri-St. Louis Preetam S. Desai, Lecturer M.S., University of Missouri-St. Louis Qiang Sun Dotzel, Lecturer M.A., University of Missouri-St. Louis Dorothy Gotway, Lecturer M.A., University of Kansas-Lawrence Marlene Gustafson, Senior Lecturer Emerita M.A., Western Reserve University Leslie Johnson, Lecturer M.S., Southeast Missouri State University Nazire Koc, Lecturer M.S., Southern Illinois University, Carbondale Mary Kay McKenzie, Senior Lecturer Emerita M.S., Saint Louis University Shahla Peterman, Senior Lecturer M.S., University of Wisconsin-Madison Gillian Raw, Senior Lecturer Emerita M.A., Washington University Emily Ross, Senior Lecturer M.A., Saint Louis University Paul Schneider, Senior Lecturer M.A., Saint Louis University Cynthia Siegel, Senior Lecturer Emerita M.A., University of Chicago *members of Graduate Faculty General Information Degrees and Areas of Concentration The Department of Mathematics and Computer Science offers work leading to the B.A. in mathematics, the B.S. in mathematics, the B.S. in computer science, and, in cooperation with the College of Education, the B.S.Ed. in secondary education with an emphasis in mathematics. The department also offers minors in computer science, mathematics, and statistics. At the graduate level, the department offers a Master of Arts (M.A.) degree in mathematics, a Master of Science (M.S.) degree in computer science and a Ph.D. in applied mathematics. The program leading to the B.A. 
in mathematics provides a broad grounding in different areas of mathematics, giving students the depth necessary to pursue various aims such as graduate studies or other career choices. The B.S. in mathematics provides a substantial background in mathematics, statistics and computer science to produce graduates who can work as mathematicians. Both the B.A. and the B.S. in mathematics allow optional courses that enable the student to focus on areas of interest like pure or applied mathematics. The B.S.Ed. in secondary education with an emphasis in mathematics introduces students to those branches of mathematics most relevant to the teaching of secondary- school mathematics. The B.S. in computer science prepares students for employment in modern computing technology and careers in computer science. Students pursuing the M.A. degree in mathematics may choose an emphasis in either pure or applied mathematics. The pure mathematics emphasis is well suited for students preparing to teach at the high school, junior college, or four year liberal arts college level. Those who concentrate on applied courses in the M.A. program build a foundation for the application of mathematics in industry and the continuation of their education in the Ph.D. program in applied mathematics. The M.S. degree in computer science emphasizes practical aspects of the field. The Ph.D. in applied mathematics prepares students for a leadership role involving research and development in both industrial and academic settings. Students may enroll in any of these graduate programs on a part-time basis. Career Outlook A degree in mathematics or computer science prepares well-motivated students for interesting careers. Our graduates find positions in industry, government, and education. The demand for individuals well trained in statistics, computer science, and applied mathematics is greater than the available supply. In addition, a number of graduates in mathematics have elected careers in business, law and other related fields where they find logical and analytical skills valuable. Graduates in computer science and mathematics from UM-St. Louis are located throughout the country, and they also have a strong local presence. They have careers in banking, health care, engineering and manufacturing, law, finance, public service, management, and actuarial management. Many are working in areas such as systems management, information systems and data management, scientific computing, and scientific positions in the armed services. Others have careers in education, especially at secondary and higher levels. Department Scholarships The Department of Mathematics and Computer Science offers two scholarships for students who are majoring in mathematics or computer science. The Mathematical Sciences Alumni Scholarship is a monetary award for outstanding undergraduates at the junior or senior level. The Edward Z. Andalafte Memorial Scholarship is a monetary award for outstanding students at the sophomore level or higher, including graduate students. Applicants for each of these scholarships must have a grade point average of 3.5 or higher in at least 24 hours of graded course work at the University of Missouri-St. Louis, and showsuperior achievement in courses in the mathematical sciences. Application forms may be obtained from the Department of Mathematics and Computer Science. The deadline for application for both scholarships is March 15, and the scholarships must be used for educational fees or for books at UM-St. 
Louis starting in the fall semester following the application. Undergraduate Studies General Education Requirements All majors must satisfy the university and appropriate school or college general education requirements. All mathematics courses may be used to meet the university’s general education breadth of study requirement in natural sciences and mathematics. Satisfactory/Unsatisfactory Restrictions Majors in mathematics and computer science may not take mathematical sciences or related area courses on a satisfactory/unsatisfactory basis. Students considering graduate study should consult with their advisers about taking work on a satisfactory/unsatisfactory basis. Degree Requirements All mathematical sciences courses presented to meet the degree requirements must be completed with a grade of C- or better. At least four courses numbered 3000 or above must be taken in residence. Students must have a 2.0 grade point average in the mathematical sciences courses completed. Students enrolling in introductory mathematics courses should check the prerequisites to determine if a satisfactory score on the Mathematics Placement Test is necessary. The dates on which this test is administered are given in the Schedule of Classes. Placement into introductory courses assumes a mastery of two years of high school algebra. A minimum grade of C- is required to meet the prerequisite requirement for any course except with permission of the department. Note: Courses that are prerequisites for higher-level courses may not be taken for credit or quality points if the higher-level course has been satisfactorily completed. Many students are qualified, as a result of having studied calculus in high school, to begin their major with Math 1900, Analytic Geometry and Calculus II, or Math 2000, Analytic Geometry and Calculus III. These students are urged to consult with the department before planning their programs. Credit for Mathematics 1800, Analytic Geometry and Calculus I, will be granted to those students who complete Mathematics 1900 with a grade of C- or better. Similarly, students who are ready to begin their computer science studies with Computer Science 2250, Programming and Data Structures, will be granted credit for Computer Science 1250, Introduction to Computing, once they complete Computer Science 2250 with a grade of C- or better. Degree Requirements in Mathematics All mathematics majors in all undergraduate programs must complete the mathematics core requirements. Core Requirements 1) The following courses are required: 1250, Introduction to Computing 1320, Applied Statistics I 1800, Analytic Geometry and Calculus I 1900, Analytic Geometry and Calculus II 2000, Analytic Geometry and Calculus III 2020, Introduction to Differential Equations 2450, Elementary Linear Algebra 3000, Discrete Structures 4100, Advanced Calculus I 2) The related area requirements as described below must be satisfied. Students seeking a double degree, either within this department or with another department, do not have to fulfill the related area requirements. Bachelor of Arts in Mathematics. In addition to the core requirements and the College of Arts and Sciences’ foreign language requirement, three mathematics courses at the 4000 level or higher must be completed. Of these, one must be 4400, Introduction to Abstract Algebra B.S.Ed. in secondary education with emphasis in mathematics. 
In addition to the core requirements and the required education courses, three mathematics/statistics courses at the 4000 level or higher must be completed. Of these, one must be 4400, Introduction to Abstract Algebra, and one must be chosen from: 4660, Foundations of Geometry or 4670, Introduction to Non-Euclidean Geometry Bachelor of Science in Mathematics In addition to the core requirements, the B.S. in Mathematics degree requires: i) Completing all of the following: 4160, Functions of a Complex Variable 4400, Introductions to Abstract Algebra 4450, Linear Algebra ii) Completing an additional three courses numbered above 4000 in mathematics, statistics or computer science, at least one of which must be in mathematics/statistics. Degree Requirements in Computer Science Candidates for the Bachelor of Science in Computer Science degree must complete the following work: 1) Computer Science 1250, Introduction to Computing 2250, Programming and Data Structures 2260, Object-Oriented Programming with C++ 2700, Computer Systems: Architecture and Organization 2710, Computer Systems: Programming 2750, Advanced Programming with Unix 3000, Discrete Structures 3130, Design and Analysis of Algorithms 4250, Programming Languages 4280, Program Translation Techniques 4760, Operating Systems 2) Mathematics and Statistics 1320, Applied Statistics I 1800, Analytic Geometry and Calculus I 1900, Analytic Geometry and Calculus II 2000, Analytic Geometry and Calculus III 2450, Elementary Linear Algebra 3) Philosophy 4458, Ethics and the Computer 4) Five more elective courses, numbered above 4000 if in computer science, and above 2010 if in mathematics or statistics. At least three of these elective courses must be in computer science, and at least one must be in mathematics or statistics. 5) Satisfy the related area requirements as described below. Related Area Requirements Candidates for the B.A. in Mathematics must satisfy the requirements in one of the groups below with a grade of C- or better. Candidates for the B.S.Ed. in Mathematics, B.S. in Mathematics and B.S. in Computer Science must satisfy the requirements in two of the groups below with a grade of C- or better. Candidates for the B.S. in Computer Science may not choose group 1. Candidates for the B.A. in Mathematics, B.S.Ed. in Mathematics, or B.S. in Mathematics may not choose group 2 or 3. Students seeking a double degree, either within this department or with another department, do not have to fulfill the related area requirements. 
Related Area Courses 1) Computer Science: Two courses from the following list: 2250, Programming and Data Structures 2700, Computer Systems: Architecture and Organization 3130, Design and Analysis of Algorithms 4140, Theory of Computation 4410, Computer Graphics 4440, Digital Image Processing 2) Mathematics (Analysis): Two courses from the following list: 2020, Introduction to Differential Equations 4030, Applied Mathematics I 4100, Advanced Calculus 4160, Functions of a Complex Variable 4230, Numerical Analysis I 3) Mathematics (Algebra): Two courses from the following list: 4350,Theory of Numbers 4400, Introduction to Abstract Algebra 4450, Linear Algebra 4550, Combinatorics 4) Statistics: 4200, Mathematical Statistics I 4210, Mathematical Statistics II 5) Biology: 2102, General Ecology 2103, General Ecology Laboratory 6) Biology: 2012, Genetics 4182, Population Biology 7) Chemistry: 1111, Introductory Chemistry I 1121, Introductory to Chemistry II 8) Chemistry: 3312, Physical Chemistry I and another 3000-level, or above, chemistry course. 9) Economics: 4100, Introduction to Econometrics, and one of either: 4110, Applied Econometrics or 4130, Econometric and Time Series Forecasting 3360, Formal Logic 3380, Philosophy of Science 4460, Advanced Formal Logic 11) Physics: 2111, Physics: Mechanics and Heat 2112, Physics: Electricity, Magnetism, and Optics 12) Physics: 3221, Mechanics and another 3000 level, or above, physics course 13) Business Administration: 3320, Introduction to Operations Management and one of the following courses: 4330, Production and Operations Management - Logistics 4324, Production and Operations Management - Service Systems 4312, Business Forecasting 4326, Quality Assurance in Business 4350, Operations Research 14) Engineering: 2310, Statics 2320, Dynamics Minor Requirements The department offers minors in computer science, mathematics, and statistics. All courses presented for any of these minors must be completed with a grade of C- or better. Minor in Computer Science The requirements for the minor are: 1250, Introduction to Computing 2250, Programming and Data Structures 2700, Computer Systems: Architecture and Organization and two additional coursescomputer science courses numbered above 2700. A minimum of two computer science courses numbered above 2700 must be taken in residence in the Department of Mathematics and Computer Science at UM-St. Louis. Minor in Mathematics The requirements for the minor are: 1800, Analytic Geometry and Calculus I 1900, Analytic Geometry and Calculus II 2000, Analytic Geometry and Calculus III and two additional three-hour mathematics courses numbered above 2400. A minimum of two mathematics courses numbered 2000 or above must be taken in residence in the Department of Mathematics and Computer Science at UM-St. Louis. Minor in Statistics The requirements for the minor are: 1320, Applied Statistics I 4200, Mathematical Statistics I and two additional courses in statistics numbered above 4200. A minimum of two statistics courses numbered above 2000 must be taken in residence in the Department of Mathematics and Computer Science at UM-St. Louis. Graduate Studies The Department of Mathematics and Computer Science offers an M.A. degree in mathematics, a Ph.D. degree in applied mathematics, and an M.S. degree in computer science. Applicants must meet the general admission requirements of the Graduate School, described elsewhere in this Bulletin. Additional admission requirements for specific programs are listed below. 
Mathematics Programs
Applicants must have at least a bachelor's degree in mathematics or in a field with significant mathematical content. Examples of such fields include computer science, economics, engineering, and physics. An applicant's record should demonstrate superior achievement in undergraduate mathematics. Individuals may apply for direct admission to either the M.A. or Ph.D. program. Candidates for the M.A. degree may choose to concentrate in either pure or applied mathematics. A student in the M.A. program may petition the department for transfer to the Ph.D. program upon successful completion of 15 credit hours and fulfillment of additional requirements as listed below.
Students intending to enter the Ph.D. program must have a working ability in modern programming technologies. A student with a deficiency in this area may be required to take courses at the undergraduate level in computer science. Applicants for the Ph.D. program must, in addition, submit three letters of recommendation and scores of the Graduate Record Examination (GRE) general aptitude test.

Computer Science Program
Applicants for the M.S. degree in Computer Science must have at least a bachelor's degree, preferably in computer science or in a related area. Students with bachelor's degrees outside computer science must demonstrate significant proficiency in computer science, either by taking the GRE subject area examination or by explicitly showing competence in the following areas. Any area requirement can be satisfied through suitable experience or completed coursework, if approved by the Graduate Director.
• Programming experience equivalent to at least two semesters, including knowledge of a modern structured language and a modern object-oriented language.
• Elementary data structures.
• Assembly language programming, computer architecture, or computer organization.
• Design and analysis of algorithms.
• Basic knowledge of the Unix operating system and program development environment.
Students must also have completed mathematics courses equivalent to the following:
• Two semesters of calculus.
• Elementary linear algebra.
• Discrete mathematical structures.
• Elementary probability or statistics.
A student missing some of the above requirements may be admitted on restricted status if there is strong supportive evidence in other areas. Special regulations of the Graduate School applying to students while they are on restricted status are described elsewhere in this Bulletin.

Preliminary Advisement
Incoming students are assigned advisers with whom they should consult before each registration period to determine an appropriate course of study. If necessary, students may be required to complete undergraduate course work without receiving graduate credit.

Degree Requirements
Master of Arts in Mathematics
Candidates for the M.A. degree must complete 30 hours of course work. All courses numbered below 5000 must be completed with grades of at least B. The courses taken must include those listed below in group A together with additional courses discussed in B. Students who have already completed courses equivalent to those in A may substitute other courses numbered above 4000. All substitutions of courses for those listed in A require the prior approval of the graduate director.
A) Mathematics core:
4100, Advanced Calculus
4160, Functions of a Complex Variable
4450, Linear Algebra
B) M.A.
candidates must also complete 15 hours of course work numbered 5000 or above, chosen with the prior approval of the graduate director. Courses may be chosen to develop expertise in either pure or applied mathematics.

Thesis Option
Part of B) may consist of an M.A. thesis written under the direction of a faculty member in the Department of Mathematics and Computer Science. A thesis is not, however, required for this degree. A student who wishes to write a thesis should enroll in 6 hours of Math 6900, M.A. Thesis. Students writing an M.A. thesis must defend their thesis in an oral exam administered by a committee of three department members which includes the thesis director.

Doctor of Philosophy in Applied Mathematics
The requirements for the Ph.D. degree include the following:
1. Course work
2. Ph.D. candidacy
3. Doctoral dissertation
The requirements are described in detail below.

1. Course Work
A minimum of 60 hours of courses numbered 4000 or above. At least 33 hours must be in courses numbered 5000 or above. All courses numbered below 5000 must be completed with a grade of at least B. Up to 9 hours can be in Math 7990, Ph.D. Dissertation Research. Courses outside the Department of Mathematics and Computer Science will require approval of the graduate director.

2. Advancement to Ph.D. Candidacy
Advancement to Ph.D. candidacy is a four-step process consisting of:
A) Completing 18 hours of 5000-level courses other than Math 7990, Ph.D. Dissertation Research.
B) Passing the comprehensive examinations.
C) Selecting a Ph.D. committee and preparing a dissertation proposal.
D) Defending the dissertation proposal.

Qualifying Examination
A student must fulfill the following requirements.
Basic Requirement: Pass one written examination covering the fundamental topics from advanced calculus, complex variables, and linear algebra (Math 4100, Math 4160, and Math 4450). This examination would normally take place within the first 12 credit hours of study after admission to the Ph.D. program.
Additional Requirement: After fulfilling the basic requirement above, the student must meet one of the following:
• Pass a written examination in an area of the student's interests. This area will be approved by the graduate committee and will be based on a set of two or more graduate courses taken by the student. This examination would normally take place within the first 24 credit hours of study after admission to the Ph.D. program.
• Write a survey paper in a specialized area under the direction of a member of the graduate faculty. The student should propose to take this option when he/she has already finished at least 2 graduate-level courses and has the approval of the graduate committee. The paper should be submitted within four semesters, at which time an oral examination given by a committee of at least three members of the graduate faculty must be passed.

Selection of a Ph.D. Committee and Preparation of a Dissertation Proposal. The student is required to identify a dissertation adviser and an area of specialization for the dissertation. The area of specialization can be in a discipline complementary to mathematics. Usually, students select an adviser from contacts made through course work or in the seminar series. The adviser and student will then form a Ph.D. committee which may include faculty from other departments at UM-St. Louis. The committee advises the student on course work and research. Each student must prepare a dissertation proposal.
This is a substantial document describing the problem to be worked on and the methods to be used. It should also demonstrate the student's proficiency in written communication. The proposal is to be submitted to the Ph.D. committee for approval.
Dissertation Proposal Defense. If the Ph.D. committee finds the student's dissertation proposal acceptable, a defense is scheduled. This is a public event in which the student demonstrates mastery of the necessary skills to begin research.

3. Dissertation and Dissertation Defense
Each Ph.D. candidate must write a dissertation which is an original contribution to the field on a topic approved by the candidate's Ph.D. committee and the department, and which meets the standards and requirements set by the Graduate School, including the public defense of the dissertation. Students working on a dissertation may enroll in Math 7990, Ph.D. Dissertation Research. A maximum of 9 hours in Math 7990 can be used toward the required hours of work in courses numbered 5000 or above.

Master of Science in Computer Science
Candidates for the M.S. degree in Computer Science must complete 30 hours of course work, subject to the Graduate School regulations. All courses numbered below 5000 must be completed with grades of at least B. Outside computer science, up to 6 hours of related course work is allowed upon permission of the Graduate Director. Students must receive credit in all areas of the following core requirements. Waiving or substituting for a specific requirement can be done on the basis of prior course work or experience at the discretion of the Graduate Director, but it will not reduce the total hours required for the degree.
Operating Systems: CS 4760 or CS 5760
Programming Languages: CS 4250
Computer Systems: CS 5700
Software Development: one of CS 5500, CS 5520, CS 5540, or CS 5560
Advanced Data Structures and Algorithms: CS 5130

Financial Assistance
Any student who intends to apply for financial assistance, in the form of a teaching assistantship or a research assistantship, is required to have three letters of recommendation submitted with the application to the graduate program in Mathematics or Computer Science. The application must include scores on the GRE general aptitude test. Applicants are also encouraged to submit scores on the GRE subject area test in Mathematics or Computer Science. Applications for financial assistance should be submitted before February 15 prior to the academic year in which the student expects to begin graduate study. Notifications of awards are generally made by March 15, and students awarded financial assistance are expected to return letters of acceptance by April 15.

Career Outlook
Graduates from the Department of Mathematics and Computer Science have little difficulty in finding positions in industry, government, and education. The demand for individuals well-trained in statistics, computer science, and applied mathematics is greater than the available supply. In addition, a number of graduates in mathematics have elected careers in business and other related fields where they have found their logical and analytical skills to be well-rewarded.
{"url":"http://www.umsl.edu/services/library/archives/Bulletins/Bulletin,%202004-2005/Bulletin%2004_05/AS/Mathematics.htm","timestamp":"2014-04-20T08:50:10Z","content_type":null,"content_length":"47668","record_id":"<urn:uuid:939fb114-dfda-46ad-8c83-bb0b7e35f8a7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Paul Klee's strata and the transition to complexity
Posted: May 25, 2010

[Image: An aerial view of the Nile]

It does not take much intuition or imagination to relate Paul Klee's "Monument in Fertile Country" (1929) to an aerial photograph. As the painting comes from a period right after the artist's trip to Egypt in the winter of 1928-1929, it does not take much research either to find a location bearing from above a striking resemblance to Klee's painting: almost any area around the Nile displays the same interplay between dark fertile plots and bright, blank blocks of desert land. Yet the "Monument" is only an example of a more general idea with which the artist experimented at that time and, though simple, it is purely mathematical at its core.
Beyond the immediately obvious, what is clearly different in the two pictures after only a little inspection is that Klee's "Monument" is not a nearly random arrangement of colored stripes and blocks but rather a careful composition of mathematical nature. And just as the same abstract mathematical law can often be applied in a variety of different situations, Klee extends the same abstract idea in a series of similar paintings beyond the aerial-view explanation, providing new imaginative ways of interpreting the colored blocks.

From music to geometry
Paul Klee (1879-1940), a German-Swiss, was a notoriously innovative artist. During his nearly forty creative years Klee produced an amazing array of more than 9000 pictures, more than one picture for every couple of days on average, using a dozen different techniques. His pictures, displaying a character equally as pictorial as poetic, are often playful or humorous, sometimes autobiographic, sometimes ambiguous or enigmatic, and almost always innovative. Klee's production is not dominated by a few masterpieces casting their shadow on the rest of the yield. Each and every picture is unique in its own way, resembling a "Kurzgeschichte" (a short story) or a poem, or even perhaps a short melody, its interpretation often revealed by a surprising, imaginative title chosen by the artist.
Klee, the son of a music teacher and a singer, played the violin and under his family's influence had early in his life set forth to becoming a musician, a pursuit he soon abandoned in favor of painting. His work as a whole is now often recognizably influenced by music; however, it provides an encyclopedic variety of techniques, styles and themes. Quite a few of his pictures cannot be considered "beautiful", at least not in the conventional, widely accepted sense of beauty, and some of them can be described as disturbing or even ugly. Klee himself commented on that using a mathematical metaphor: "to emphasize only the beautiful seems to me like a mathematical system that only concerns itself with positive numbers".
Geometry plays a key role in Klee's work, as his compositions were at times dominated by geometric shapes: circles and orthogonal blocks, polygonal shapes, parallel lines and continuous curves, single strokes that fold to create forms. Places and caricatured human characters, theatrical compositions, imaginary animals and plants, political statements, autobiographical notes, mythological characters, sensations and even allusions to science are mostly presented symbolically in a simplistic, fairy-tale manner, often paradoxical, enigmatic or dream-like.
In a clearly polyphonic manner, he often uses a specific style to permeate a number of different themes. Several of his works following his trip to Egypt display the use of orthogonal building blocks and stripes organized in "strata", as Klee himself named the horizontal arrangement of his orthogonal shapes in one of the pictures in this series ("Individualized Measurement of the Strata"). In the humorous etching titled "Old Man Calculating", where the bald caricature of a toothless old man is likened to that of an infant, the strata simply facilitate the construction of the figure "exotopically", i.e. mostly by using shading outside the form, a favorite technique of the artist. Moving across the picture, the strata double in number and become denser each time they meet with the man's outline.

The emergence of complexity
In the "Monument in Fertile Country" the artist takes a step further and makes use of the strata and the doubling to achieve a description of the emergence of complexity. As we move away from the dune-colored blocks in the middle, supposedly approaching the water of the Nile, the land becomes fertile and complexity arises in a rigorous, mathematical way. The dark-colored fertile plots appear after successive doublings in accordance with a geometric progression: 1, 2, 4, 8, 16. The same progression appears in another variation of the same theme ("Monument on the Edge of Fertile Country"), where Klee offers an aerial view of the edge of the desert progressively turning fertile towards the west.
An aerial or "bird's eye" view is offered also in "Highways and Byways", where the landscape appears obliquely to create the illusion of perspective and relief. Here complexity emerges geometrically by successive doublings from a central "highway" to create "byways", smaller areas of dense strata, only to be lost again inversely by successively halving their number. Interestingly, Klee's imaginative title of this painting suggests an interpretation other than the cultivated plots for his colored strata. With "Fire in the Evening", however, the strata, in a quite different range of colors, take on a quite different meaning, as they are used to partly conceal or partly uncover a blue-hour landscape scene. Klee's title acts as instructions on how to read his colored stripes and leaves no room for other interpretations. The strata provide only a partial view of the scene, again through the same geometric progression, making the detail denser only locally and away from the fire, which is rendered as a centrally placed glowing red block. Following Klee's instructions, the land, the horizon and the sky are immediately identified and even a hint of twilight can be recognized, depicted as a long, pink stratum at the top of the picture.

Turbulence and the strata
Then the painting strangely titled "In the Current, Six Thresholds" takes the strata and the geometric progression idea to a different level. Klee had explored in a series of sketches the flow of water as it gradually turns from laminar (smooth) to turbulent and apparently perceived the transition to turbulence as successively increased complexity. His ink sketch "Movement in Locks" depicts a right-to-left flow across six obstacles. After each obstacle, the flow's pattern becomes increasingly complex until, at the far left, unstable vortices or eddies appear at various scales, some even moving against the direction of the flow.
As these qualitative features are actually characteristic of turbulent flow, Klee had by intuition correctly perceived and depicted turbulence. Moreover, as the sketch's title refers to some undetermined "movement", turbulent flow as perceived by Klee is not restricted to water or liquid but is left open to extension to any fluid, as indeed is the case with the phenomenon. Klee proceeded to apply his strata idea to the representation of his perception of turbulence: the result is the "Six Thresholds", where the transition to turbulence is made through the successive doubling of the strata.
Turbulence, a theoretically unsolved problem in physics, has been associated with the so-called "chaos theory", a field of study which inhabits the region between mathematics and physics and examines systems highly "sensitive to initial conditions". To understand why turbulence is a "chaotic phenomenon", suppose that two corks float on the surface of flowing water. In the case of laminar (smooth) flow, a fairly accurate prediction of the movements of the corks can be made. Had the two corks been close to each other at the beginning of their movement, they would also be close to each other at the end. Moving a cork slightly from its initial position would not make a big difference to its final position: the outcome of the laminar flow is not sensitive to initial conditions. On the right side of Klee's "Six Thresholds" the large, broad strata are used to indicate such a laminar, predictable flow.
However, in the case of turbulent flow, the corks would float amongst various unstable, local, smaller or bigger vortices and eddies that make their movement unpredictable. Moving a cork slightly from its initial position could make a big difference to its final position, and the initially neighboring corks may end up far from each other after some time, in a quite unpredictable way. The movement of the corks in the turbulent flow is thus very sensitive to initial conditions. On the left side of the "Six Thresholds", the turbulent flow is depicted by an increased complexity of the strata arrangement using the geometric progression or "doubling" route.

The period doubling cascade
There has been a lot of study on how the transition to chaos is made, and several routes towards chaotic behavior have been examined in various phenomena, including turbulence. One of the chaotic transition scenarios involves a successive doubling and is best understood using the so-called "logistic map", a mathematical iterative rule proposed in the mid-seventies as a demographic model, though well known in a slightly different form since the early nineteenth century. The logistic map is elegantly expressed by the rule x_{n+1} = r x_n (1 - x_n), where r is a positive parameter. After an application of the rule to an arbitrary starting value x_0, a result x_1 is produced. In order to get the map going, this result must replace x in the right-hand side of the equation and produce a new result x_2, which in turn is fed back in, and so on. Depending on the value of r, the behavior of the map varies greatly: some values of r lead to the monotonous repetition of the same value over and over again, while some other values of r produce the alternation of several specific values over and over again. Such cases as the latter are called "periodic", and the logistic map produces, as r is increased, a period-2 solution, then a period-4, then a period-8, and so on in a successive doubling of periods.
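To see the doubling concretely, here is a minimal Python sketch (my addition, not part of the original post) that iterates the logistic map for a few values of r, chosen purely for illustration, and prints the long-run cycle each one settles into:

```python
# Long-run behavior of the logistic map x_{n+1} = r*x_n*(1 - x_n)
# for a few illustrative values of r (period 1, 2, 4, then chaos).

def settled_orbit(r, x0=0.5, burn_in=1000, keep=8):
    """Iterate the map, discard transients, return the next `keep` values."""
    x = x0
    for _ in range(burn_in):          # let transients die out
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):             # record the settled behavior
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

for r in (2.8, 3.2, 3.5, 3.9):
    print(f"r = {r}: {settled_orbit(r)}")

# Typical output: r=2.8 repeats one value, r=3.2 alternates two,
# r=3.5 cycles through four, and r=3.9 never settles (chaos).
```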
Obviously, a periodic solution is stable and highly predictable, and there is nothing chaotic about it. Each time the map switches to a higher period, a bifurcation appears in the graph of the system, creating an image that brings to mind the way a cascade separates into smaller currents and droplets. Soon, when a certain threshold in the values of r is crossed, the periods cease to appear and the solutions produced by the logistic map become unpredictable. For these values of r, the logistic map becomes very sensitive to the initial value of x, displaying a basic characteristic of chaotic behavior. Such a route to chaos is called a "period doubling scenario", and the resulting graph, highly reminiscent of Klee's strata, is a "period doubling cascade".
The logistic cascade is a fractal, a peculiar, rough geometric object whose smaller parts are size-reduced copies of the whole, a property which is called self-similarity and is closely connected to chaotic systems. Any fractal is an irregular, rugged shape whose irregularity remains the same at every scale. Klee's "strata" paintings seem to suggest a similar cascade sequence that may be imagined entering into smaller and smaller scales, producing images with evident self-similarity. Beyond the technical part of Klee's construction, the artist seems also to bring forward the philosophical idea that a simple mechanism may be responsible for complicated, non-Euclidean, fractal geometries such as the geometry encountered in nature.
As this series of paintings was inspired by the trip to Egypt, there is an immediate link between Klee's strata and the natural and cultural environment of the banks of the Nile. From the colored stripes of the "Monument" that represent the alternation of fertile and desert lands, down to the minute vortices of the "Six Thresholds" that may represent the waters of the Nile, Klee observed or discovered at all scales the same complex structure and even found a simple mathematical mechanism to reproduce it rigorously. His ideas and intuitive concepts, as expressed through his "strata" paintings, display a remarkable similarity in content and in form to the scientific ideas and results found in modern mathematics and physics.
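The cascade itself is easy to draw. The following short matplotlib sketch (again my addition, not part of the original post) plots the standard bifurcation diagram of the logistic map, the picture the essay compares to Klee's strata:

```python
# Bifurcation ("period doubling") cascade of the logistic map.
# For each r, iterate past the transient and plot the values visited.
import numpy as np
import matplotlib.pyplot as plt

rs = np.linspace(2.5, 4.0, 1200)     # range of the parameter r
x = np.full_like(rs, 0.5)            # one trajectory per r value

for _ in range(500):                 # discard transients
    x = rs * x * (1 - x)

r_pts, x_pts = [], []
for _ in range(120):                 # record the settled orbits
    x = rs * x * (1 - x)
    r_pts.append(rs)
    x_pts.append(x)

plt.plot(np.concatenate(r_pts), np.concatenate(x_pts), ",k", alpha=0.3)
plt.xlabel("r")
plt.ylabel("long-run values of x")
plt.title("Period doubling cascade of the logistic map")
plt.show()
```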
{"url":"http://pavlopoulos.wordpress.com/2010/05/25/paul-klees-strata-and-the-transition-to-complexity/","timestamp":"2014-04-19T06:58:57Z","content_type":null,"content_length":"82496","record_id":"<urn:uuid:8a82f391-a3bf-4199-a6c7-c256a96cfe69>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00153-ip-10-147-4-33.ec2.internal.warc.gz"}
Can adjoint linear transformations be naturally realized as adjoint functors?

Last week Yan Zhang asked me the following: is there a way to realize vector spaces as categories so that adjoint functors between pairs of vector spaces become adjoint linear operators in the usual sense?
It seems as if one needs to declare an inner product by fiat for this to work out. An obvious approach is to take the objects to be vectors and hom(v, w) to be the inner product (so the category should be enriched over C). But I don't see how composition works out here, and Yan says he tried this and it didn't work out as cleanly as he wanted. In this setup I guess we want the category to be additive and the biproduct to be vector addition, but I have no idea whether this actually happens.
I think John Baez's ideas about categorified linear algebra, especially categorified Hilbert spaces, are relevant here but I don't understand them well enough to see how they work out. Anyone who actually knows some category theory care to clear things up?

Tags: ct.category-theory, categorification, adjoint-functors

Comment: "I guess we want the category to be additive and the biproduct to be vector addition, but I have no idea whether this actually happens." Make it happen! Mod out the messy part so you get what you want. This works remarkably often. Of course, sometimes it gives you the trivial object... – SixWingedSeraph Jan 2 '10 at 1:30

Answer (accepted): There's a canonical way of going the other way, starting with two linear categories with nice finiteness properties, with adjoint functors between them, and getting a pair of vector spaces with adjoint linear transformations. The vector spaces are generated by formal symbols for each object in the category, and the inner product between any objects is the dimension of the Hom space (so Hom spaces had better be finite-dimensional). Note that this doesn't have to be symmetric. Functors give linear transformations, and adjoint functors are adjoint in the usual sense.
You can soup up this construction when you have some more structures on your category. For example, if you have a direct sum, then you can impose the relation $[A+B]=[A]+[B]$, and everything will work fine.
If your category is abelian, you can take the Grothendieck group, where $[A]+[C]=[B]$ for every short exact sequence $0\to A \to B \to C\to 0$, but then you have to be much more careful about the fact that lots of functors (including Hom with objects in the category!) aren't exact: they don't send short exact sequences to short exact sequences. You need to use derived functors to fix this.
There's no canonical way of going the direction you asked, though in practice we have a very good record of being able to, and I don't know of any really good examples of there being two equal natural-seeming but different such constructions.

Answer: I think it's more natural to take advantage of the monoidal structure and regard the vector spaces as functors rather than objects. For simplicity, consider only finite-dimensional vector spaces. Given V, we have a functor $F_V: Vect \to Vect$ which sends $W$ to $W \otimes V$. The familiar identification $Hom(U\otimes V, W) = Hom(U, W\otimes V^*)$ shows that the (category theory) adjoint of $F_V$ is $F_{V^*}$. (That's $F$ sub $V^*$, in case the font is too small to read.) Chaining together two of these adjunctive identifications of Hom sets, we have $Hom(V, X) = Hom(1, X\otimes V^*) = Hom(X^*, V^*)$.
The above identification sends a linear transformation $g:V\to X$ to the (linear algebra) adjoint $g^*: X^*\to V^*$. If $V$ and $X$ are inner product spaces then we can of course identify $V^*$ with $V$ and $X^*$ with $X$.
Maybe that's too elementary and not the answer you were looking for. But it seems to me it's the most simple and obvious way to relate linear algebra adjoints to category theory adjoints.

Answer: It just occurred to me that there may be a certain sense in which this is impossible in principle. Every equivalence of categories can be improved to an adjoint equivalence, by modifying either the unit or the counit. This is true for all sorts of categories (internal, enriched, fibered, etc.). So if there were a way to realize all vector spaces (or, say, inner product spaces) as some kind of category such that adjoint linear transformations became adjoint functors, we would expect that any isomorphism of vector spaces would give an equivalence of such categories, and hence could be improved to an adjoint equivalence, i.e. an isomorphism whose adjoint is its inverse. But this is false; not every isomorphism between inner product spaces is unitary/orthogonal.
I can't decide whether this is deep or nonsensical, but I thought I'd throw it out there.

Comment: I think your remark is actually evidence in favor of such a categorification being possible! You see... – Vectornaut Nov 13 '13 at 23:47
Comment: ... in a setting where a functor is the categorification of a linear map, a natural transformation should be the categorification of something like a homotopy of linear maps---let's say a constant-rank path through the space of linear maps. In this setting, an equivalence of categories would be a pair of linear maps whose compositions are isotopic to the identity. Your remark is the categorification of the fact that any isomorphism between inner product spaces can be improved, by the Gram-Schmidt process, to a pair of linear maps whose compositions are isotopic to the identity. – Vectornaut Nov 13 '13 at 23:48
Comment: But that sort of "natural transformation" would not yield the usual notion of "adjoint linear map" as a category-theoretic adjunction, would it? – Mike Shulman Nov 14 '13 at 16:33
Comment: I'm not sure what you mean. Are you saying that the notion of isotopy of linear maps somehow yields the notion of adjunction of linear maps, so that choosing how to categorify isotopy of linear maps will force a particular choice of how to categorify adjunction of linear maps? I don't see how that would work. – Vectornaut Nov 15 '13 at 20:10
Comment: The question was whether there is "a way to realize vector spaces as categories so that adjoint functors between pairs of vector spaces become adjoint linear operators in the usual sense". Does your proposal of isotopies as natural transformations have this property? – Mike Shulman Nov 17 '13 at 5:54

Answer: A neat correspondence between adjoint functions and adjoint functors is possible, if you relax your understanding of what it means for a category to "realize" a Hilbert space a bit. (The adjoint of a linear function only exists if the vector spaces are Hilbert spaces and the function is continuous, so I'll take the question to be about Hilbert spaces instead of vector spaces.)
Given a Hilbert space $H$, "realize" it as the partially ordered set of closed subspaces $S(H)$, regarded as a category. Then a continuous linear function $f \colon H \to K$ induces a contravariant functor $S(f) \colon S(H)^{\text{op}} \to S(K)$.
Now, denoting the adjoint function of $f$ by $f^\dagger \colon K \to H$, we get an adjunction between $S(f)$ and $S(f^\dagger)$. In fact, up to a scalar, any contravariant adjunction between $S(H)$ and $S(K)$ comes from an adjoint pair of functions between $H$ and $K$!
All this comes from a 1974 paper by Paul H. Palmquist, a student of Mac Lane, called "Adjoint functors induced by adjoint linear transformations," Proceedings of the AMS 44(2): 251-254.

Comment: "The adjoint of a linear function only exists if the vector spaces are Hilbert spaces and the function is continuous" – I guess you mean the only situation in which we may naturally view the adjoint of a continuous map $V \to W$ as a map $W \to V$? – L Spice May 9 '11 at 4:15
Comment: This is super cool! I think it's worth emphasizing that a linear map can be recovered up to a scalar from the associated functor (part iv of Theorem 1 in the paper cited). – Vectornaut Nov 14 '13 at 0:00

Answer: There's a simple way to make this work: Say T:V->X is a map of inner-product vector spaces. You can view V as a category, where Hom(v,w) is a singleton set containing one real number, the inner product <v,w>, and similarly for X. Composition, a binary operation, is defined (stupidly, as in any category with singleton hom-sets) as follows:
Comp_{uvw} : Hom(u,v) x Hom(v,w) -> Hom(u,w) by (<u,v>,<v,w>) |-> <u,w>
Then the adjoint T*:X->V satisfies <Tv,x>=<v,T*x>, i.e. Hom(Tv,x)=Hom(v,T*x), meaning it is a right adjoint to T (in a very strong sense: we have equality of these hom-sets instead of just natural isomorphism). The triviality of this example reflects the fact that T and T* are called "adjoint" simply because they belong on opposite sides of a comma :) In general, if H is any function of two variables, we can say that g is right adjoint to f "with respect to H" if H(f(a),b)=H(a,g(b)), and say that "adjoint functors" are "adjoint with respect to Hom" (up to natural isomorphism, of course).

Comment: Okay, but in this setup are the functors between such categories precisely the linear transformations? I don't have much intuition for whether this Hom makes sense. – Qiaochu Yuan Oct 14 '09 at 23:36
Comment: this approach forgets the whole vector space structure... – Martin Brandenburg Dec 28 '09 at 12:40

Answer: Check out John C. Baez, Higher-Dimensional Algebra II: 2-Hilbert Spaces, online. "The analogy to adjoints of operators between Hilbert spaces is clear. Our main point here is that this analogy relies on the more fundamental analogy between the inner product and the hom functor."
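As a concrete footnote to the identity <Tv, x> = <v, T*x> used in the answers above, here is a small NumPy check (my illustration, not part of the thread). For a real matrix with the standard inner products, the adjoint is just the transpose:

```python
# Numerical check that <Tv, x> = <v, T*x> when T* is the transpose
# (real vector spaces with the standard dot product).
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 3))   # T : R^3 -> R^4
v = rng.standard_normal(3)
x = rng.standard_normal(4)

lhs = np.dot(T @ v, x)            # <Tv, x> in R^4
rhs = np.dot(v, T.T @ x)          # <v, T*x> in R^3, with T* = T^T

print(lhs, rhs)                   # the two numbers agree
assert np.isclose(lhs, rhs)
```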
{"url":"http://mathoverflow.net/questions/476/can-adjoint-linear-transformations-be-naturally-realized-as-adjoint-functors/10420","timestamp":"2014-04-17T10:05:40Z","content_type":null,"content_length":"85718","record_id":"<urn:uuid:f96b0ff0-5b3d-4654-9258-4d7f791567ed>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
The Theory of Interest, as determined by Impatience to Spend Income and Opportunity to Invest it
Irving Fisher, The Theory of Interest, as determined by Impatience to Spend Income and Opportunity to Invest it [1930]

Available in the following formats:
Facsimile PDF small (9.33 MB): This is a compressed facsimile or image-based PDF made from scans of the original book.
Facsimile PDF (23.6 MB): This is a facsimile or image-based PDF made from scans of the original book.
Kindle (3.5 MB): This is an E-book formatted for Amazon Kindle devices.
EBook PDF (2.72 MB): This text-based PDF or EBook was created from the HTML version of this book and is part of the Portable Library of Liberty.
HTML (1.17 MB): This version has been converted from the original text. Every effort has been taken to translate the unique features of the printed book into the HTML medium.

About this Title: Fisher was one of America's greatest mathematical economists. This book is still used as a textbook and is an outstanding example of clearly written economic theory.
Copyright information: The text is in the public domain.
Fair use statement: This material is put online to further the educational goals of Liberty Fund, Inc. Unless otherwise stated in the Copyright Information section above, this material may be used freely for educational and academic purposes. It may not be used in any way for profit.
{"url":"http://oll.libertyfund.org/titles/fisher-the-theory-of-interest","timestamp":"2014-04-16T15:59:39Z","content_type":null,"content_length":"43746","record_id":"<urn:uuid:f3b28aca-1b00-450c-bf89-8a76c1943e81>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
New beginnings
Today was my first lecture of the new semester, there was some sort of hubbub on The Mall in Washington, and Jacob Lurie gave the first of a series of lectures on "Extended" Topological Quantum Field Theory and a proof of (some might say a precise statement of) the Baez-Dolan Cobordism Hypothesis.
According to Atiyah, a $d$-dimensional TQFT is a tensor functor, $Z$, from the category, $Cob(d)$, whose objects are closed $(d-1)$-manifolds^1, and whose morphisms are bordisms
$Hom(M,N) = \{B\colon \partial B \simeq M\amalg \overline{N}\}/\text{diffeomorphisms}$
to $\mathrm{Vect}$, the category of complex vector spaces. $Cob(d)$ is a symmetric monoidal category, with $\otimes$ given by disjoint union of manifolds, and $Z$ preserves tensor products
$\begin{gathered} Z(M\amalg N) \simeq Z(M)\otimes Z(N)\\ Z(\emptyset) \simeq \mathbb{C} \end{gathered}$
(In particular, a closed $d$-manifold, $B$, is a bordism from $\emptyset$ to itself, so $Z(B)\in Hom(\mathbb{C},\mathbb{C})=\mathbb{C}$ is just a number, an invariant of $B$.)
The vague idea of extended TQFT is to replace $Cob(d)$ with some sort of $n$-category (where $n=d$), consisting of manifolds of all dimensions $m\leq d$, and replace $\mathrm{Vect}$ with a similarly fancied-up $n$-category, $\mathcal{C}$. Over the course of several lectures, Jacob proposes to tell us exactly what these all are, but the vague version is as follows:
For $m\leq d$, a $d$-framing of a manifold, $M$, of dimension $m$, is a trivialization of $T_M\oplus \mathbb{R}^{d-m}$. $Cob(d)^{\text{framed}}_{\text{ext}}$ is a symmetric monoidal $d$-category (with tensor product given by disjoint union of manifolds).
• Objects are $0$-manifolds with a $d$-framing.
• Morphisms are $d$-framed bordisms between $d$-framed $0$-manifolds.
• 2-Morphisms are $d$-framed bordisms between $d$-framed $1$-manifolds.
• ⋮
• $d$-morphisms are $d$-framed $d$-manifolds (with corners) $/\text{diff}$
The statements which he proposes to prove are that, given a $d$-category, $\mathcal{C}$, with tensor product:
1. An ETQFT "with values in $\mathcal{C}$" is a tensor functor $Z:\, Cob(d)^{\text{framed}}_{\text{ext}} \to \mathcal{C}$.
2. The "fully dualizable" objects, $X\in \mathcal{C}$, are given by $X = Z(\bullet)$. In some sense, the whole ETQFT is determined by knowing what $Z$ of a point is.
Here, "fully dualizable objects" is some condition analogous to demanding that the vector spaces in an ordinary TQFT, associated to a $(d-1)$-manifold, are finite-dimensional.
There is, of course, a 110-page paper providing "an outline" of the idea. Probably, it will be more intelligible than my summary.
^1 Here, and below, a "manifold" is smooth, compact and oriented. Usually, it will be a manifold with boundary. When we want to denote a manifold without boundary, we'll call it "closed."
Posted by distler at January 21, 2009 2:07 AM

Comment (formalizing it), quoting "(some might say a precise statement of) the Baez-Dolan Cobordism Hypothesis": My impression is that nobody would claim that the original statement was already technically precise, nor that it was suggested to be. Part of the aspect of proving it is to find the natural formalization that makes it naturally true. In this respect the tangle hypothesis is not unlike the homotopy hypothesis, I'd think. Posted by: Urs Schreiber on January 21, 2009 7:47 AM

Comment (Re: formalizing it): In our paper on this subject, James Dolan and I explained that we used the term 'hypothesis' instead of 'conjecture' because the theory of $n$-categories wasn't sufficiently developed to state a precise conjecture. The goal was to get people to develop enough $n$-category theory to turn these hypotheses into conjectures and then into theorems.
It seems to be working. But unfortunately, a Google search under "Baez Dolan" still corrects this string to "Baez Dylan". Posted by: John Baez on January 23, 2009 1:02 PM

Comment (Re: New beginnings): I'd like to understand the import of the framing in this story. Most manifolds aren't framable, after all (referring to the top dimension, where we don't get any extra stuff to play around with). Later on in the notes, Lurie gives a version where the framing is removed and reasonably arbitrary other structures are substituted. So, I guess I'm wondering if the framing is just a math trick or is something deeper. Of course, I'm only on pg 50 of the notes, so maybe this is addressed later. Posted by: Aaron Bergman on January 21, 2009 12:17 PM

Comment (Re: New beginnings): There are many flavors of cobordism theory, and framed cobordism theory is the 'simplest' in a certain conceptual sense, though one of the most complicated to compute. So, the $n$-categorical description of cobordisms is simplest for framed cobordisms. So we should study this one first, and then others.
A bit more precisely, the spectrum for framed cobordism theory is the 'sphere spectrum'. This is a cleverly defined limit of the $n$-fold loop space of the $n$-sphere as $n \to \infty$. The simple nature of the $n$-sphere gives rise to the simplicity of the sphere spectrum. Again, this simplicity is conceptual rather than calculational: the homotopy groups of the sphere spectrum are famously tough to compute! But here's the reason for this conceptual simplicity:
The fundamental $d$-groupoid of the $n$-sphere should have a nice universal property: it should be the free $d$-groupoid on an $n$-loop. Here an $n$-morphism is called an '$n$-loop' if it's an automorphism of the identity of the identity of the identity… of the identity morphism of some object.
Why should the $n$-sphere have this property? It boils down to this: if you draw a diagram of an $n$-loop, it looks just like an $n$-sphere. Calculations show this idea is not so goofy as it may sound.
Starting from this hypothesized universal property of the fundamental groupoid of the $n$-sphere, one can heuristically derive a universal property of the sphere spectrum, and also of the framed cobordism $n$-categories. That's what Jim and I did on pages 28-29 here.
Here's another way to say what's so great about the sphere spectrum: it's the initial object in the category of ring spectra, just as the integers is the initial object in the category of rings. Posted by: John Baez on January 23, 2009 2:31 PM
{"url":"http://golem.ph.utexas.edu/~distler/blog/archives/001894.html","timestamp":"2014-04-21T12:09:55Z","content_type":null,"content_length":"34228","record_id":"<urn:uuid:e8472369-09e2-4128-9a2c-b59ba406e020>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project
Winning or Losing in the Game of Nim on Graphs

This Demonstration presents a random oriented graph that has at least one terminal vertex. Such graphs can represent the terminal game Nim. To play Nim, players alternate in choosing a sequence of connected vertices until a terminal vertex is reached by one of the players; then the other player wins. It can be shown that in such games all the positions can be divided into two subsets representing winning and losing positions. An algorithm is applied to partition an arbitrary directed graph. The graph on the left is a random graph with numbered vertices and edges. The graph on the right shows the losing positions in black and the winning positions in white.

A terminal game with n players is defined by:
• a partition of the nonempty position set X into nonempty sets X_1, …, X_n (the positions in which each player moves);
• a move function F (i.e. for every x in X, there is a subset F(x) of X) such that the terminal positions are exactly those with F(x) empty;
• a payoff function h (i.e. for every terminal element x, there is a vector h(x) of payoffs, one per player); each player tries to maximize it.
An arbitrary oriented graph with at least one terminal vertex can represent a Nim-like game that is a particular case of a terminal game on graphs, with two players, F given by the graph's edges, and the terminal elements equal to the set of terminal vertices.
It can be verified that the vertex set V can be partitioned into subsets W and L (i.e. V = W ∪ L and W ∩ L = ∅) with the properties:
• every successor of a vertex in L belongs to W;
• every vertex in W has at least one successor in L.
The vertices in the set W are called the "gain" vertices, and those in L are called the "loss" vertices, because the player who chooses a vertex from W can offer a losing vertex to the opponent (from L), while the player who chooses a vertex from L offers only winning vertices (from W). Obviously L is the graph's kernel.
Define T to be the set of terminal vertices. The algorithm is:
1. Set L := T and W := ∅.
2. For every not-yet-classified vertex v all of whose successors are already classified, we know whether v belongs to W or to L: if some successor of v lies in L, then v joins W; if all successors of v lie in W, then v joins L.
3. Repeat step 2 until no further vertices can be classified.

[1] B. Kummer, Spiele auf Graphen, Berlin, 1979.
(Moldova State University)
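A compact way to compute this partition in code: the sketch below (my own illustration, independent of the Demonstration's Mathematica source) labels the vertices of a finite digraph, given as a successor dictionary, by the backward-labeling algorithm just described:

```python
# Partition the vertices of a directed graph into "gain" (W) and
# "loss" (L) positions by backward labeling from the terminal vertices.

def partition_positions(succ):
    """succ maps each vertex to the list of its successors."""
    L = {v for v, out in succ.items() if not out}   # terminal vertices lose
    W = set()
    changed = True
    while changed:
        changed = False
        for v, out in succ.items():
            if v in L or v in W:
                continue
            if any(u in L for u in out):        # can hand the opponent a loss
                W.add(v); changed = True
            elif all(u in W for u in out):      # every move hands the opponent a gain
                L.add(v); changed = True
    return W, L   # vertices in neither set lie on unresolved (draw) cycles

# Tiny example graph: 3 -> 2 -> 1 -> 0, plus a shortcut 3 -> 1.
succ = {0: [], 1: [0], 2: [1], 3: [2, 1]}
print(partition_positions(succ))   # W = {1, 3}, L = {0, 2}
```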
{"url":"http://demonstrations.wolfram.com/WinningOrLosingInTheGameOfNimOnGraphs/","timestamp":"2014-04-20T21:45:44Z","content_type":null,"content_length":"50008","record_id":"<urn:uuid:18d8e62d-85a4-478b-8be9-ba6b0282da19>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
The Dietz Algorithm
The Income Builder newsletter uses the Dietz Algorithm for calculating returns on its model portfolios.
The Dietz Algorithm is a method for calculating investment returns (internal rate of return) when the portfolio is subject to net contributions (redemptions) during the measured time interval. This method assumes that the net contributions (redemptions) are received evenly through the time interval. It accomplishes this by adding half of the net contributions to the beginning portfolio value and subtracting half of the contributions from the ending portfolio value.
In mathematical terms, the Dietz Algorithm is:
R[i] = ((P[i] - 0.5*C[i]) / (P[o] + 0.5*C[i]) - 1) x 100
P[o] = portfolio value at beginning of time interval i
P[i] = portfolio value at end of time interval i
C[i] = net contributions during time interval i
R[i] = net rate of return during time interval i
This method was first described in "Pension Fund Investment Performance - What Method to Use When," by Peter O. Dietz, from the January/February 1966 issue of the Financial Analysts Journal.
Taken from Managing Investment Portfolios, A Dynamic Process, edited by John L. Maginn and Donald L. Tuttle; Second Edition, 1990, Warren Gorham & Lamont.
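As a quick illustration (my own sketch, not from the original page), the formula translates directly into a few lines of Python; the sample values are invented:

```python
# Original Dietz return: half of the net contribution is assumed to be
# invested for the whole interval, half for none of it.

def dietz_return(begin_value, end_value, net_contributions):
    """Percent return over one interval, per the Dietz formula."""
    gain = end_value - net_contributions / 2.0
    base = begin_value + net_contributions / 2.0
    return (gain / base - 1.0) * 100.0

# A portfolio starts at 10,000, receives 1,000 in new money spread
# over the period, and ends at 11,500.
print(round(dietz_return(10_000, 11_500, 1_000), 2))  # about 4.76 (%)
```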
{"url":"http://www.larkresearch.com/dietz.htm","timestamp":"2014-04-16T18:59:41Z","content_type":null,"content_length":"7790","record_id":"<urn:uuid:fd2c0db7-bbe6-46ec-ba61-9144f45269db>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding Fact Families
In this lesson, the relationship of subtraction to addition is introduced with a book and with dominoes. Then students explore the concept of missing addends.
To review the concept of subtraction, read Ten Sly Piranhas. Ask the children to act out with counters what is happening in the story and to write the related subtraction sentence for each page. Then call out a sum and have each child show you a domino with that many spots. Encourage the students to write the addition equation suggested by the domino. In the example above, students may suggest the following addition equation:
6 + 4 = 10
Next, choose two dominoes with the same number of total spots, then display them with one crossed over the other so that both parts of the upper domino but only one part of the bottom domino is visible. Now tell the children that both dominoes have the same number of spots and that they are to guess how many spots are covered on the bottom domino. When a correct response is given, display the domino and ask the students to explain how they knew. Model the activity a few more times, being sure to include one example of what happens when one domino has 0 spots on one side. Then place the students in pairs and have them take turns being the teacher. This activity will help them focus on the relationship of subtraction to addition. Finally, ask the pairs to sort the set of Double 6 dominoes by the sums that the dominoes represent. Ask the students to write a sentence about this exercise for their portfolios.
Now call the children together and ask a volunteer to choose a domino that is not a double and write the four number sentences (two addition and two subtraction) that the domino suggests. You may wish to repeat this exercise with other volunteers. For the domino above, the following addition and subtraction sentences are suggested:
4 + 5 = 9
5 + 4 = 9
9 - 5 = 4
9 - 4 = 5
As the lesson concludes, remind the students that they need to practice the addition facts and that making more triangle-shaped flash cards will help them to do so.

Materials
• Book: Ten Sly Piranhas, by William Wise
• Dominoes
• Index cards

1. The Questions for Students help students focus on their current level of understanding and of fact mastery.
2. You may wish to add more documentation to the Class Notes chart. These notes will be valuable as you plan appropriate remediation and enrichment opportunities.

Questions for Students
1. What is missing when I say "2 + 'something' = 5"? Can you write the complete addition sentence?
[3 is missing; 2 + 3 = 5]
2. What about when I say "6 + 'something' equals 6"? What addition sentence would show that?
[0 is missing; 6 + 0 = 6]
3. What addition and subtraction facts can I write if I pick a 3+4 domino? Suppose I pick a 3+0 domino? A 3+3 domino?
[Answers may include any of the following:
3 + 4 = 7, 7 - 4 = 3
3 + 0 = 3, 3 - 0 = 3
3 + 3 = 6, 6 - 3 = 3]
4. How could you help a friend find a subtraction fact related to 5 + 4 = 9?
[Student responses may vary, but they may say 9 - 4 = 5]

Teacher Reflection
• Which students have some of the facts memorized?
• Did most students remember the effects of adding by 0? Did most recall the order property?
• Which students met all the objectives of this lesson? What extension activities are appropriate for those students?
• Which students are still having difficulty with the objectives of this lesson? What additional instructional experiences do they need?
• What will you do differently the next time that you teach this lesson?
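For teachers comfortable with a bit of scripting, a short Python sketch (my addition, not part of the published lesson) can generate the full fact family for any domino, which is handy for making answer keys or flash cards:

```python
# Generate the fact family (addition and subtraction sentences)
# suggested by a domino with `a` and `b` spots.

def fact_family(a, b):
    total = a + b
    facts = [f"{a} + {b} = {total}", f"{total} - {b} = {a}"]
    if a != b:                      # doubles give only two distinct facts
        facts += [f"{b} + {a} = {total}", f"{total} - {a} = {b}"]
    return facts

for sentence in fact_family(4, 5):
    print(sentence)
# 4 + 5 = 9, 9 - 5 = 4, 5 + 4 = 9, 9 - 4 = 5
```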
Related Lessons
This lesson focuses on the counting model for addition and begins with reading a counting book. Students model the numbers with counters as the book is read. Then they count the spots on each side of a domino and write, in vertical and horizontal format, the sums suggested by dominoes. Finally, the students illustrate a domino and record a sum it represents. (Pre-K-2, 3-5)
In this lesson, students generate sums using the number line model. This model highlights the measurement aspect of addition and is a distinctly different representation of the operation from the model presented in the previous lesson. The order (commutative) property is also introduced. At the end of the lesson, students are encouraged to predict sums and to answer puzzles involving addition.
This lesson builds on the previous two lessons and encourages students to explore another model for addition, the set model. This model is similar to the counting model in the first lesson, because it is based on counting. Reading a related counting and addition book sets the stage for this lesson, in which students write story problems, find sums using sets, and present results in the form of a table. In the discussion of the table, the students focus on the order property and the effects of adding 0.
This lesson encourages students to explore another model of addition, the balance model. The exploration also involves recording the modeled addition facts in equation form. Students begin to memorize the addition facts by playing the "seven-up game."
In this lesson, the students focus on dominoes with the same number of spots on each side and on the related addition facts. They make triangle-shaped flash cards for the doubles facts.

Learning Objectives
Students will:
• Find missing addends
• Review the additive identity
• Relate subtraction to addition

Common Core State Standards – Mathematics
Kindergarten, Algebraic Thinking
• CCSS.Math.Content.K.OA.A.1 Represent addition and subtraction with objects, fingers, mental images, drawings, sounds (e.g., claps), acting out situations, verbal explanations, expressions, or equations.
• CCSS.Math.Content.K.OA.A.2 Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem.
• CCSS.Math.Content.K.OA.A.5 Fluently add and subtract within 5.
Grade 1, Algebraic Thinking
• CCSS.Math.Content.1.OA.B.4 Understand subtraction as an unknown-addend problem. For example, subtract 10 - 8 by finding the number that makes 10 when added to 8.
• CCSS.Math.Content.1.OA.C.6 Add and subtract within 20, demonstrating fluency for addition and subtraction within 10. Use strategies such as counting on; making ten (e.g., 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14); decomposing a number leading to a ten (e.g., 13 - 4 = 13 - 3 - 1 = 10 - 1 = 9); using the relationship between addition and subtraction (e.g., knowing that 8 + 4 = 12, one knows 12 - 8 = 4); and creating equivalent but easier or known sums (e.g., adding 6 + 7 by creating the known equivalent 6 + 6 + 1 = 12 + 1 = 13).
• CCSS.Math.Content.1.OA.D.8 Determine the unknown whole number in an addition or subtraction equation relating to three whole numbers. For example, determine the unknown number that makes the equation true in each of the equations 8 + ? = 11, 5 = _ - 3, 6 + 6 = _.
Grade 1, Number & Operations
• CCSS.Math.Content.1.NBT.C.4 Add within 100, including adding a two-digit number and a one-digit number, and adding a two-digit number and a multiple of 10, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used. Understand that in adding two-digit numbers, one adds tens and tens, ones and ones; and sometimes it is necessary to compose a ten.
Grade 2, Algebraic Thinking
• CCSS.Math.Content.2.OA.B.2 Fluently add and subtract within 20 using mental strategies. By end of Grade 2, know from memory all sums of two one-digit numbers.
Grade 2, Number & Operations
• CCSS.Math.Content.2.NBT.B.7 Add and subtract within 1000, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method. Understand that in adding or subtracting three-digit numbers, one adds or subtracts hundreds and hundreds, tens and tens, ones and ones; and sometimes it is necessary to compose or decompose tens or hundreds.

Common Core State Standards – Practice
• CCSS.Math.Practice.MP4 Model with mathematics.
• CCSS.Math.Practice.MP5 Use appropriate tools strategically.
• CCSS.Math.Practice.MP6 Attend to precision.
{"url":"http://illuminations.nctm.org/Lesson.aspx?id=372","timestamp":"2014-04-18T17:08:52Z","content_type":null,"content_length":"79414","record_id":"<urn:uuid:44a926bb-28cf-4a9b-93a4-669049b68d71>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
Receivable Turnover Ratio
The receivable turnover ratio (debtors turnover ratio, accounts receivable turnover ratio) indicates the velocity of a company's debt collection, the number of times average receivables are turned over during a year. This ratio determines how quickly a company collects outstanding cash balances from its customers during an accounting period. It is an important indicator of a company's financial and operational performance and can be used to determine if a company is having difficulties collecting sales made on credit.
The receivable turnover ratio indicates how many times, on average, accounts receivable are collected during a year (sales divided by the average of accounts receivable). A popular variant of the receivables turnover ratio is to convert it into an average collection period in terms of days. The average collection period (also called Days Sales Outstanding, DSO) is the number of days, on average, that it takes a company to collect its accounts receivable, i.e. the average number of days required to convert receivables into cash. It is an accounting measure used to quantify a firm's effectiveness in extending credit as well as collecting debts.

Calculation (formula)
Receivables turnover ratio = Net credit sales / Average accounts receivable
Accounts receivable outstanding in days:
Average collection period (Days sales outstanding) = 365 / Receivables turnover ratio

Norms and Limits
There is no general norm for the receivables turnover ratio; it strongly depends on the industry and other factors. The higher the value of receivable turnover, the more efficient the management of debtors is, or the more liquid the debtors are: the better the company is in terms of collecting its accounts receivable. Similarly, a low debtors turnover ratio implies inefficient management of debtors or less liquid debtors. But in some cases a too-high ratio can indicate that the company's credit lending policies are too stringent, preventing prime borrowing candidates from becoming customers.

Exact formula in the ReadyRatios analytic software
Average collection period = ((F1[b][TradeAndOtherCurrentReceivables] + F1[e][TradeAndOtherCurrentReceivables])/2)/(F2[Revenue]/NUM_DAYS)
Receivables turnover ratio = 365 / Average collection period
F2 – Statement of comprehensive income (IFRS).
F1[b], F1[e] – Statement of financial position (at the [b]eginning and at the [e]nd of the analyzed period).
NUM_DAYS – Number of days in the analyzed period.
365 – Days in year.

Comment (Reah Paeaz): What if the company income statement didn't tell credit sales? Can I use general sales instead?
Reply: As you can see in the formula above, instead of credit sales you can use a simplified way of calculation (receivables from the balance sheet and revenue from the income statement).
Reply: You should use NUM_DAYS = 365 for annual calculations only. If you take revenue for 1 month, use NUM_DAYS = 30 (31). The receivables turnover ratio is always an annual indicator (it shows the number of turns during the year), so 365 days is used in its formula. Of course, you can calculate your own custom indicator for a different period if you like.

Guest, 30 October 2013 (replying to Reah Paeaz's question about missing credit sales): We will use revenue from operations.

Guest, 19 November 2013: How do we calculate the receivables account if the business expects to be paid by credit cards in about 48 days of accounts receivable? Accounts receivable at the end of the period = 48 days x Sales / 365 days. Is this correct?

Guest, 27 December 2013: What if the company has neither sales nor revenue? What to do?! :( I'm in deep trouble, I really need help ASAP.

Guest, 25 January 2014: Well, you may need to study accounting.

Guest, 12 February 2014: Hi, in regards to the debtors days, it says it is better if the figure is higher, but if the figure is higher it will take you longer to recover your debts. Wouldn't this be a disadvantage?

Guest, 22 February 2014: What if the AR turnover does not change anymore: no addition in sales and no payment for more than 1 year, say the customer didn't pay for 24 months. How do we calculate AR days? Can we use the number of days for 2 years, e.g. with an AR turnover of 0.95, AR days = (365+365)/0.95 = 768 days? Secondly, do AR days have a maximum? If AR days is only an annual indicator, what about AR payments that take more than 1 year?
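To make the two formulas concrete, here is a minimal Python sketch of the calculation described in the article; the figures are made-up illustrative inputs, not taken from the article.

    # Receivables turnover and average collection period, as defined above.
    net_credit_sales = 120_000     # credit sales for the year (hypothetical)
    receivables_begin = 18_000     # accounts receivable at the start of the period
    receivables_end = 22_000       # accounts receivable at the end of the period
    num_days = 365                 # annual period

    average_receivables = (receivables_begin + receivables_end) / 2
    receivables_turnover = net_credit_sales / average_receivables
    average_collection_period = num_days / receivables_turnover

    print(f"Receivables turnover: {receivables_turnover:.2f} times per year")   # 6.00
    print(f"Average collection period: {average_collection_period:.1f} days")   # 60.8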
{"url":"http://www.readyratios.com/reference/asset/receivable_turnover_ratio.html","timestamp":"2014-04-18T20:42:38Z","content_type":null,"content_length":"52613","record_id":"<urn:uuid:996b8809-76bf-42e8-812d-fac266b63edb>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Archives WWW Server

Topics in Mathematics: Teaching materials, software, WWW links organized by Mathematical Topics. Searchable database.

Software: Public domain and shareware software for Macintosh, Windows (2000, ME, 98, 95, 3.1 and MSDOS) computers and for multi-platforms (incl. UNIX), in addition to links to other software sites.

Teaching Materials

Other Math Archives Features

Other Links: Links to other mathematics related sites.

Math and the Web: Tutorials and information on developing materials for the web.

What's New on the Math Archives: A listing of the current month's and previous months' additions to the Math Archives.

Math Archives Information: Goals, financial support, personnel, information on submitting materials to the Math Archives, etc.

Hosted on SunSITE, University of Tennessee, Knoxville. If you encounter any problems with this server or have suggestions on how to improve the offerings of this server, please contact us by sending a message to husch@math.utk.edu

©1996-2001 Mathematics Archives
{"url":"http://archives.math.utk.edu/newindex.html","timestamp":"2014-04-20T06:23:38Z","content_type":null,"content_length":"5861","record_id":"<urn:uuid:e45739dd-531d-4432-a110-7ffe477fc560>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Example of a smooth morphism where you can't lift a map from a nilpotent thickening?

Definition. A locally finitely presented morphism of schemes $f\colon X\to Y$ is smooth (resp. unramified, resp. étale) if for any affine scheme $T$, any closed subscheme $T_0$ defined by a square zero ideal $I$, and any morphisms $T_0\to X$ and $T\to Y$ making the following diagram commute

    T_0 --> X
     |      |
     |      | f
     v      v
     T ---> Y

there exists (resp. exists at most one, resp. exists exactly one) morphism $T\to X$ which fills the diagram in so that it still commutes.

For checking that $f$ is unramified or étale, it doesn't matter that I required $T$ to be affine. The reason is that for an arbitrary $T$, I can cover $T$ by affines, check if there exists (a unique) morphism on each affine, and then "glue the result". If there's at most one morphism locally, then there's at most one globally. If there's a unique morphism locally, then there's a unique morphism globally (uniqueness allows you to glue on overlaps). But for checking that $f$ is smooth, it's really important to require $T$ to be affine in the definition, because it could be that there exist morphisms $T\to X$ locally on $T$, but it's impossible to find these local morphisms in such a way that they glue to give a global morphism.

Question: What is an example of a smooth morphism $f\colon X\to Y$, a square zero nilpotent thickening $T_0\subseteq T$ and a commutative square as above so that there does not exist a morphism $T\to X$ filling in the diagram?

I'm sure I worked out such an example with somebody years ago, but I can't seem to reproduce it now (and maybe it was wrong). One thing that may be worth noting is that the set of such filling morphisms $T\to X$, if it is non-empty, is a torsor under $Hom_{\mathcal O_{T_0}}(g^*\Omega_{X/Y},I)=\Gamma(T_0,g^*\mathcal T_{X/Y}\otimes I)$, where $\mathcal T_{X/Y}$ is the relative tangent bundle. So the obstruction to finding such a lift will represent an element of $H^1(T_0,g^*\mathcal T_{X/Y}\otimes I)$ (you can see this with Čech cocycles if you want). So in any example, this group will have to be non-zero.

Is it the same if you substitute "Artinian local ring" for "affine scheme" in the definitions? I would expect smoothness, unramifiedness, and étaleness to be local properties... – Qfwfq Apr 21 '10 at 6:16

@unknown (google): this is trivial for formally unramified morphisms, a hard theorem for formally étale morphisms, and an open problem for formally smooth morphisms. This is why we require it for all rings in the finitely presented case, because we want étale (resp. smooth, resp. unramified) morphisms to be finitely presented and formally smooth (resp. étale, resp. unramified). You shouldn't build reductions into the definition when they only hold in special cases. – Harry Gindi Apr 21 '10 at 20:09

See my question on formally étale morphisms (and the answer from Gabber that I posted). – Harry Gindi Apr 21 '10 at 20:13

@unknown: Yes, you get the same notions if you just use Artinian local rings; see EGA IV, section 17.4 (particularly Proposition 17.4.2). This is actually much more local than I first expected these properties to be. It wouldn't be unreasonable to expect that you can only reduce down to the case where you replace "affine scheme" by "spectrum of a local ring".
– Anton Geraschenko Apr 22 '10 at 3:31

By the way, I was implying that yes, you can make the substitution that you noted in the finitely presented case, but not in the formal case, so my comment is consistent with Anton's comment. – Harry Gindi Apr 22 '10 at 20:13

4 Answers

Using some of BCnrd's ideas together with a different construction, I'll give a positive answer to Kevin Buzzard's stronger question; i.e., there is a counterexample for any non-etale smooth morphism.

Call a morphism $X \to Y$ wicked smooth if it is locally of finite presentation and for every (square-zero) nilpotent thickening $T_0 \subseteq T$ of $Y$-schemes, every $Y$-morphism $T_0 \to X$ lifts to a $Y$-morphism $T \to X$.

Theorem: A morphism is wicked smooth if and only if it is etale.

Proof: Anton already explained why etale implies wicked smooth. Now suppose that $X \to Y$ is wicked smooth. In particular, $X \to Y$ is smooth, so it remains to show that the geometric fibers are $0$-dimensional. Wicked smooth morphisms are preserved by base change, so by base extending by each $y \colon \operatorname{Spec} k \to Y$ with $k$ an algebraically closed field, we reduce to the case $Y=\operatorname{Spec} k$. Moreover, we may replace $X$ by an open subscheme to assume that $X$ is etale over $\mathbb{A}^n_k$ for some $n \ge 0$.

Fix a projective variety $P$ and a surjection $\mathcal{F} \to \mathcal{G}$ of coherent sheaves on $P$ such that some $g \in \Gamma(P,\mathcal{G})$ is not in the image of $\Gamma(P,\mathcal{F})$. (For instance, take $P = \mathbb{P}^1$, let $\mathcal{F} = \mathcal{O}_P$, and let $\mathcal{G}$ be the quotient corresponding to a subscheme consisting of two $k$-points.) Make $\mathcal{O}_P \oplus \mathcal{F}$ an $\mathcal{O}_P$-algebra by declaring that $\mathcal{F} \cdot \mathcal{F} = 0$, and let $T = \operatorname{\bf Spec}(\mathcal{O}_P \oplus \mathcal{F})$. Similarly, define $T_0 = \operatorname{\bf Spec}(\mathcal{O}_P \oplus \mathcal{G})$, which is a closed subscheme of $T$ defined by a nilpotent ideal sheaf. We then may view $g = 0+g \in \Gamma(P,\mathcal{O}_P \oplus \mathcal{G}) = \Gamma(T_0,\mathcal{O}_{T_0})$.

Choose $x \in X(k)$; without loss of generality its image in $\mathbb{A}^n(k)$ is the origin. Using the infinitesimal lifting property for the etale morphism $X \to \mathbb{A}^n$ and the nilpotent thickening $P \subseteq T_0$, we lift the point $(g,g,\ldots,g) \in \mathbb{A}^n(T_0)$ mapping to $(0,0,\ldots,0) \in \mathbb{A}^n(P)$ to some $x_0 \in X(T_0)$ mapping to $x \in X(k) \subseteq X(P)$. By wicked smoothness, $x_0$ lifts to some $x_T \in X(T)$. The image of $x_T$ in $\mathbb{A}^n(T)$ lifts $(g,g,\ldots,g)$, so each coordinate of $x_T$ is a global section of $\mathcal{F}$ mapping to $g$, which is a contradiction unless $n=0$. Thus $X \to Y$ is etale.

Wicked :-) I think that's a pretty definitive answer to Anton's question! – Kevin Buzzard Apr 21 '10 at 19:48

Bjorn, that's great. I'd been trying to make something like that work (to ultimately reduce to something with affine space in dim > 0), but I kept getting stuck. Then I gave up and just posted the partial idea as comment to your earlier answer. Good to see this kind of strategy works in the end, even simpler than the P^1 case. Wicked, indeed.
– BCnrd Apr 22 '10 at 3:26

Let $Y=\operatorname{Spec} k$ and let $X=\mathbb{P}^1_k$, viewed as $\operatorname{Spec} k[t]$ glued to $\operatorname{Spec} k[t^{-1}]$. Let $T = \operatorname{\bf Spec}(\mathcal{O}_X + \mathcal{O}_X(-2)\epsilon + \mathcal{O}_X(-4) \epsilon^2)$ where $\epsilon^3=0$, so $T$ is $$\operatorname{Spec}(k[t] + k[t]\epsilon + k[t]\epsilon^2)$$ glued to $$\operatorname{Spec}(k[t^{-1}] + t^{-2} k[t^{-1}]\epsilon + t^{-4} k[t^{-1}]\epsilon^2).$$ Let $I$ be the ideal sheaf of $\mathcal{O}_T$ generated by $\epsilon^2$, and let $T_0$ be the associated subscheme. Consider the $k$-morphism $T_0 \to X$ given by $$t \mapsto t + \epsilon$$ $$t^{-1} \mapsto t^{-1} - t^{-2} \epsilon.$$ (Check that this is well-defined, i.e., that $(t+\epsilon)(t^{-1} - t^{-2} \epsilon) = 1$ in $k[t,t^{-1}][\epsilon]/(\epsilon^2)$.) A lift of this to a morphism $T \to X$ has the form $$t \mapsto t + \epsilon + f(t) \epsilon^2$$ $$t^{-1} \mapsto t^{-1} - t^{-2} \epsilon + t^{-4} g(t^{-1}) \epsilon^2$$ for some polynomials $f$ and $g$, but the compatibility condition is now $$t^{-3} g(t^{-1}) - t^{-2} + t^{-1} f(t) = 0,$$ which has no solution.

Note: If we replaced $\mathcal{O}(-4)$ with $\mathcal{O}(-3)$, then there would be a lifting, given by the Taylor polynomials of $t$ and $t^{-1}$, i.e., $$t \mapsto t + \epsilon$$ $$t^{-1} \mapsto \frac{1}{t+\epsilon} = t^{-1} - t^{-2} \epsilon + t^{-3} \epsilon^2.$$

Bjorn: your example seems to beg the question: if $X\to Y$ is any smooth but not etale morphism, does there always exist $T_0$ and $T$ as above such that the map doesn't lift?! – Kevin Buzzard Apr 21 '10 at 9:10

Kevin, if we pick a point $y \in Y$ such that $X_y$ has positive dimension, we can pass to that fiber to reduce to the case $Y = {\rm{Spec}}(k)$ for a field $k$. Can work locally on $X$, so $X$ affine. The alg. closure of $k$ in $X$ is finite sepble, so can replace $k$ with that so $X$ geom. connected. By Bertini (and Bjorn's version for finite $k$), can then slice $X$ down to a geom. connected smooth affine curve. So Bjorn's case is not as far from the most general case as it may have initially seemed. – BCnrd Apr 21 '10 at 16:08

I added an argument for Kevin's general version as a separate answer. – Bjorn Poonen Apr 21 '10 at 21:23

The kind of example that comes easily to mind is where $X=L$ is a line bundle over $Y$, a smooth projective variety over, say, $\mathbb{Z}_p$. We take $T=Y$ and $T_0=Y_0$, the special fiber of $Y$. Then $L$ can have plenty of sections over $Y_0$ that refuse to lift to $Y$. Note that this implies what you want, since if the sections could be lifted repeatedly over square-zero ideals, then they could be lifted all the way to $Y$. (By formal GAGA, if you want.) That is, replace the original $Y_0$ by $Y\otimes \mathbb{Z}/p^n$ for larger $n$. One place you can see this spelled out with $L$ the tensor powers of the canonical bundle $\omega_{Y/\mathbb{Z}_p}$ ('jump of plurigenera') is a paper by Junecue Suh: Compos. Math. 144 (2008), no. 5, 1214–1226. (Unfortunately, I have no link that doesn't require a log-in.) I think he constructs examples where the jump is arbitrarily large even for Shimura surfaces. This example is probably overly pathological, and I suspect you can construct more commonplace $L$.
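(Editorial aside, not part of the original thread: a quick symbolic sanity check of the algebra in the $\mathbb{P}^1$ answer above, using sympy. The low-degree coefficients f0, f1, g0, g1 below are placeholders introduced here purely for illustration.)

    from sympy import symbols, expand

    t, e = symbols('t epsilon')
    f0, f1, g0, g1 = symbols('f0 f1 g0 g1')

    # 1) The map on T_0 is well defined (epsilon^2 = 0 there):
    prod = expand((t + e) * (1/t - e/t**2))
    assert prod.coeff(e, 0) == 1 and prod.coeff(e, 1) == 0  # so prod = 1 mod epsilon^2

    # 2) The obstruction on T (where epsilon^3 = 0), with placeholder
    #    polynomials f(t) = f0 + f1*t and g(1/t) = g0 + g1/t:
    F = f0 + f1 * t
    G = g0 + g1 / t
    lift = expand((t + e + F * e**2) * (1/t - e/t**2 + G * e**2 / t**4))
    print(expand(lift.coeff(e, 2)))
    # -> f0/t + f1 - 1/t**2 + g0/t**3 + g1/t**4
    # The -1/t**2 term can never be cancelled by polynomials f(t) and g(1/t),
    # matching the "no solution" conclusion in the answer.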
Suppose that $X \to Y$ is a smooth map of varieties, and denote by $Y'$ the relative spectrum of $\mathcal O_Y[t]/(t^2)$. The liftings of $X \to Y$ to $Y'$ are parametrized by $\mathrm H^1(X, \mathrm T_{X/Y})$. Now suppose that $X' \to Y'$ is a lifting and $s\colon Y \to X$ is a section; it is easy to see that the obstruction to lifting $s$ to a section $Y' \to X'$ is the image of the element of $\mathrm H^1(X, \mathrm T_{X/Y})$ in $\mathrm H^1(Y, s^*\mathrm T_{X/Y})$; so to give an example where the section doesn't lift it is enough to give examples in which the map $\mathrm H^1(X, \mathrm T_{X/Y}) \to \mathrm H^1(Y, s^*\mathrm T_{X/Y})$ is not 0. This is easy; for example, one can take $L$ to be a line bundle on $Y$ with $\mathrm H^1(Y,L) \neq 0$, and $f\colon X \to Y$ to be the total space of $L$. In this case what is happening is that $X' \to Y'$ is a non-trivial $L$-torsor, so the trivial section $Y \to L$ does not lift.

Another type of example is of the type suggested by Minhyong. Take $Y$ to be a projective variety over $k$ with $\mathrm H^0(Y, \mathcal O) = k$ and $\mathrm H^1(Y, \mathcal O) \neq 0$; let $Y'$ be as before. There exists a non-trivial line bundle $L'$ on $Y'$ whose restriction to $Y$ is $\mathcal O_Y$; then the only section of $\mathcal O_Y$ that lifts is the zero section.
{"url":"http://mathoverflow.net/questions/22015/example-of-a-smooth-morphism-where-you-cant-lift-a-map-from-a-nilpotent-thicken?sort=votes","timestamp":"2014-04-17T07:27:32Z","content_type":null,"content_length":"82800","record_id":"<urn:uuid:b1548176-ac3c-41f5-8f94-62991321037a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
Henry’s Pool Tables on Global Warming/Cooling

February 21, 2013 in climate change

I took a random sample of weather stations that had daily data. In this respect, random means any place on earth with a weather station with complete or almost complete daily data, subject to the sampling procedure given below.

I made sure the sample was globally representative (most data sets aren't!), which means:

a) The number of weather stations taken from the NH must equal the number of weather stations taken from the SH.
b) The sample must balance by latitude (as close to zero as possible).
c) The sample must also balance 70/30 in or at sea / inland.
d) Longitude does not matter, as in the end we are looking at average yearly temperatures, which include the effect of seasonal shifts and irradiation (plus the earth rotates once every 24 hours), so balancing on longitude is not required.
e) All continents are included (unfortunately I could not get reliable daily data going back 38 years from Antarctica, so there is always this question mark about that, knowing that you can never get a "perfect" sample).
f) I made a special provision for months with missing data: not to put in a long-term average, as is usual in stats, but rather to take the average of that particular month in the preceding year and the year after. This is because we are studying weather patterns which might change over time.

As an example, here you can see the annual average temperatures for New York JFK. You can copy and paste the results of the first 4 columns into Excel. Note that in this particular case you will have to go into the months of the years 2002 and 2005 to see in which months data are missing, and from there apply the correction as indicated by me, plus determine the average temperature for 2002 and 2005 from all twelve months of the year.

g) I did not look only at means (average daily temp.) like all the other data sets, but also at maxima and minima. I determined at all stations the average change in temperature per annum from the average temperature recorded, over the period indicated (least square fits). The figure reported is the value before the x.

The end results at the bottom of the first table (on maximum temperatures) clearly showed a drop in the speed of warming that started around 38 years ago and continued to drop in every other period I looked at. I did a linear fit on those 4 results for the drop in the speed of global maximum temperatures and ended up with y = 0.0018x - 0.0314, with r2 = 0.96.

At that stage I was sure I had hooked a fish: I was at least 95% sure that (maximum) temperatures were falling. I had wanted to take at least 50 samples but decided this would not be necessary with such high correlation.

On the same maxima data, a polynomial fit of 2nd order, i.e. parabolic, gave me y = -0.000049x^2 + 0.004267x - 0.056745. That correlation is very high, showing a natural relationship, like the trajectory of somebody throwing a ball. Backward projection on the above parabolic fit (5 years) showed a curve change happening around 40 years ago. You always have to be careful with forward and backward projection, but you can do so with such high correlation (0.995); ergo, the final curve must be a sine wave fit, with another curve change happening somewhere at the bottom.

Now, I simply cannot be clearer about this. The only bias might have been that I selected stations with complete or near complete daily data. But even that in itself would not affect randomness in my understanding of probability theory.
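(A minimal Python sketch of the two steps described above: patching a missing month from the adjacent years, then taking the slope of a least-squares fit of annual mean temperature against year. The station data here are synthetic placeholders, not real records.)

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1974, 2012)
    # Synthetic monthly mean temperatures (deg C) standing in for a station record:
    seasonal = 12 + 10 * np.sin(2 * np.pi * (np.arange(12) / 12 - 0.25))
    monthly = seasonal - 0.005 * (years - years[0])[:, None] \
              + rng.normal(0, 1, (years.size, 12))
    monthly[5, 3] = np.nan   # pretend one month is missing

    # Missing-month rule from f) above: average the same month in the
    # preceding and following year (edge years would need special handling).
    for y, m in zip(*np.where(np.isnan(monthly))):
        monthly[y, m] = 0.5 * (monthly[y - 1, m] + monthly[y + 1, m])

    annual = monthly.mean(axis=1)                    # average yearly temperature
    slope, intercept = np.polyfit(years, annual, 1)  # least-squares linear fit
    print(f"trend: {slope:+.4f} deg C per annum")    # "the value before the x"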
Either way, you could also compare my results (in the means table) with those of Dr. Spencer, or even those reported by others, and you will find the same 0.14/decade since 1990 or 0.13/decade since 1980. In addition, you can put the speed of temperature change in means and minima in binomials with more than 0.95 correlation. So I do not have just 4 data points for a curve fit; I have 3 data sets with 4 data points each. They each confirm that it is cooling. And my final proposed fit for the drop in maximum temperatures shows it will not stop cooling until 2039.

9 responses to Henry's Pool Tables on Global Warming/Cooling

1. Hi Henry, came by to say hello. I don't do too well with figures; explain some?

Reply: The (black) figures you are looking at in the tables (allow some time to load up) represent the average change in degrees Celsius (or Kelvin) per annum from the average temperatures measured during the period indicated. These are the slopes of the least-square fit equations or "linear trendlines" for the periods indicated, as calculated, i.e. the value before the x. The average temperature data from the stations were obtained from http://www.tutiempo.net. I tried to avoid stations with many missing data. Nevertheless, it is very difficult to find weather stations that have no missing data at all. If a month's data was found missing, or if I found that the average for a month was based on less than 15 days of that month's data, I looked at the average temperatures of that month in the preceding and following year, averaged these, and in this way estimated the temperatures of that particular month's missing data. To understand how I did this you just need to understand first-year statistics. I will come back to you on what these figures all mean by giving cross references to other comments. You can already see that all results for maxima, means and minima went negative some 15 years ago. This means earth is getting cooler now. But don't worry; I don't think we will fall into an ice age just yet. I have good hopes that my A-C curve is correct: http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/ (but where did all my comments go on the above post? I cannot seem to be able to make contact with the administrator of this blog)

Here is a first cross reference.

2. CET temperatures totalled over each sunspot cycle compared with total sunspots: when temperatures are low, maximum temperatures go up with sunspot numbers. When temperatures are high, maximum temperatures go up as the sunspot numbers go down. It appears there is a mechanism trying to keep the temperatures high. The minimum temperatures always do the opposite of the sunspot numbers.

3. Hi Kelvin! I need to see a graph of that or the paired data. I lost a lot of comments; I don't know how that happened or why.

4. I cannot get a plot out of that that makes sense to me. Remember that in a cooling period such as now, CET runs opposite to the A-C wave as determined earlier by me (post went missing), simply because it gets more clouds and precipitation. So, paradoxically, it gets warmer in CET because globally it is getting cooler; it is called the GH effect.

5. Henry, I have been playing with the total of maximum temperatures in a cycle and the total number of sunspots. I have discovered there is a correlation formula in Excel and have been using it after Willis pointed out I should be using it. I have found a correlation of -0.99 between max and sunspots and -0.97 for minimum and sunspots for Cambridge UK. I then tried it on Lerwick and discovered that for the cycles before the one ending in 1964 the correlations are +0.90 and +0.92. For the cycles ending 1964 onwards the correlations are -0.81 and …
I then tried it on Lerwick and discovered that for the cycles before the one ending in 1964 the correlations are +0.90 and +0.92. For the cycles ending 1964 onwards the correlations are -0.81 and I then tried it on the CET and got the same change at the 1964 cycle. I got 0.67 and 0.78 for cycles 1964 on and 0.71 and 0.64 prior to the 1964ending cycle. Do you still want me to upload the graph?If so how do I do it? 6. Hi Kelvin my mum has just passed on to be with the Lord and I have a lot of things to do and on my mind. My son always figures out for me how to upload a graph on my blog, I am not too good at it myself. In any case, you must first start a blog. Just remember: for example, if you do a plot on the drop of maximum temps (last row, first table, above, and you set the average speed of warming/cooling out against time, you can also get a binomial with very high correlation, something like 0.997, but in the end it showed that that plot would lead to such an amount of cooling such as has never seen before. This therefore led me to consider the a-c wave fit for same data: so with high correlation it may show that you are going somewhere but don’t be too quick into predicting the future from your plot…. 7. Sorry to hear about your Mum Henry. It’s not good to loose a family member, especially when it’s your Mum. My thoughts are with you. Leave a reply
{"url":"http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/","timestamp":"2014-04-17T06:42:55Z","content_type":null,"content_length":"58287","record_id":"<urn:uuid:e2e0a4b8-bfbd-423e-a3ec-8a94080e44e5>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Coordinates of Relative Maxima or Minima

Date: 31 Dec 1994 17:58:13 -0500
From: Anonymous
Subject: math problem test

Hi ..... this is just a test..... I heard you solve math problems... so I'm just playing......

Given f(x) = 15x^(2/3) + 5x, find the coordinates of the relative max or min, and for what x values is the function concave down?

Date: 1 Jan 1995 20:31:51 -0500
From: Dr. Ken
Subject: Re: math problem test

Hello there!

Yes, we're a service to students in kindergarten through 12th grade. Students send along math problems that puzzle them, and we try to help them out. Spread the word!

As for your question, here's how you would proceed. You'd take the first derivative of f(x), and then look at which values of x make f'(x) zero, and for which values of x the derivative doesn't exist. These will be the candidates for local (relative) maxima and minima.

We have f(x) = 15x^(2/3) + 5x. So we get f'(x) = 10/[x^(1/3)] + 5. If you're not sure how I got that, let us know. So then you set this expression equal to zero, and then solve for x. In this case, we get x=-8. And the derivative doesn't exist when x=0. So these are our only candidates for local maxima or minima. Plugging in to f, we see that the coordinates of these points are (-8, 20) and (0,0).

But we still need to test whether these really ARE local extrema. So let's do it. The best way is to take the second derivative of the original function, plug in our x-value, and see whether a positive or a negative number pops out. If it's positive there, the function is concave up (think of an upward wind - the positive direction - blowing on the graph of the function), and if it's negative, it's concave down (a downward wind). Since you also want to know all values of x for which this function is concave down, we'll kill two math birds with one stone.

The second derivative of your function is f"(x) = -10/[3x^(4/3)]. So when is this less than zero? We'll set up an inequality to find out.

   -10/[3x^(4/3)] < 0
   3x^(4/3) > 0
   x^(4/3) > 0

Great. This works out nicely. Sometimes it gets really messy when you have fractional exponents. Anyway, when you raise something to the 4/3 power, you take the cube root of it (which you can do with any number, positive or negative) and then you raise the result to the fourth power. So you'll end up with a positive number all the time. Which means this condition on the second derivative is fulfilled all the time, i.e. that the function is concave down everywhere. Except zero.

See, that's the tricky part of this problem. Notice that we can't plug in zero to our second derivative (or even the first derivative, for that matter), since we'd then divide by zero. So we'll have to check zero by hand. It looks like zero is a local minimum, since f(0)=0, and if you plug in anything just to the left or right of zero you get something positive, i.e. bigger than zero (do you know how you could check that and make absolutely sure?). So zero is a local minimum.

So here's what we've got. We've got a function that's concave down everywhere except one point, and it has a local maximum at (-8, 20) and a local minimum at (0, 0). What a neat problem. Note the little sharp point at (0, 0); that's why the derivative doesn't exist there.

Anyway, if you have more questions, please feel free to write back!

-Ken "Dr." Math
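(A quick numeric spot-check of the answer above, added editorially; NumPy's real cube root is used because floating-point x**(2/3) is not defined for negative x.)

    import numpy as np

    def f(x):
        # f(x) = 15 x^(2/3) + 5x, with the real cube root so negative x works
        return 15 * np.cbrt(x)**2 + 5 * x

    print(f(-8.0), f(0.0))              # 20.0 0.0 : the two candidate points
    print(f(-8.1), f(-8.0), f(-7.9))    # smaller on both sides: (-8, 20) is a local max
    print(f(-0.001), f(0.0), f(0.001))  # larger on both sides: (0, 0) is a local min (cusp)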
{"url":"http://mathforum.org/library/drmath/view/53393.html","timestamp":"2014-04-17T13:28:21Z","content_type":null,"content_length":"8268","record_id":"<urn:uuid:72daa711-5ffe-4cd3-b3d3-8dbacaff24aa>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
New DS set!!!

Bunuel (Math Expert):

The next set of medium/hard DS questions. I'll post OAs with detailed explanations after some discussion. Please post your solutions along with the answers.

1. What is the product of three consecutive integers?
(1) At least one of the integers is positive
(2) The sum of the integers is less than 6
Solution: new-ds-set-150653-60.html#p1211902

2. If x and y are both positive integers and x>y, what is the remainder when x is divided by y?
(1) y is a two-digit prime number
(2) x=qy+9, for some positive integer q
Solution: new-ds-set-150653-60.html#p1211903

3. The length of the median BD in triangle ABC is 12 centimeters. What is the length of side AC?
(1) ABC is an isosceles triangle
(2) AC^2 = AB^2 + BC^2
Solution: new-ds-set-150653-60.html#p1211904

4. Two machines, A and B, each working at a constant rate, can complete a certain task working together in 6 days. In how many days, working alone, can machine A complete the task?
(1) The average time A and B can complete the task working alone is 12.5 days.
(2) It would take machine A 5 more days to complete the task alone than it would take for machine B to complete the task
Solution: new-ds-set-150653-60.html#p1211906

5. Set A={3-2x, 3-x, 3, 3+x, 3+2x}, where x is an integer. Is the standard deviation of set A more than the standard deviation of set B={3-2x, 3-x, 3, 3+x, 3+2x, y}?
(1) The standard deviation of set A is positive
(2) y=3
Solution: new-ds-set-150653-60.html#p1211907

6. The ratio of the number of employees of three companies X, Y and Z is 3:4:8, respectively. Is the average age of all employees in these companies less than 40 years?
(1) The total age of all the employees in these companies is 600
(2) The average age of employees in X, Y, and Z, is 40, 20, and 50, respectively.
Solution: new-ds-set-150653-80.html#p1211908

7. Was the average (arithmetic mean) temperature in city A in March less than the average (arithmetic mean) temperature in city B in March?
(1) The median temperature in city A in March was less than the median temperature in city B
(2) The ratio of the average temperatures in A and B in March was 3 to 4, respectively
Solution: new-ds-set-150653-80.html#p1211909

8. Two marbles are drawn from a jar with 10 marbles. If all marbles are either red or blue, is the probability that both marbles selected will be red greater than 3/5?
(1) The probability that both marbles selected will be blue is less than 1/10
(2) At least 60% of the marbles in the jar are red
Solution: new-ds-set-150653-80.html#p1211910

9. If x is an integer, is x^2>2x?
(1) x is a prime number.
(2) x^2 is a multiple of 9.
Solution: new-ds-set-150653-80.html#p1211911

10. What is the value of the median of set A?
(1) No number in set A is less than the average (arithmetic mean) of set A.
(2) The average (arithmetic mean) of set A is equal to the range of set A.
Solution: new-ds-set-150653-80.html#p1211912

Kudos points for each correct solution!!!
NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Kaplan Promo Code Knewton GMAT Discount Codes GMAT Pill GMAT Discount Codes 2. If x and y are both positive integers and x>y, what the remainder when x is divided by y? (1) y is a two-digit prime number (2) x=qy+9, for some positive integer q WoundedTiger Sol: From St1, we get that x/y will be of the form Moderator x/y= x/11, x/13, x/17, x/29, x/37...... Joined: 25 Apr 2012 Consider x = 12, Y =11, remainder 1 Posts: 340 Consider x= 13, Y =11, remainder 2 Location: India So we have 2 ans and hence St 1 not sufficent alone (A & D ruled out) Concentration: Marketing, St 2 International Business x=qy+9, for some positive integer q GMAT Date: 11-07-2013 x/y= q+ 9/y ------> remainder will depend on value of y and hence st 2 not sufficient alone (Option B ruled out) GPA: 3.21 Combining both statement we get that remainder will always be 9 since Y is a 2 digit prime no WE: Business Development (Other) Hence answer should be C Followers: 5 _________________ “Many of life's failures are people who did not realize how close they were to success when they gave up.” “If you can't fly then run, if you can't run then walk, if you can't walk then crawl, but whatever you do you have to keep moving forward.” “Renew, release, let go. Yesterday’s gone. There’s nothing you can do to bring it back. You can’t “should’ve” done something. You can only DO something. Renew yourself. Release that attachment. Today is a new day!” 5. Set A={3-2x, 3-x, 3, 3+x, 3+2x}, where x is an integer. Is the standard deviation of set A more than the standard deviation of set B={3-2x, 3-x, 3, 3+x, 3+2x, y} (1) The standard deviation of set A is positive WoundedTiger (2) y=3 Moderator Sol: Joined: 25 Apr 2012 We know SD >/ 0 and hence Set A is Positive does not tell us about y in statement B Posts: 340 So St 1 is alone not sufficent Location: India St 2: y =3 Concentration: Marketing, Set A : SD \sqrt{10X^2/5} International Business Set B : SD is \sqrt{10x^2/6} GMAT Date: 11-07-2013 Since x is an integer we have SD of A as x \sqrt{2} and SD of B as x\sqrt{5/3} GPA: 3.21 Clearly SD of A is greater than that of B and hence ans should be B WE: Business Development (Other) _________________ Followers: 5 “Many of life's failures are people who did not realize how close they were to success when they gave up.” “If you can't fly then run, if you can't run then walk, if you can't walk then crawl, but whatever you do you have to keep moving forward.” “Renew, release, let go. Yesterday’s gone. There’s nothing you can do to bring it back. You can’t “should’ve” done something. You can only DO something. Renew yourself. Release that attachment. Today is a new day!” 6. The ratio of the number of employees of three companies X, Y and Z is 3:4:8, respectively. 
Is the average age of all employees in these companies less than 40 years? (1) The total age of all the employees in these companies is 600 (2) The average age of employees in X, Y, and Z, is 40, 20, and 50, respectively. WoundedTiger From St 1 we have Moderator 600 = Average age of all employees (A)* No. of Employees(n) Joined: 25 Apr 2012 So Q asks is A<40 Posts: 340 Clearly 1 alone is not sufficient Location: India St 2 : Let the employees in the company be in the ratio 3c :4c: 8c where c is a positive integer Concentration: Marketing, Therefore we have (3c*40+4c*20+ 5c* 50 )/15c = A International Business If c =1 we have (120+80+250)/15 ----> (450/15) = 30 < 40 GMAT Date: 11-07-2013 If c =2 we have ( 3*2*40+ 4*2*20+ 5*2*50/15*2), A = 900/30 < 40 GPA: 3.21 If c= 3, we have (360+240+750)/45 ----> 1350/45 < 40 WE: Business Development (Other) If c= 4, we have ( 480+320+1000)/60 -----> 1800/60 < 40 Followers: 5 Therefore ans should be B “Many of life's failures are people who did not realize how close they were to success when they gave up.” “If you can't fly then run, if you can't run then walk, if you can't walk then crawl, but whatever you do you have to keep moving forward.” “Renew, release, let go. Yesterday’s gone. There’s nothing you can do to bring it back. You can’t “should’ve” done something. You can only DO something. Renew yourself. Release that attachment. Today is a new day!” This post received Expert's post mridulparashar1 wrote: 5. Set A={3-2x, 3-x, 3, 3+x, 3+2x}, where x is an integer. Is the standard deviation of set A more than the standard deviation of set B={3-2x, 3-x, 3, 3+x, 3+2x, y} (1) The standard deviation of set A is positive (2) y=3 We know SD >/ 0 and hence Set A is Positive does not tell us about y in statement B So St 1 is alone not sufficent St 2: y =3 Set A : SD \sqrt{10X^2/5} Set B : SD is \sqrt{10x^2/6} Since x is an integer we have SD of A as x \sqrt{2} and SD of B as x\sqrt{5/3} Math Expert Clearly SD of A is greater than that of B and hence ans should be B Joined: 02 Sep 2009 Notice that x can be 0 for (2), so this statement is NOT sufficient. Posts: 17318 Correct solution is here: Followers: 2876 Links to solutions are here: NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Manager Bunuel wrote: Joined: 27 Jan 2013 3. 
The length of the median BD in triangle ABC is 12 centimeters, what is the length of side AC? Posts: 57 (1) ABC is an isosceles triangle. Clearly insufficient. Location: India (2) AC^2 = AB^2 + BC^2. This statement implies that ABC is a right triangle and AC is its hypotenuse. Important property: median from right angle is half of the hypotenuse, hence BD=12=AC/2, from which we have that AC=24. Sufficient. Concentration: Social Entrepreneurship, Answer: B. Hi Bunnel Schools: ISB '15 Could you elaborate how this is true - GMAT 1: 740 Q50 V40 median from right angle is half of the hypotenuse GPA: 3.51 WE: Other (Transportation) Thanks Followers: 2 Kudos [?]: 17 [0], given: This post received Expert's post Dipankar6435 wrote: Bunuel wrote: 3. The length of the median BD in triangle ABC is 12 centimeters, what is the length of side AC? (1) ABC is an isosceles triangle. Clearly insufficient. (2) AC^2 = AB^2 + BC^2. This statement implies that ABC is a right triangle and AC is its hypotenuse. Important property: median from right angle is half of the hypotenuse, hence BD=12=AC/2, from which we have that AC=24. Sufficient. Answer: B. Hi Bunnel Could you elaborate how this is true - Bunuel median from right angle is half of the hypotenuse Math Expert ?? Joined: 02 Sep 2009 Thanks Posts: 17318 Sure. Followers: 2876 Imagine a right triangle inscribed in a circle. We know that if a right triangle is inscribed in a circle, then its hypotenuse must be the diameter of the circle, hence half of the hypotenuse is radius. The line segment from the third vertex to the center is on the one hand radius of the circle=half of the hypotenuse and on the other hand as it's connecting the vertex with the midpoint of the hypotenuse it's median too. Hope it's clear. NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Hi Bunuel.. How to be sure that A={0,0,0,0} and A={1,2,2,3} are the only sets possible from Statement2. Is there any quick method to find this. infact i was unable to find A Bunuel wrote: 10. What is the value of the media of set A? (1) No number in set A is less than the average (arithmetic mean) of set A. Joined: 02 Sep 2012 Since no number is less than the average, then no number is more than the average, which implies that the list contains identical elements: A={x, x, x, ...}. From this it Posts: 4 follows that (the average)=(the median). 
But we don't know the value of x, thus this statement is NOT sufficient. Followers: 0 (2) The average (arithmetic mean) of set A is equal to the range of set A. Kudos [?]: 0 [0], given: Not sufficient: if A={0, 0, 0, 0}, then (the median)=0, but if A={1, 2, 2, 3}, then (the median)=2. (1)+(2) From (1) we have that the list contains identical elements. The range of all such sets is 0. Therefore, from (2) we have that (the average)=(the range)=0 and since from (1) we also know that (the average)=(the median), then (the median)=0. Sufficient. Answer: C. Expert's post buffaloboy wrote: Hi Bunuel.. How to be sure that A={0,0,0,0} and A={1,2,2,3} are the only sets possible from Statement2. Is there any quick method to find this. infact i was unable to find A Bunuel wrote: 10. What is the value of the media of set A? (1) No number in set A is less than the average (arithmetic mean) of set A. Since no number is less than the average, then no number is more than the average, which implies that the list contains identical elements: A={x, x, x, ...}. From this it follows that (the average)=(the median). But we don't know the value of x, thus this statement is NOT sufficient. (2) The average (arithmetic mean) of set A is equal to the range of set A. Not sufficient: if A={0, 0, 0, 0}, then (the median)=0, but if A={1, 2, 2, 3}, then (the median)=2. (1)+(2) From (1) we have that the list contains identical elements. The range of all such sets is 0. Therefore, from (2) we have that (the average)=(the range)=0 and since Math Expert from (1) we also know that (the average)=(the median), then (the median)=0. Sufficient. Joined: 02 Sep 2009 Answer: C. Posts: 17318 A={0, 0, 0, 0} and A={1, 2, 2, 3} are NOT the only sets possible. For example A={0, 0, 0} and A={1, 2, 3}. You can find these sets by trial and error. Followers: 2876 _________________ NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests 4. Two machines, A and B, each working at a constant rate, can complete a certain task working together in 6 days. In how many days, working alone, can machine A complete the task? (1) The average time A and B can complete the task working alone is 12.5 days. 
Manhnip (2) It would take machine A 5 more days to complete the task alone than it would take for machine B to complete the task Intern Ans B , Joined: 23 May 2013 Given : Ta + Tb = 6 ( time for working together is 6 days) Posts: 29 Please explain if wrong Location: United Kingdom Explanation as WE: Project Management 1 As avg value given for individual working rate, (we want specific value for Ra) , as Ra can take multiple values in this case, stmt not sufficient. (Real Estate) 2. Ta= Tb +5 Followers: 0 & Ta + Tb =6 given in main stmt Kudos [?]: 20 [0], given: 50 therefore Ans B Please correct if wrong Correct me If I'm wrong !! looking for valuable inputs Math Expert Expert's post Joined: 02 Sep 2009 Posts: 17318 Followers: 2876 Bunuel wrote: monirjewel 3. The length of the median BD in triangle ABC is 12 centimeters, what is the length of side AC? Manager (1) ABC is an isosceles triangle. Clearly insufficient. Joined: 06 Feb 2010 (2) AC^2 = AB^2 + BC^2. This statement implies that ABC is a right triangle and AC is its hypotenuse. Important property: median from right angle is half of the hypotenuse, hence BD=12=AC/2, from which we have that AC=24. Sufficient. Posts: 175 Answer: B. Concentration: Marketing, Leadership if ABC is isosceles triangle then all sides are equal. so AC=24. why not? Schools: University of _________________ Dhaka - Class of 2010 Practice Makes a Man Perfect. Practice. Practice. Practice......Perfectly GMAT 1: Q0 V0 Critical Reasoning: best-critical-reasoning-shortcuts-notes-tips-91280.html GPA: 3.63 Collections of MGMAT CAT: collections-of-mgmat-cat-math-152750.html WE: Business Development (Consumer Products) MGMAT SC SUMMARY: mgmat-sc-summary-of-fourth-edition-152753.html Followers: 35 Sentence Correction: sentence-correction-strategies-and-notes-91218.html Kudos [?]: 466 [0], Arithmatic & Algebra: arithmatic-algebra-93678.html given: 182 Helpful Geometry formula sheet: best-geometry-93676.html I hope these will help to understand the basic concepts & strategies. Please Click ON KUDOS Button. Expert's post monirjewel wrote: Bunuel wrote: 3. The length of the median BD in triangle ABC is 12 centimeters, what is the length of side AC? (1) ABC is an isosceles triangle. Clearly insufficient. (2) AC^2 = AB^2 + BC^2. This statement implies that ABC is a right triangle and AC is its hypotenuse. Important property: median from right angle is half of the hypotenuse, hence BD=12=AC/2, from which we have that AC=24. Sufficient. Answer: B. if ABC is isosceles triangle then all sides are equal. so AC=24. why not? (1) says that ABC is an isosceles triangle, not equilateral. Also, if ABC were equilateral AC would be Math Expert not 24. Joined: 02 Sep 2009 Hope it's clear. Posts: 17318 Followers: 2876 NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. 
Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests PUNEETSCHDV 4. Two machines, A and B, each working at a constant rate, can complete a certain task working together in 6 days. In how many days, working alone, can machine A complete the task? Given that 1/A+1/B=1/6, where A is the time needed for machine A to complete the task working alone and B is the time needed for machine B to complete the task working Joined: 31 Aug 2011 alone. Posts: 209 Can anyone please explain how 1/A + 1/B = 1/6 Followers: 2 _________________ Kudos [?]: 35 [0], given: If you found my contribution helpful, please click the +1 Kudos button on the left, I kinda need some =) Expert's post PUNEETSCHDV wrote: 4. Two machines, A and B, each working at a constant rate, can complete a certain task working together in 6 days. In how many days, working alone, can machine A complete the task? Given that 1/A+1/B=1/6, where A is the time needed for machine A to complete the task working alone and B is the time needed for machine B to complete the task working Can anyone please explain how 1/A + 1/B = 1/6 Two machines, A and B, each working at a constant rate, can complete a certain task working together in 6 days. In how many days, working alone, can machine A complete the A is the time needed for machine A to complete the task working alone, thus the rate of A is 1/A job/day. Bunuel B is the time needed for machine B to complete the task working alone, thus the rate of A is 1/B job/day. Math Expert Their combined rate is 1/A+1/B, which given to be equal to 1/6. Joined: 02 Sep 2009 Hope this helps. Posts: 17318 _________________ Followers: 2876 NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Bunuel wrote: 2. If x and y are both positive integers and x>y, what the remainder when x is divided by y? 
If x and y are positive integers, there exist unique integers q and r, called the quotient and remainder, respectively, such that y =divisor*quotient+remainder= xq + r and 0 Amateur \leq{r}<x. Manager (1) y is a two-digit prime number. Clearly insufficient since we know nothinf about x. Joined: 05 Nov 2012 (2) x=qy+9, for some positive integer q. It's tempting to say that this statement is sufficient and r=9, since given equation is very similar to y = divisor*quotient+remainder= xq + r . But we don't know whether y>9: remainder must be less than divisor. Posts: 145 For example: Followers: 1 If x=10 and y=1 then 10=1*1+9, then the remainder upon division 10 by 1 is zero. If x=11 and y=2 then 11=1*2+9, then the remainder upon division 11 by 2 is one. Kudos [?]: 2 [0], given: Not sufficient. (1)+(2) From (2) we have that x=qy+9 and from (1) that y is more than 9 (since it's a two-digit number), so we have direct formula of remainder, as given above. Sufficient. Answer: C. Hello Bunuel, the questions says x is divided by y.... So x will be dividend and y will be divisor.... the very first representation between y and x will confuse a better understanding of the latter explanation. Why don't you switch x and y in the first formula too Bunuel wrote: envoy2210 10. What is the value of the media of set A? Intern (1) No number in set A is less than the average (arithmetic mean) of set A. Joined: 12 Jul 2013 Since no number is less than the average, then no number is more than the average, which implies that the list contains identical elements: A={x, x, x, ...}. From this it follows that (the average)=(the median). But we don't know the value of x, thus this statement is NOT sufficient. Posts: 12 (2) The average (arithmetic mean) of set A is equal to the range of set A. Concentration: Economics, Finance Not sufficient: if A={0, 0, 0, 0}, then (the median)=0, but if A={1, 2, 2, 3}, then (the median)=2. GMAT 1: 700 Q50 V34 (1)+(2) From (1) we have that the list contains identical elements. The range of all such sets is 0. Therefore, from (2) we have that (the average)=(the range)=0 and since from (1) we also know that (the average)=(the median), then (the median)=0. Sufficient. GPA: 3.46 Answer: C.. Followers: 0 Bununel, what if Set A only contains one factor "1"? Kudos [?]: 1 [0], given: 28 _________________ What does't kill you makes you stronger. This post received Expert's post envoy2210 wrote: Bunuel wrote: 10. What is the value of the media of set A? (1) No number in set A is less than the average (arithmetic mean) of set A. Since no number is less than the average, then no number is more than the average, which implies that the list contains identical elements: A={x, x, x, ...}. From this it follows that (the average)=(the median). But we don't know the value of x, thus this statement is NOT sufficient. (2) The average (arithmetic mean) of set A is equal to the range of set A. Not sufficient: if A={0, 0, 0, 0}, then (the median)=0, but if A={1, 2, 2, 3}, then (the median)=2. Bunuel (1)+(2) From (1) we have that the list contains identical elements. The range of all such sets is 0. Therefore, from (2) we have that (the average)=(the range)=0 and since from (1) we also know that (the average)=(the median), then (the median)=0. Sufficient. Math Expert Answer: C.. Joined: 02 Sep 2009 Bununel, what if Set A only contains one factor "1"? Posts: 17318 The range of one element set is 0. If set A={1}, then it's range (0) does not equal to its mean (1). 
Thus this example contradicts the second statement and therefore is not Followers: 2876 valid. Hope it's clear. NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis NEW!!!; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS) NEW!!!; 12. Tricky questions from previous years. NEW!!!; COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7 Tough and tricky exponents and roots questions; 8 12 Easy Pieces (or not?); 9 Bakers' Dozen; 10 Algebra set. ,11 Mixed Questions, 12 Fresh Meat DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7 Tough and tricky exponents and roots questions; 8 The Discreet Charm of the DS ; 9 Devil's Dozen!!!; 10 Number Properties set., 11 New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests Bunuel wrote: envoy2210 wrote: Bunuel wrote: 10. What is the value of the media of set A? (1) No number in set A is less than the average (arithmetic mean) of set A. Since no number is less than the average, then no number is more than the average, which implies that the list contains identical elements: A={x, x, x, ...}. From this it Joined: 12 Jul 2013 follows that (the average)=(the median). But we don't know the value of x, thus this statement is NOT sufficient. Posts: 12 (2) The average (arithmetic mean) of set A is equal to the range of set A. Concentration: Economics, Not sufficient: if A={0, 0, 0, 0}, then (the median)=0, but if A={1, 2, 2, 3}, then (the median)=2. (1)+(2) From (1) we have that the list contains identical elements. The range of all such sets is 0. Therefore, from (2) we have that (the average)=(the range)=0 and since GMAT 1: 700 Q50 V34 from (1) we also know that (the average)=(the median), then (the median)=0. Sufficient. GPA: 3.46 Answer: C.. Followers: 0 Bununel, what if Set A only contains one factor "1"? Kudos [?]: 1 [0], given: The range of one element set is 0. If set A={1}, then it's range (0) does not equal to its mean (1). Thus this example contradicts the second statement and therefore is not 28 valid. Hope it's clear. OMG, why can I forget this? thank you What does't kill you makes you stronger. Bunuel wrote: PUNEETSCHDV wrote: 4. Two machines, A and B, each working at a constant rate, can complete a certain task working together in 6 days. In how many days, working alone, can machine A complete the task? Given that 1/A+1/B=1/6, where A is the time needed for machine A to complete the task working alone and B is the time needed for machine B to complete the task working Can anyone please explain how 1/A + 1/B = 1/6 Two machines, A and B, each working at a constant rate, can complete a certain task working together in 6 days. In how many days, working alone, can machine A complete the Senior Manager task? Joined: 06 Aug 2011 A is the time needed for machine A to complete the task working alone, thus the rate of A is 1/A job/day. Posts: 402 B is the time needed for machine B to complete the task working alone, thus the rate of A is 1/B job/day. 
PUNEETSCHDV wrote: 4. Two machines, A and B, each working at a constant rate, can complete a certain task working together in 6 days. In how many days, working alone, can machine A complete the task? Given that 1/A + 1/B = 1/6, where A is the time needed for machine A to complete the task working alone and B is the time needed for machine B to complete the task working alone. Can anyone please explain how 1/A + 1/B = 1/6?

Bunuel wrote: A is the time needed for machine A to complete the task working alone, thus the rate of A is 1/A job/day. B is the time needed for machine B to complete the task working alone, thus the rate of B is 1/B job/day. Their combined rate is 1/A + 1/B, which is given to be equal to 1/6. Hope this helps.

sanjoo wrote: In the question A + B = 6, but in statement (1) it's A + B = 25? Don't these two contradict each other? Seems like I'm missing something. :/

Bunuel wrote: What are A and B in your first equation? What are A and B in your second equation? If you don't pay attention to what a variable represents, you will get each and every question wrong. From the stem: 1/A + 1/B = 1/6, where A is the time needed for machine A to complete the task working alone and B is the time needed for machine B to complete the task working alone. From (2): A + B = 25. Hope it helps.
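For what it's worth, here is the algebra when the stem is combined with A + B = 25 (my illustration; it shows the mechanics only, not the full data-sufficiency verdict, since the other statement isn't quoted above): 1/A + 1/B = (A + B)/(AB) = 25/(AB) = 1/6 gives AB = 150, so A and B are the roots of t^2 - 25t + 150 = 0, i.e. 10 and 15 in some order.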
{"url":"http://gmatclub.com/forum/new-ds-set-150653-100.html?sort_by_oldest=true","timestamp":"2014-04-19T16:02:11Z","content_type":null,"content_length":"319399","record_id":"<urn:uuid:2ae47cda-7f87-4671-908b-ebf13c6c846a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
Splitting field of an irreducible polynomial

June 24th 2009, 01:38 PM #1
Suppose $K$ is the splitting field of an irreducible polynomial $p(x) \in \mathbb{Z}[x]$. What is a general condition to have $[K:\mathbb{Q}] = \mbox{deg } p(x)$? Is there a systematic way to test whether that is the case, given $p(x)$?

June 25th 2009, 12:14 AM #2
MHF Contributor
May 2008
As far as I know, there's no general condition in this case, but if we replace $\mathbb{Q}$ with any finite field, then the result is always true. (The proof is quite easy!)
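A concrete contrast may help here (my addition, not from the original thread): for $p(x) = x^2+1$ the splitting field is $\mathbb{Q}(i)$, so $[K:\mathbb{Q}] = 2 = \deg p$; but for $p(x) = x^3-2$ the splitting field is $\mathbb{Q}(\sqrt[3]{2}, \omega)$ with $[K:\mathbb{Q}] = 6 > 3$. One way to restate the condition: $[K:\mathbb{Q}] = \deg p$ holds exactly when adjoining a single root $\alpha$ already splits $p$, i.e. when $\mathbb{Q}(\alpha)/\mathbb{Q}$ is a normal extension. That is a characterization rather than an easy test on $p$ itself, which is consistent with the reply above.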
{"url":"http://mathhelpforum.com/number-theory/93660-splitting-field-irreducible-polynomial.html","timestamp":"2014-04-21T07:17:11Z","content_type":null,"content_length":"36425","record_id":"<urn:uuid:a9cb355b-ad73-42c6-8b25-9789bd052dac>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
3-year BA Physics

The 3-year BA Honours Course. This course provides a general education in the basic principles of physics, their formulation and manipulation in mathematical terms, and their application to experiments in the laboratory. This is probably the more appropriate degree for those not seeking a career in physics itself, e.g. those heading into teaching or commerce.

Physics courses investigate the basic principles of modern physics with a strong emphasis on its mathematical foundation. They also include a significant amount of experimental work and the possibility of studying a non-physics subject. There is also a common emphasis on individual development, discussion and the ability to work with others in the laboratory.

Course Structure: The first year (foundation) and second year (core physics) courses are the same for both the BA and the MPhys. In the third year, BA students choose some of the third year subjects, and do a project. In each of years one, two and three, all students choose additional 'Short Options' from a range of courses.

First Year
In their first year, students on both courses (BA and MPhys) cover five subjects, four of which are compulsory. Two subjects cover fundamental areas of 'classical' physics: the mechanics of particles, special relativity, and the physics of electric and magnetic fields. A third subject covers differential equations, waves and elementary optics. The fourth subject is mathematical methods, including vectors and calculus. These four subjects provide a firm foundation for the rest of both courses. The fifth subject is chosen from a range of possible Short Options, which may change from year to year but are likely to include topics such as quantum ideas, additional mathematics and subjects from other physical sciences.

Practical work
Practical work complements lectures and tutorials and introduces students to areas that may be less familiar. For two terms of the first year, students spend one day each week working in pairs in the practical laboratories, on practicals such as computing, electronics, optics and general physics. A new course on computer programming and numerical methods combines lectures with hands-on work in the computing laboratory.

1st year Exams (Prelims)
Towards the end of the first year, students take an examination consisting of five papers, one in each of their chosen subjects. Students must pass the written exam and have a satisfactory record of practical work before they can proceed to the second year. In particular, each of the papers on the four compulsory subjects must be passed.

Second Year
The second year course provides a common core for both the BA and MPhys degrees. It develops the techniques and knowledge acquired in the first year. Electromagnetism, optics and mathematical methods are extended, and further core topics such as quantum physics and thermal physics are covered in some depth. A short optional subject is studied towards the end of the year. Current subjects include energy studies, more advanced theoretical topics, and a language or teaching option.

Practical work
Practicals occupy two days a fortnight in the second and third years. Students normally do a total of 12 days, but there are a number of alternatives for some of it. For example, the Teaching Physics in Schools short option involves working with a physics teacher in a local school for one half-day each week, plus research into the learning of physics in school. Half the practical work may be substituted by a second short option.
It is also possible to do extra practical work, as additional experiments or as a mini-project, in place of a short option.

2nd year exams (Part A)
Three written papers on the core topics, plus a short option paper and practical work, form the Part A exam at the end of the second year. Those who wish to take the four-year MPhys degree must meet a minimum standard comparable to 2:1 honours in this exam.

Third Year
In the third year the BA and MPhys courses diverge. Six modules are offered: Flows, fluctuations and complexity; Symmetry and relativity; Quantum, atomic and molecular physics; Sub-atomic physics; General relativity and cosmology; and Condensed-matter physics. Students choose four of these modules and undertake a project in their final term. Physics students will also take a short option and relevant practical work.

3rd year exams (Part B)
Students are expected to sit papers on four of the six modules, plus a short option paper, satisfactory practical work and a project report. The BA honours degree classification is made on the combined results from the Part A and B exams.
{"url":"http://www2.physics.ox.ac.uk/study-here/undergraduates/the-courses/3-year-ba-physics","timestamp":"2014-04-19T17:01:35Z","content_type":null,"content_length":"18600","record_id":"<urn:uuid:b30f6e94-304c-4ba3-8cc5-050c4514d58a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Tutors Surprise, AZ 85374

Mathematics, Physics, and Writing Tutor
...from Rutgers University, and worked for many years as a director of research in private industry. I now tutor privately in English essay, dissertation, and report writing, although most of my students ask for my help in Physics, Basic Algebra, Geometry, Trigonometry,...
Offering 10+ subjects including physics
{"url":"http://www.wyzant.com/Sun_City_West_Physics_tutors.aspx","timestamp":"2014-04-21T12:48:07Z","content_type":null,"content_length":"59154","record_id":"<urn:uuid:6d5e9e7b-ea36-41dc-9f34-e8bc0f86b657>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Pythagenport teams records

mweir145 wrote: Last year, the Jays had a horrible record in one run games, and that is probably one of the main reasons why they finished so far behind in the standings next to the Sox and Yankees. I guess you could say they're "unlucky," but good teams find ways to win those close games.

I would argue the opposite. Each year, the teams with the best record in one run games tend to be the ones with the best records overall. This is not because they somehow "find a way to win the close ones." They have the best record because they got lucky and had a good record in one run games. A very large portion of teams that have a good record in one run games one year fall back to the average (about .500) in one run games the next year. This is the case whether or not the team improves its personnel. The opposite is true for teams that are unlucky in one run games: they usually improve to the average the next year. That is why, if your team was extremely lucky in one run games this year (like the White Sox), you can expect a decrease in wins the next year unless you improve your personnel.

Again I ask: Is a team who wins every game by one run better than a team who wins half their games by five runs but loses the other half by one run?
Your wisemen don't know how it feels to be thick as a brick...

My Rangers were pretty unlucky last year. Now it's time for that luck to roll back around!
Yes doctor, I am sick. Sick of those who are spineless. Sick of those who feel self-entitled. Sick of those who are hypocrites. Yes doctor, an army is forming. Yes doctor, there will be a war. Yes doctor, there will be blood.....

Is it me, or do bullpens and pinch hitters make a giant difference in the outcome of 1-run games? I think some of it can be attributed to luck, but a lot of 1-run victories are such because of other factors.

davidmarver wrote: Is it me, or do bullpens and pinch hitters make a giant difference in the outcome of 1-run games? I think some of it can be attributed to luck, but a lot of 1-run victories are such because of other factors.

Pitching (both the starters and the bullpen) has the greatest effect on how teams do against their Pythag prediction. The better your pitching, the more often you will meet or exceed your Pythag. In terms of 1-run games pitching is a major key, and since bullpens are being used in so many innings now it has become as big a factor as the starters. I don't think there is really much correlation between PH and the outcomes.
Bury me a Royal.

davidmarver wrote: Is it me, or do bullpens and pinch hitters make a giant difference in the outcome of 1-run games? I think some of it can be attributed to luck, but a lot of 1-run victories are such because of other factors.

This is true, but a team with a higher run differential is usually better. I posed this question on another board about sports in general. This is a more football-oriented explanation, but just change all the terms to baseball terms and it makes sense (more, in fact, since baseball is governed by strict probability on a greater level):

"Wow. To continue with baseball: Every batted ball has a set of probabilities governing it. Aside from very minimal control of direction due to swing speed/timing, the batter has no ability to control where the ball goes. As such, and considering the defense, it may or may not fall in for a hit.
Because of this, every batted ball of Velocity A has roughly the same chance of falling in for a hit, given that the defensive team remains the same (no stamina loss, etc.). What makes a good hitter is hitting more balls, yes. However, it's still always a probability issue. If you're a .333 hitter, every at-bat gives you a one in three chance that you'll end up on base (or hitting a home run). You can't control that any more than I can control a coin flip -- it's going to come out relatively even, given enough chances, but that doesn't mean that it's not going to land heads fifty times then tails fifty times. Every time you swing, it's the same probability.

Though there's not the same element of ball-goes-anywhere-at-even-probability in football, if you assume static conditions, a 60% completion rate is always 60%. You have a three-in-five chance of completing a pass. Again, the coin analogy.

Now, let's take the coin analogy further, and say that heads is scoring, tails is not scoring. Each team has a given chance of scoring in any given situation. Say a team gets four flips per quarter/inning/whatever. In football, that's 16 per game. Let's say Team A has a 75% score rate, and Team B has a 50%. Game 1, A goes 8-8, B goes 7-9. Game 2, A goes 16-0, B goes 9-7. It continues this way throughout the entire season. By the end, we now have two .500 teams. But which is better? The one who scored 50% of the time or the one who scored 75% of the time?

Okay, same thing, same assumptions: Team B has a 9/16 chance of scoring, Team C has a 50% chance. Every single game, B wins 9-8 over C. Is B now a superior team, even though their scoring rate is 56.25% compared to A's 75%? Remember, we're assuming absolutely static conditions, so probabilities are all that matter. A is going to be considered the better team.

Now realize, we've already allowed for variability from the environment. In that first example, A had good days and bad days. Sure, it's never going to happen in quite that manner -- over the course of multiple seasons, they're more and more likely to go 12-4. But, sometimes they'll go 8-8, sometimes they'll go 16-0. Remember, same exact team. Same exact players, same exact skill level. The only difference between years is that players have different peaks and valleys, and if all of those players have a bad day at the same time, it's going to cause the team to lose."
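The quoted thought experiment is easy to simulate. Here is a short Python sketch (my addition, not from the thread; the 16 scoring chances per game and the 75% and 9/16 rates come from the quote, everything else is arbitrary):

    import random

    def one_game(p, chances=16, rng=random):
        # One game: `chances` scoring opportunities, each succeeding with probability p.
        return sum(rng.random() < p for _ in range(chances))

    random.seed(0)
    a_scores = [one_game(0.75) for _ in range(100)]    # Team A
    b_scores = [one_game(9/16) for _ in range(100)]    # Team B
    head_to_head = sum(sa > sb for sa, sb in zip(a_scores, b_scores))
    print(head_to_head)   # A wins most head-to-head games, but single games still vary

Running this a few times with different seeds makes the quote's point: the underlying rates never change, yet individual games, and even short stretches of a season, swing noticeably.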
*** Final 2005 MLB Statistics - Records in One Run Games ***
(Complete through Sunday, October 2nd) From The Sports Network

National League              Home      Away      Overall
                             W   L     W   L     W   L
Arizona Diamondbacks         19  10    9   8     28  18
Atlanta Braves               14  5     9   15    23  20
Chicago Cubs                 14  9     12  11    26  20
Cincinnati Reds              15  6     6   12    21  18
Colorado Rockies             11  8     14  16    25  24
Florida Marlins              13  8     7   15    20  23
Houston Astros               16  9     9   12    25  21
Los Angeles Dodgers          12  10    8   13    20  23
Milwaukee Brewers            15  6     6   15    21  21
New York Mets                14  7     7   17    21  24
Philadelphia Phillies        14  13    7   10    21  23
Pittsburgh Pirates           8   12    7   16    15  28
St. Louis Cardinals          12  13    9   12    21  25
San Diego Padres             20  8     9   12    29  20
San Francisco Giants         13  11    14  14    27  25
Washington Nationals         15  13    15  18    30  31

American League              Home      Away      Overall
                             W   L     W   L     W   L
Baltimore Orioles            6   10    8   15    14  25
Boston Red Sox               15  1     12  14    27  15
Chicago White Sox            16  7     19  12    35  19
Cleveland Indians            12  20    10  16    22  36
Detroit Tigers               12  12    10  14    22  26
Kansas City Royals           14  16    4   14    18  30
LA Angels of Anaheim         20  8     13  18    33  26
Minnesota Twins              16  12    11  18    27  30
New York Yankees             18  5     9   11    27  16
Oakland Athletics            16  9     10  15    26  24
Seattle Mariners             17  9     9   14    26  23
Tampa Bay Devil Rays         18  10    11  15    29  25
Texas Rangers                12  11    12  18    24  29
Toronto Blue Jays            9   15    7   16    16  31

I think that if a team has a poor one-run record, it just means they are inconsistent. Every team's going to have their ups and downs... but teams like the Angels, who have struggled offensively at times, find ways to win with their smallball offense and solid bullpen... Experience may factor into the equation as well... as the Indians and Blue Jays (both were terrible in one-run games) were very young. The Red Sox, Yanks, Angels, White Sox are all experienced teams and thus had the good records in one run games... Other than that, I don't see much of anything I can take away from the one-run records... Maybe just the outright will to win is why teams have better records in one-run games... I don't know!

New York Yankees: 27-16
Boston Red Sox: 27-15
Toronto Blue Jays: 16-31

OhMrScottyTrav06 wrote: Experience may factor into the equation as well... as the Indians and Blue Jays (both were terrible in one-run games) were very young. The Red Sox, Yanks, Angels, White Sox are all experienced teams and thus had the good records in one run games...

I agree, experience seems to help teams pull out those tough ball games. I found an article on this... http://www.hardballtimes.com/main/artic ... run-games/

"Good teams win more one-run games. Here's a graph of the winning percentage of all teams in each of the past five years, according to the margin of victory in each game. I've combined seasonal teams into five different groups based on their overall record. As you can see, a team's true talent emerges as the margin of a game increases. One-run games do tend to bring all teams closer to .500, but the best teams still win one-run games more often than other teams. Bill James published an article three years ago in which he reviewed Tom Ruane's article, and added the useful insight that a team's record in one-run games can be projected by the ratio of its runs scored to runs allowed, each raised to the power of .865. In other words, he used the Pythagorean formula, but used .865 instead of 2 as the exponent. So, in essence, the Pythagorean formula actually captures the notion that good teams generally win more one-run games. But it obviously won't capture unexpected swings in one-run game outcomes. And as we've said, wild swings do occur."
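The Bill James variant quoted above is a one-liner to compute. A small Python sketch (my addition; the 800 runs scored / 700 runs allowed figures are made-up placeholders, not real 2005 data):

    def one_run_winning_pct(runs_scored, runs_allowed, exponent=0.865):
        # The quoted Bill James variant: Pythagorean formula with exponent .865.
        rs = runs_scored ** exponent
        ra = runs_allowed ** exponent
        return rs / (rs + ra)

    print(round(one_run_winning_pct(800, 700), 3))   # about 0.529 for a sample 800/700 team

Note how the small exponent flattens the projection toward .500, which is exactly the article's point about one-run games.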
{"url":"http://www.fantasybaseballcafe.com/forums/viewtopic.php?t=162037&start=10","timestamp":"2014-04-19T20:33:00Z","content_type":null,"content_length":"90627","record_id":"<urn:uuid:e071bbd8-2fa4-43ae-9f8e-cfeab74e519c>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the coordinate of the fourth vertex of a rectangle with three vertices at (-7, 5), (3, -1), and (-7, -1).

Best Response: Let's say A(-7, 5), B(-7, -1), C(3, -1) and D(x, y). You can see that x(A) = x(B) and that the distance between A and B is 6 units, so x(C) = x(D) = 3, and y(D) = 5 so that the distance between C and D can be 6 as well. Sorry, slow connection. :/

Follow-up: How would you find the area of a rectangle?
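A quick way to check this (my note, not part of the thread): in a rectangle the diagonals bisect each other, so the missing vertex is D = A + C - B = (-7 + 3 - (-7), 5 + (-1) - (-1)) = (3, 5).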
{"url":"http://openstudy.com/updates/505f5734e4b0583d5cd1fad2","timestamp":"2014-04-16T22:30:57Z","content_type":null,"content_length":"50164","record_id":"<urn:uuid:bbaeaf9c-1d3d-4149-9a6d-aa4ec49678bc>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
Function as parameter for Module

How is it possible to use a mathematical function as a Module parameter? Such as:

PersonalPlot[fun0_, var0_, min0_, max0_] :=
 Module[{fun = fun0, var = var0, min = min0, max = max0},
  (* this is incorrect *)
  fun = fun[var_];
  Plot[fun, {var, min, max}]

1 Answer
– Franz Ebner Jan 16 '13 at 16:18 HPM, I wish you would post more often at Mathematica.SE. – Mr.Wizard Jan 20 '13 at 23:37 add comment Not the answer you're looking for? Browse other questions tagged wolfram-mathematica or ask your own question.
{"url":"http://stackoverflow.com/questions/14361747/function-as-parameter-for-module","timestamp":"2014-04-20T21:28:07Z","content_type":null,"content_length":"68773","record_id":"<urn:uuid:b2c6e3a0-2d49-493b-888a-5164482d1415>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00525-ip-10-147-4-33.ec2.internal.warc.gz"}
Graph Theory | Mathigon

Bridges in Königsberg

This story is about the Swiss mathematician Leonhard Euler (1707 – 1783) and the Prussian town of Königsberg on the Baltic Sea. Euler thought about the following problem: Königsberg is divided into four parts by the river Pregel, and connected by seven bridges. Is it possible to tour Königsberg along a path that crosses every bridge once, and at most once? You can start and finish wherever you want, not necessarily in the same place.

Trying various paths very quickly suggests that it is impossible: you always end up with a bridge you haven't crossed, but with no way to get there without crossing another bridge twice. But it would take a very long time to try every possible path…

Have a look at these cities and try to find the same kind of paths – crossing every bridge exactly once. If you can find one path, try to find a second one in the same city. Not all cities are possible. Read on for the explanation!

Before we can solve the Königsberg Bridges problem, we have to think about an area of mathematics called graph theory.

Graph Theory

A graph is a collection of vertices (points) that are connected by edges (lines). Some graphs can be directed, which means the lines have an arrow and only go in one direction. Some (rather boring) graphs can have no edges at all, in others the edges could overlap, and there can be many edges coming out of the same vertex.

Here is a very simple question we could ask about graphs: Which graphs can be drawn without lifting the pencil off the paper and without drawing any line twice?

Leonhard Euler observed that some graphs can be drawn in such a way, some even have many possibilities, while others can't. Try the following examples and see whether you can find a pattern.

Here are a couple of simple graphs. Determine which ones can be drawn without lifting the pencil and drawing a line twice. Now think about how many edges meet at every vertex and try to find a pattern. (Hint: it may depend on the number of vertices with an odd number of edges.) Can you work out what's going on? Read on to see the explanation…

Not all graphs can be drawn in this way: whether a graph can be drawn or not depends on the number of vertices with an odd number of edges.

Above you may have observed that graphs which have more than two "odd" vertices can't be drawn in the way we want. For the explanation we have to look at one individual vertex and see what happens to it as we connect the vertices of the graph.

Initially there are no edges going through our vertex. At some point we will connect our vertex to the remainder of the graph, and unless it is the final vertex of the path we draw, we will draw another edge going away from our vertex. So in total we have added two edges. Maybe we will return to our vertex a second time. But again we have to move away again, so we add two more edges. You see that edges are always added in pairs, so we know that if vertices in our graph have an odd number of edges, the graph can't possibly be drawn. The only exceptions are the vertices where we start and where we finish our path. If we start and end at two different vertices, only these two will have an odd number of edges. If we start and finish at the same vertex, then all vertices are even. This makes it possible to determine whether graphs can be drawn just by looking at them – and we are even told where to start!
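To make the rule concrete, here is a small Python sketch (my addition, not part of the article) that counts odd-degree vertices; it assumes the graph is connected, as in the examples above:

    from collections import Counter

    def one_stroke_possible(edges):
        # Count the degree of each vertex: how many edge ends meet there.
        degree = Counter()
        for u, v in edges:
            degree[u] += 1
            degree[v] += 1
        # A connected graph can be drawn in one stroke exactly when it has
        # zero or two vertices of odd degree (zero means start = finish).
        odd = [v for v, d in degree.items() if d % 2 == 1]
        return len(odd) in (0, 2)

    # The seven bridges of Königsberg; A, B, C, D are the four land masses.
    bridges = [("A","B"), ("A","B"), ("A","C"), ("A","C"), ("A","D"), ("B","D"), ("C","D")]
    print(one_stroke_possible(bridges))   # False: all four vertices have odd degree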
Bridges in Königsberg revisited

Having solved the Drawing Graphs problem, we can now solve the Königsberg Bridges problem by converting the map of Königsberg into a graph: every "island" is represented by a vertex, and every bridge connecting two islands is represented by an edge connecting the two points. Here is what we get:

Finding a tour through Königsberg that crosses every bridge, but no bridge twice, is exactly the same as finding a way to draw the graph without crossing a line twice. If we count the number of edges that meet at each vertex, we see that there are more than 2 odd vertices. Therefore such a drawing or tour does not exist in Königsberg.

Now you can go back to the city maps in the first exercise, convert them into graphs and determine whether they can be toured or not.

Graph theory is now a very important part of mathematics with countless applications in technology, computing and science. Some important ideas in graph theory are outlined in the following sections.

Travelling Salesman Problem

When trying to find a route for the Königsberg Bridges problem we didn't think about how long it would take. But in many cases this is of great importance, and we want to find the shortest possible route. This is called the Travelling Salesman problem: A salesman has to visit a number of cities to deliver items. What is the shortest route that connects all the cities?

Of course we need to know how many cities there are and what the distances between any two cities are. But even with all that information the Travelling Salesman problem is still very difficult. We could just try every possible path, but with many cities this becomes very impractical. If we have 10 cities to visit there are more than 3 million possible orders to visit them in! Testing 100 cities in that way would take longer than the age of the universe!

Visiting 100 cities does not happen very often in real life, but it does happen if you plan airline networks or want to optimise production lines in factories. There are many other important applications, which you can read about below. Unfortunately mathematicians have not been able to find a way to solve the Travelling Salesman problem in a much more efficient way. Problems like this, which take a very long time to solve using a computer, are called NP-hard (Non-deterministic Polynomial-time hard).

If we are connecting n cities there are n! (n-factorial) different routes (this is explained in the article on Combinatorics). The most efficient computer algorithm takes of the order of 2^n steps. This is not very useful, since the number of steps still increases exponentially with n. Instead there are approximate algorithms to solve the Travelling Salesman problem and similar ones. These are much faster and usually give very good results, but sometimes they miss the very best solution.

Travelling Salesman Simulator ... coming soon
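For readers who want to see the factorial blow-up first-hand, here is a minimal brute-force sketch in Python (my addition, not part of the article; the city names and the distance table dist[a][b] are whatever you supply):

    from itertools import permutations

    def shortest_route(cities, dist):
        # Brute force: fix the first city and try every order of the rest,
        # which is (n-1)! routes -- already about 87 billion for n = 15.
        start, rest = cities[0], cities[1:]
        best, best_len = None, float("inf")
        for order in permutations(rest):
            route = [start, *order]
            length = sum(dist[a][b] for a, b in zip(route, route[1:]))
            if length < best_len:
                best, best_len = route, length
        return best, best_len

This exhaustive approach is only practical for a dozen or so cities, which is exactly why the approximate algorithms mentioned above matter in practice.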
Four Colour Theorem

Here is another famous problem that is related to Graph Theory: We want to colour a map of countries so that no two adjacent countries get the same colour. What is the maximum number of colours we need to be able to colour any map?

Here is an example that works with four colours. Notice that it wouldn't work with only three colours: eventually we would always arrive at a point where we cannot colour a state because all three colours have already been used on its neighbours. But are four colours always enough? Or are there certain maps for which we need at least five colours?

In 1852, Francis Guthrie together with Augustus De Morgan (1806 – 1871) conjectured that four colours are enough to colour any map: the four colour theorem. And while the problem is easy to understand, proving it turned out to be rather more difficult. There were several early proofs, and in some cases it took more than 10 years until a mistake was found. It is reasonably simple to prove that five colours are always enough – but the four colour problem remained unsolved for many years.

The four colour theorem can be translated into graph theory: we convert every country into a vertex, and whenever there is a border between two countries we connect their vertices with an edge. Now we want to colour the graph so that vertices connected by an edge get different colours.

Mathematicians thought about the four colour theorem for a very long time. Unfortunately they were neither able to find a map for which 4 colours are not enough, nor were they able to come up with a proof that four colours are always enough. Any map or graph we try does work with four colours, but of course we can't try all infinitely many maps.

In 1976 the mathematicians Kenneth Appel and Wolfgang Haken considered a total of 1936 smaller "subgraphs" which make up any graph corresponding to a map. They checked each of these cases on a computer and deduced that the four colour theorem is indeed true for any map. This was the first significant example in history of a computer being used to prove a mathematical theorem. It was subject to much controversy: how can you be absolutely sure that the computer didn't make a mistake? Today there are a number of simpler and more trusted proofs.

Colouring Maps Game ... coming soon

Euler's Formula

Remember Leonhard Euler, the mathematician who invented Graph Theory? He played around with many different graphs and discovered another interesting property related to the number of vertices, edges and faces. This applies to planar graphs, graphs which can be drawn so that no edges overlap unless they meet at a vertex. Count the number of vertices, edges and faces in these graphs; for four examples the counts are:

Faces               5    4    5    12
Vertices            6    6    8     9
Edges              10    9   12    20
Faces + Vertices   11   10   13    21

If you have a look at the last two rows of the table above you will notice that "faces + vertices" is always one more than the number of edges, or that

faces + vertices = edges + 1

Euler managed to prove that this is true for all planar graphs, and it is known as Euler's formula. In the article on Polygons and Polyhedra we found a very similar formula for the number of edges, vertices and faces of 3-dimensional solids, but the 1 in the equation above was replaced by a 2. We could imagine that 3-dimensional polyhedra are graphs on the surface of a sphere. We say that the Euler Characteristic of a flat surface is 1 and the Euler Characteristic of a sphere is 2.

Applications of Graph Theory

Everything in our world is linked: cities are linked by street, rail and flight networks. Pages on the internet are linked by hyperlinks. The different components of an electric circuit or computer chip are connected, and the paths of disease outbreaks form a network. Scientists, engineers and many others want to analyse, understand and optimise these networks. And this can be done using graph theory.

For example, mathematicians can apply graph theory to road networks, trying to find a way to reduce traffic congestion.
If successful, this could save the millions of hours that are lost every year to time spent on the road, as well as mitigating the enormous environmental impact. It could also make life safer by allowing emergency services to travel faster and by avoiding car accidents in the first place. These Intelligent Transportation Systems could work by collecting location data from the smartphones of motorists and telling them where and how fast to drive in order to reduce overall congestion.

Graph theory is already utilised in flight networks. Airlines want to connect countless cities in the most efficient way, moving the most passengers with the fewest possible trips: a problem very similar to the Travelling Salesman. At the same time, air traffic controllers need to make sure hundreds of planes are at the right place at the right time and don't crash: an enormous task that would be almost impossible without computers and graph theory.

One area where speed and the best connections are of crucial importance is the design of computer chips. Integrated circuits (ICs) consist of millions of transistors which need to be connected. Although the distances are only a few millimetres, it is important to optimise these countless connections to improve the performance of the chip.

Graph theory also plays an important role in the evolution of animals and languages, crowd control and the spread of diseases.

Digital Graphs

In recent years, there has been another important use of Graph Theory: the internet. Every page on the internet could be a vertex in a graph, and whenever there is a link between two pages, there is an edge between the corresponding vertices. The resulting graph is of course very, very large.

Early web search engines had a very big problem: they could search the web for a particular keyword, but they couldn't determine whether a page is "good" or just spam. If you searched for 'London', you might get hundreds of websites of small shops in London, or people who live there, before the official london.gov website. Google found a solution to this: any page that is very good will have many other pages linking to it. Pages that are rarely visited, or not very interesting, will be very "lonely" in the internet graph, with only a few other pages linking to them. This gives a way to rank websites and allows Google to display the best results at the top of the list.

There is another digital graph, of which you yourself are a part: Facebook. All the users form vertices, and whenever two users are friends they are linked by an edge. Graph theory can help web developers improve the performance of social networking sites, and it can help us understand Facebook better.

If I were to choose two users of Facebook completely at random, how many of these friendship edges do I need to get from one to the other? 10? 100? Surprisingly the answer, on average, is only six. On average you are less than six friends away from any other Facebook user: that's globalisation! Experiments have been conducted to test this, and the longest distance ever found on Facebook was only 12. The number of Degrees of Separation has inspired countless writers and film makers. How many degrees are you away from the Queen? Or the American president? Nobody knows the exact answer, but it is probably a lot less than you think!
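The "degrees of separation" between two users is just the length of a shortest path in the friendship graph, which breadth-first search finds. A minimal Python sketch (my addition, not part of the article; the four-person network is invented purely for illustration):

    from collections import deque

    def degrees_of_separation(friends, a, b):
        # Breadth-first search: the first time b is reached, the path is shortest.
        seen = {a: 0}
        queue = deque([a])
        while queue:
            person = queue.popleft()
            if person == b:
                return seen[person]
            for friend in friends.get(person, ()):
                if friend not in seen:
                    seen[friend] = seen[person] + 1
                    queue.append(friend)
        return None   # the two users are not connected at all

    friends = {"Ann": ["Ben"], "Ben": ["Ann", "Cat"], "Cat": ["Ben", "Dan"], "Dan": ["Cat"]}
    print(degrees_of_separation(friends, "Ann", "Dan"))   # 3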
{"url":"http://world.mathigon.org/Graph_Theory.html","timestamp":"2014-04-20T08:14:42Z","content_type":null,"content_length":"28240","record_id":"<urn:uuid:e3870aec-1d22-4927-9c83-d6be9bcd664b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Temple City Geometry Tutor
Find a Temple City Geometry Tutor

Hi everyone! Are you looking for a music teacher or a writing tutor? If so, I'd be the perfect fit for you!
42 Subjects: including geometry, English, reading, writing

...In addition to studying business at USC, which included entry-level accounting courses, I concentrated my MBA studies in finance and accounting at Pepperdine University. At Pepperdine, I took six accounting courses and received a solid "A" in each. I have exceeded the necessary number of units to sit for the Certified Public Accountant (CPA) exam.
38 Subjects: including geometry, reading, chemistry, statistics

...I am mainly focused on assisting middle school, high school, and college students in Biology and Math, from Algebra to Calculus. I am a graduate of the University of Arizona with Biomedical Engineering and Molecular Biology degrees. I am very good at math and science and excelled in those courses throughout my education.
14 Subjects: including geometry, calculus, biology, algebra 1

...I will use my experience in this area to teach strategies that will help you succeed in this subject and prepare you for future areas of math, from polynomials to functions to graphs. I will help make the subject of Algebra make sense and even make it a little fun!
46 Subjects: including geometry, calculus, algebra 1, physics

...In the Mathematics Test, three subscores are based on six content areas: pre-algebra, elementary algebra, intermediate algebra, coordinate geometry, plane geometry, and trigonometry. I have had years of training in applied math while in college. I also have the experience of applying those skills while working as an engineer.
10 Subjects: including geometry, algebra 1, algebra 2, trigonometry
{"url":"http://www.purplemath.com/Temple_City_geometry_tutors.php","timestamp":"2014-04-17T13:24:59Z","content_type":null,"content_length":"23959","record_id":"<urn:uuid:8c08098c-6bf7-4d20-9637-15f790a60b47>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Date Less Than Greater Than I have a list of dates B2:B248. All the cells have been formatted to date type dd/mm/yyyy. I'm just trying to count the number that are earlier than a certain date. I thought a COUNTIF function would work! I've tried DATE, DATEVALUE etc. I can work with the dates ie B2 (which would have 23/08/2007)-1 comes out as 22/08/2007. However when I try if(B2<B3,"lower","higher") it gives me a false reading - even when B2 is earlier than B3. How can I check if my list of dates is earlier than a specified date? View Complete Thread with Replies Sponsored Links: Related Forum Messages: Sum Where Date Is Greater Than Adjacent Date I have a spreadsheet with two dates for every entry. The first date is the "Projected Completion Date (F66:F139) and the second range is "Acutal Completion Date" (H66:H139). I want to sum all entries where the Projected Completion Date is Greater than the Actual Completion Date and then have that number divided by the amount of entries that have been completed (ie. Enties that have an Actual Completion Date entered. If the entries are outstanding then the Actual Completion Date field is left blank.) View Replies! View Related IF To Calculate If One Date Is Greater Than Another I am having great difficulties getting the following formula to calulate wether the date in cell f2 is greather than of less than the given date. =IF(F2<="18.11.06", "No Cover", "Under Warranty") f2 = 14.09.02 This does produce the wanted result of "No cover" however if the date is 14.09.07 and therefore greater the 18.11.06 i will not get the expected result of "Under Warranty" View Replies! View Related Date Calculation Greater Than Or Equal To I have start date(Column A) and an end date(Column B) in two columns and I have found out the difference between them in column C in hh:mm:ss format. I want to find out how many cases are greater than 2 Hrs but less than or equal to 4 Hrs. View Replies! View Related Index Match With Date Greater Than I have the following tables and would like to return the red cells via formula MinContract psuedo contractid WHERE Table1.'MinDateShip' between Table2.'MinDateContract' AND Table2.'MaxDateContract' AND Table1.ID = Table2.ID AND Table1.ReportTypeDescription = Table2.ReportTypeDescription View Replies! View Related Check If Each Cell Has A Date Greater Than TODAY I've got the following function that check if each cell has a date greater than TODAY(). If result is true, it'll display "NO GO". Otherwise, it'll display "GO". I would want to improve on it such that if any of the 'B5:F5' cell is empty, it'll display "Incomplete" instead of "No Go". View Replies! View Related Find July Greater Than Targetted Date? I'm looking for a function which will determine if a target date is greater than or equal to THE JULY FOLLOWING A GIVEN DATE. In the attached spreadsheet the Start Date (B5) would be the GIVEN DATE Target Date is listed in ROW 1. In sentence form the function in cell G2 would be something like, "If G1 >= July following B5, then True. View Replies! View Related Count Dates Greater Than Date In Cell Its a training list, and I want to count the number of staff with valid training dates, I want to keep invalid dates as a reminder and I also have text N/A to disregard. Have used an IF function array but there are approx 33 column entries I want to add and using array function limits the amount of formula entries up to column 24. 
Would be much easier if I used data validation to kick out the invalid date entries but we want to keep them if possible. View Replies! View Related Counting Number Of Dates Equal To Or Greater Than A Said Date In my Excel spreadsheet I enter todays date in a single cell (A2), then I list various dates that jobs come into shop in other cells (A8:A108). I have cells ( F8:F108) where I have been manually entering an asterik (*) for those jobs equal to or greater than five days old in cell (A2). Is there a formula that can do the math for me? I've tried Excel help but to no avail. View Replies! View Related Copy Rows Where Date In Column Is Greater Than Today I have been working on this issue for some time, searches let me down paths to tell me of the color of the cell, but can't put all the pieces together. What I am trying to do, is upon Clicking Command Button 1 it will go row by row of column D (there are 2 headers so D3 would be the first fillable data) looking for dates that is past todays date, if past, it will color the cell red then copy it to the next available row in sheet2 then continue, date past due, color red, copy entire row to sheet 2 looping until the end is reached View Replies! View Related Conditional Sum: Return The Latest Date Before The Stock Turns To Be Greater Than The Value In Cell D3 In the attached workbook - the stock Inventory is increased, every second day, by the value shown in cells of column A. Column B displays the date of the update. I'm looking for a Formula (might be an Array Formula) that will return the latest date before the stock turns to be greater than the value in cell D3. I managed to solve it, in cell F3, but with the help column C. View Replies! View Related Record Greater Than 50 In A Cell So It Reads As Greater Than 50 I have to make a table that shows that a if someone purchases less than 5 items they receive no discount 5-10 items they receive 2% discount 11-20 items they receive 5% discount 21-50 items they receive 8% discount over 50 items they receive 10% and it has to be done in a way that the discount rate can be calculated using Vlookup I am struggling to find the best way to write this table. i tried numbering 1 to 50 and writing the corresponding discount rate in the second column but this looks untidy and can't calculate greater than 50 as i am not sure how to write it in the cell so it reads as >50 and not just 50. View Replies! View Related Calendar Control 11: Selected On The Calendar Is Greater Than 12 The Date Is Entered Correctly Onto The Sheet I have a program where I can update the calibration due date of an item. I have attached a cut down version of my program showing the relevant areas. There is usually password protection on the worksheet so it can only be edited via the form (the vba coding removes the password protection before editing, then re-enables the password protection after editing). The "Update Calibration" button is usually on a "Menu" sheet. Once the form is opened a serial number is typed in the textbox. The calendar button is then clicked, which brings up another form with the calendar on. The due date is selected on the calendar. When "OK" is clicked, the date label caption is then changed to the selected calendar date. When "Submit" is clicked, the spreadsheet will search for the Serial Number, once found, the label caption (being the date selected) will be entered into the cell to the right of the serial. 
If the day selected on the calendar is greater than 12 the date is entered correctly onto the sheet. example: calendar date selected = 15/01/2010. shown on sheet as 15/01/2010. However, if the day selected on the calendar is 12 or less, the date is for some reason entered incorrectly onto the sheet. example: calendar date selected = 08/12/2010. shown on sheet as 12/08/2010???? What is going on here? how come the day and month are swapped around if the day is less than 12???? View Replies! View Related Greater Than / Less Than I'm struggling to complete this formula. No matter what the entry in M30 is (which is a concatenated formula from another sheet), I only get the highest response, which is 18. I'm assigning a risk score based on a dollar amount. The formula is: .... View Replies! View Related IF, THEN, Greater Than, Less Than Say I need to figure out bonuses based on income. If the income is less than 100,000 then the bonus is 5%. If the income is between 100,000 and 249,000 the bonus is $5000 + 6% the amount above 100,000. If the income is between 250,000 and 499,999 the bonus is $14000 + 7% the amount above 250,000. If the income is over 499,999 the bonus is $31,500 + 8% the amount above 500,000 What is the formula I need to enter to make this work. View Replies! View Related If Value Greater Than I have a Macro of 55 Columns and 2000 Rows I need to change the value in several Cells of the row in which the value of AZ is 200.01 or more. I need the following if ANY Cell in AZ2:AZ2000 is equal or greater than 200.01 then the following Cell in that Row will equal the following: AG = 20 AW = 11 AX = " " (BLANK) BC = N I have attached an example of the spread sheet with Macro embeded and how it should look after the above is run. I do not know if or how to I need to tag the macro within the attachement. View Replies! View Related Greater Than Zero Need to sum that won't work. i'm using: A B C 00:00 07:00 =if(A1>0,B1,"N/A") 07:00 04:00 =if(A2>0,B2,"N/A") 00:00 07:50 =if(A3>0,B3,"N/A") 00:00 06:50 =if(A4>0,B4,"N/A") This doesnt seem to work though using time formats. Column C just brings through Column B no matter what is in Column A. I've attatched an example below. View Replies! View Related SUMIF: Value Is Greater Than Zero I have this formula to tell me how many times in a given column the value is greater than zero: =COUNTIF('Data Entry'!AY3:AY117, ">0") But I also need to know how many times the value in AY3:AY117 is greater than zero PLUS how many times the value in AZ3:AZ117 is also greater than zero I've scoured this wonderful website and have got as far as: =SUM(IF('Data Entry'!AY3:AY117,"> 0"+SUM(IF(AZ3:AZ117,"> 0")))) but I'm clearly still way off. View Replies! View Related Add Value If Greater Than I am trying to create a formula that will automatically calculate greater than or less than and then add x or y depending. I am shipping some items and we can fit 100 or less in a small box that weighs 3oz and 100+ in a bigger box that weighs 6 oz. I want the formula to look at the quantity and determine if the quantity is 100 or less it should add 3 and if greater than 100 then add six. I tried tons of google searches but can't seem to figure it out. This is the first forum I could find that I figured might be able to help. I have been doing this all by hand in multiple columns... View Replies! View Related Greater Than Or Less Than In Same Formula I have the following values that I need to perform a calculation on, but I am not sure how. 
A1 = -.98 A2 = .98 A3 = 1.0 I would like the results in colmun B to be: B1 = 1.02 B2 = .98 B3 = 1.0 I would like to achieve this with a formula that I could fill down. View Replies! View Related Count If Greater Than But Less Than I want to do is count in a column numbers greater than 13 but less than 20. I am also trying to write another formula that counts numbers equal to or higher than 1 but less than 12. In other words, I do not want this count to include any cells that contain 0. View Replies! View Related Minimum Greater Than Zero I have a table populated with equations. I need to write a function to find the smallest value in that table. However, I want to ignore the zero values. From a dataset containing 8, 5, 0, 7 I want to find 5, not zero. View Replies! View Related Formula For Greater , Less Than I have this table with min and max amounts that requires a fixed amount when when the condition is met. How do I write a formula for this. If result is >$0 but < $100 = $15 and so on. I canlt get it View Replies! View Related Multiple Greater Than Less Than I want to calculate a mark-up on one cell. If it is under a set amount then I want the mark-up to change. If it is then over a set amount but below another, then I want the mark up to change again, etc. I have got this far (eg. below) but the calculation does not work properly when the value in "A3" is over "1". =IF(A3<=0.49,A3/0.2,IF(A3>=0.5,A3/0.25,IF(A3>=1,A3/0.35,IF(A3>=2,A3/0.5,0)))) View Replies! View Related Find First Value Greater Than Zero I have a list of numbers like this. I need to find the first greater than zero number and then add up that number and the following two numbers. In the case above the answer would be 16. View Replies! View Related Greater Than Or Less Than (Plus Or Minus) Here is my formula. It is perfect, except it doesn't have one final step. What I need it to do is be able to do that ONLY if it is greater than or less than by a specified amount. So there needs to be a modification of ... Sheet!J5<> (but by 100 or any other number that I set) $J$4,... View Replies! View Related Small Greater Than Zero is there any way of using the =SMALL function to rank only numbers above zero so that the zeros don't keep showing up as the smallest figures? View Replies! View Related Greater Than, Smaller Than Formula When grading children's test scores I want to apply letters and numerals to particular ranges eg between 21 and 25=3c 26and 30=3b. Please help with a formula. View Replies! View Related The Smallest Value That Is Greater Than Or Equal To I am looking for a function like MATCH if the match type were set to -1. However my data is sorted in ascending order. I am mining data from a Pivot Table, and it has dates across the top. Of course the pivot table will have the data sorted in ascending order from left to right. I want to find the first date that is greater than today. With weekends and holidays I can't just use TODAY()+1. Is there a function that can do what I am asking? Also I do not want to change the pivot table itself. View Replies! View Related Select Case - Greater Than / Less Than DayCompare = Day(Date) Select Case DayCompare Case Is < 7 And Sheets("Roster").Range("D8") = "N" MsgBox "Hello." Case Is > 15 And Sheets("Roster").Range("D8") = "N" MsgBox "Goodbye." Case Else End Select I thought this would work OK, however even though today is the 10th, and therefore ignored by this statement, it is picked up in the >=15 statement View Replies! 
View Related Greater Than Less Than Formula For An Input I need to display two separate values from a given input, but not exceed a specified number for that cell. I have this so far except for the maximum number that can be displayed; Example for what I want below .... View Replies! View Related Greater Than 7 Nested If Formulas I undrstand that you can only ave a certain amount of nested IF statements within one formular, is there a way round this? What i'm looking to do is this. VLOOKUP(VLOOKUP(location,IF(postcode="BA",LocationBA,IF(postcode=" BS",LocationBS,)),2,FALSE) I have 13 options that can be chosen, but only have room for 6 nests of which BA and BS are currently above. what would be the best way to have all 13 within the lookup table array? View Replies! View Related Greater Than And Less Than Or Equal To Formula I need a formula that looks at the total in H40 and if the number is between 32 and 40 I need it to return the number then if the number exceds forty I need to multiply the overage by 1.5 and add it to the 8 for a total of 11. I think it would be something like: View Replies! View Related Textbox Input Greater Than 9 I have a userform with a textbox1 on inputting the numbers 1-9 the code fires to open that qty. of worksheets it works fine. But, why if the user wanted to input 10 it reads the 1st number 1 and only opens 1 sheet. I need it to open another userform if 10 or greater is input. Option Explicit Private Sub Userform_Initialize() 'CALCULATES # OF SHEETS View Replies! View Related If Greater Number Msgbox I like to do is if cell H3 is Greater than cell A35 to promt the message box below. But if less than do nothing. And if possible if less than to look at sheet 2 cell A35 and do the same. If WorksheetFunction.Sum(Range("H3") >= ("A35")) Then Exit Sub MsgBox "QTY. MUST BE GREATER THAN END FEQUENCY", vbOKOnly View Replies! View Related Minimum Formula Greater Than To Zero I have a range of weekly sub totals that get entered each week for the year. As each week are entered I am trying to find the lowest week's production (using the =min formula) that is above 0 (weeks not yet entered appear as 0) the problem is that it keeps defaulting to the next column once a number greater than o is entered. View Replies! View Related Extract Data If Value Is Greater Than Zero I built an estimating spreadsheet for the electrical construction industry and am trying put together a "Materials List" on another worksheet. I want the materials list to display materials which have a value greater than zero. Example, the 1st worksheet is my estimating worksheet which contains a list of 30 materials. The 2nd worksheet is a "Bill of Materials" that I would like to display in a proposal format to the customer. I only want to show them materials that have a quanitity of more than zero from the estimating worksheet. View Replies! View Related Set All Values Greater Than X To Y I have a huge sheet with data. I want to fix all values exceeding e.g. 2000 to 2000. for instance: 400 --> 400 1600 --> 1600 2300 --> 2000 700 --> 700 3100 --> 2000 View Replies! View Related Greater Than Equal To With Autofilter I am creating my first Userform and having some problems. I take the data supplied by the userform and try to match it as closely as possible to a row of information. Currently I am using four cells to autofilter my spreadsheet data. Two of the cells I am looking for a exact match. The other two cells I am looking for the number that has been input or anything greater than it. 
Here is the code I have come up with...

Message Variance Greater Than 5
I have a workfile with several worksheets. I have the word "variance" appearing in some of the rows in column B, and a value in column C in the same row as the word "variance", e.g. B19 = Variance, C19 = 10. I need VBA code so that where the value in line with the text "Variance" is greater than 5, the name of the worksheet is listed in the worksheet named "Variances". See the example below.
[example table omitted: the pasted spreadsheet snapshot did not survive extraction]

Count Values Greater Than Zero
I want to look at a range and, if there is a value greater than zero, count it. I keep going round in circles trying to do this and now I give up.

Copy If Cell Greater Then Zero
I have one sheet where I am entering the necessary parts of the particular order. Once I have it completed I need to run a macro to copy only the elements that I have marked. For rows 3-17, if the cells in E3-E15 are greater than 0, then I need the rows from column "C" to column "G" and column "I" copied to another sheet. For rows 20-97, if the cell in column "E" is > 0, then copy the relevant row from column "B" to column "E". I am attaching the file.

If Greater Than Or Small Than, Or Equal To
I have a cell, M87. The score in M87 can be less than 13 or greater than 25. I need a formula within M94 which refers to M87 and outputs depending on the following criteria: if M87 is less than 13 then output D; if M87 is 14, 15, 16, or 17 then output C; if M87 is 18, 19, 20, 21, 22, 23 or 24 then output B; if M87 is greater than 24 then output A.

IF Statement: Find The Greater Than Value
I have a value in E12, and I need a formula that looks at the value: if it is equal to or greater than 5, then the output should be E12 x 500 + 1000; but if the value in E12 is greater than 5, then the output needs to add the original 5 x $500 and multiply everything above 5 by $250, plus 1000. I got this far, but if the value is greater than 5, I don't get the original 5 * 500 that I also need.
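Several of these threads ask for the same two patterns: counting values that fall strictly or loosely inside an interval, and taking the minimum of a range while ignoring zeros. For reference, here is the same logic as a small Python sketch; the sample data is made up, and the Excel formulas that do this vary by version, so this shows the logic rather than a drop-in answer:

    values = [8, 5, 0, 7, 14, 19, 20]                       # made-up sample data
    count_13_to_19 = sum(1 for v in values if 13 < v < 20)  # greater than 13, less than 20
    count_1_to_11 = sum(1 for v in values if 1 <= v < 12)   # at least 1, less than 12
    min_above_zero = min((v for v in values if v > 0), default=None)  # ignore zeros
    print(count_13_to_19, count_1_to_11, min_above_zero)    # 2 3 5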
{"url":"http://excel.bigresource.com/Date-less-than-greater-than-9YoUDlVj.html","timestamp":"2014-04-18T10:48:58Z","content_type":null,"content_length":"65147","record_id":"<urn:uuid:47279188-7da3-462f-99cc-240c5a468e55>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
normal probability

November 9th 2008, 11:07 AM #1
4. (18 points / 3 points each) Mileages for taxi rides from Penn Station to their destination are approximately normally distributed with a mean of µ = 4.3 miles and a standard deviation of σ = 1.2 miles.
a. Classify this problem by type using the letter choices:
A. binomial probability problem
B. uniform probability problem
C. normal probability problem about a single X
D. normal probability problem about a mean
E. none of the above
b. What is the probability that a taxi ride is greater than 6 miles?
c. What is the probability that a taxi ride is between 1 and 3 miles?
d. What is the expected length of any given taxi ride?
e. Find the taxi ride mileage that would represent the 80th percentile, or P80.
f. If the longest 15% of taxi rides receive a discount, determine the mileage for which customers would receive the discount.

November 9th 2008, 11:09 AM #2
Quote (part a of the post above, with the reply inline):
a. Classify this problem by type using the letter choices ... Mr F says: C!!
These questions are done just like the other normal ones I showed you in your other question. Where are you stuck? What have you tried?
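If it helps to check the arithmetic for parts b through f, here is a minimal Python sketch using scipy.stats (my addition, not from the thread), standardizing against the given mean 4.3 and standard deviation 1.2:

    from scipy.stats import norm
    mu, sigma = 4.3, 1.2
    print(norm.sf(6, mu, sigma))                            # b. P(X > 6), about 0.078
    print(norm.cdf(3, mu, sigma) - norm.cdf(1, mu, sigma))  # c. P(1 < X < 3), about 0.136
    print(mu)                                               # d. expected length: 4.3 miles
    print(norm.ppf(0.80, mu, sigma))                        # e. 80th percentile, about 5.31 miles
    print(norm.ppf(0.85, mu, sigma))                        # f. cutoff for the longest 15%, about 5.54 miles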
{"url":"http://mathhelpforum.com/statistics/58553-normal-probability.html","timestamp":"2014-04-19T20:18:57Z","content_type":null,"content_length":"43464","record_id":"<urn:uuid:dc349c66-ba26-4685-9005-0f2fb955972c>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
List of Visiting Speakers: Professor James Keesling Professor James Keesling Department of Mathematics University of Florida P.O. Box 118105 Gainesville, FL 32611-8105 Phone: 352-392-0281 ext. 289 Fax: 352-392-8357 E-mail: jek@math.ufl.edu Professor Keesling (Ph.D., University of Miami) has been at the University of Florida since 1967 and Professor since 1975. He has published numerous research articles and given numerous invited addresses on his research in national and international forums. He is one of the managing editors of Topology and Its Applications. In recent years, he has also concentrated on talks aimed at undergraduates to increase interest in mathematics. Fractals: Jagged Geometry This talk is about the geometry of fractal sets. Two types of fractal sets are the focus: self-similar sets and Brownian motion. Self-similar sets have the same jagged appearance under any magnification. Examples began to appear in the mathematical literature in the early part of the twentieth century. However, the advent of the computer has increased fascination with these objects. This is because they can now be created so easily using algorithms programmed on the computer. These sets have an allure of their own, but also show promising applications. Fractals based on Brownian motion is the other focus. Brownian motion is a natural phenomenon which has been shown to be intrinsically fractal based on the physical principles governing it. The theory of Brownian motion and fractional Brownian motion has led to algorithms that generate realistic landscapes which are used in high tech cinema. It is also being used to evaluate algorithms processing radar images which pick out the important objects in the display. This talk can be adapted to any level of audience from high school to graduate and research level. Even students with limited background will learn to calculate the Hausdorff dimension of some examples. The Chaotic f(x)=Ax(1-x) Whenever one is studying any system, a good task would be to describe it mathematically. If this can be done, the next task is to analyze the behavior of the model. Not so long ago, most scientists had the naive notion that such models would be dominated by stable equilibria or periodic orbits. The quadratic family of functions shows that this is altogether false. For many values of the parameter A, iterations of f(x) will converge to a stable equilibrium or a stable periodic orbit as anticipated. However, for many values this fails. The bifurcation diagram illustrates just how strange the behavior can be for such a simple and familiar family of functions. This diagram is a helpful geometric focus for this talk. The quadratic family has been used as a model of growth of a population limited by food or other resources. That certain values of the parameter A lead to strange and unanticipated population fluctuations is a discovery that dates back to the early 1970's. It has changed the way biologists look at population changes. It is also changing the way scientists view nature. This talk can be adapted to undergraduates who have had calculus. The talk can also be given at a more advanced level including graduate and research audiences. It is important to be able to predict the behavior of biological systems. One can hardly claim to understand the behavior of some disease without being able to predict the spread of the disease or the effect of various countermeasures. Obviously, one needs mathematical models of these systems to make predictions. 
This talk gives several examples of biomathematical modelling. One model describes the spread of vectored diseases such as malaria, yellow fever, and dengue. There are models of crops and other models describing the population dynamics of agricultural pests to help in their management. On a more sophisticated level, research in embryology uses models based on reaction-diffusion chemical processes to understand the growth of an organism and the formation of its many tissues. Altogether, models such as these are leading to a new quantitative science of biology. This talk can be adapted to any audience. However, it is best for students who are upper division in mathematics or some other science.
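To see the behavior described in the talk on the quadratic family, a few lines of Python suffice (this sketch is my addition, not Prof. Keesling's):

    for A in (2.8, 3.2, 3.5, 3.9):      # stable point, 2-cycle, 4-cycle, chaos
        x = 0.5
        for _ in range(1000):           # iterate long enough to discard the transient
            x = A * x * (1 - x)
        orbit = []
        for _ in range(8):              # record the long-run behavior
            x = A * x * (1 - x)
            orbit.append(round(x, 4))
        print(A, orbit)

At A = 2.8 every recorded value is the stable fixed point (A - 1)/A, about 0.6429; at 3.2 the orbit alternates between two values; at 3.5 between four; at 3.9 it never settles. That is the bifurcation diagram in miniature.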
{"url":"http://www.siam.org/visiting/speakers/keesling.php","timestamp":"2014-04-18T23:33:36Z","content_type":null,"content_length":"10828","record_id":"<urn:uuid:f21b5498-cf02-47d1-832c-249728fb020a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Sun Oct 28 2007 14:13 How Many Levels?: I was playing the 2D fan version of Portal and basically figured it had 40 levels. I was right. Then I wondered why I'd thought that. Probably something to do with the pacing as new game mechanics were introduced. But when I was a kid, puzzle games had 30 levels and then you had to register for two more sets of 30 because it was shareware. How many levels are in a typical video game that has a discrete number of levels? I decided to do a semi-scientific test where I measured this by searching for game "X levels" and comparing the numbers of results. I did a few Google searches until I remembered that Google doesn't want my business for this. I switched to Yahoo!, which has a handy web search service. I graphed the number of search results against the number of levels for every X in 1 to 100: According to that, most games have 10, 50, or 100 levels. 4 levels is also extremely common (though I think there's a lot of pollution there from things that have 4 levels, like malls). A number divisible by 10 or 5 is the most common, but you can plug in any small number of levels and find some oddball game (random example). Then I graphed X=1 to X=1000, with a skip of 10 between 100 and 1000. I had to use a log scale because 90% of the total matches are for X=1 to X=100. The left side of the graph is a "long tail" type graph, but there are very popular data points near round numbers. I also graphed X=1 to X=2000, again with a skip of 10. As we can see the adventure continues. Between 1900 and 2000 there's significant pollution from years ("returning to 1990 levels of greenhouse gas emissions"). I collected data up to X=5000, but 2000 levels is really the limit of how many levels one entity can create for a specific "game", unless that entity be a computer generating them randomly. I don't think I've ever even seen a Rocks 'N' Diamonds level set that had more than 2000 levels. So I won't show you another graph. But if you want the raw data I put up a Gnumeric spreadsheet with my original graphs. Around 3500 there's a lot of financial crap from India for some reason ("support at 3575 levels from the past three trading sessions"). Random tidbit: judging from hits, the least popular number of levels 1-100 is 79. However, 46 stands out from its neighbors as being especially unpopular. Also mysterious: why is 80 more popular than
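The query loop behind this is short. Here is a Python sketch of it, where search_count() is a hypothetical stand-in for whatever web-search hit-count API you have access to (the original used Yahoo!'s web search service):

    counts = {x: search_count('game "%d levels"' % x) for x in range(1, 101)}
    print(sorted(counts, key=counts.get, reverse=True)[:5])   # most popular level counts

    import matplotlib.pyplot as plt
    plt.semilogy(list(counts), list(counts.values()))         # log scale, as in the graphs
    plt.xlabel('number of levels')
    plt.ylabel('search results (log scale)')
    plt.show()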
{"url":"http://www.crummy.com/2007/10/28/0","timestamp":"2014-04-21T12:23:46Z","content_type":null,"content_length":"7615","record_id":"<urn:uuid:59932490-3570-4f15-8616-b1ba2b790bb9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
Help finding propeller thrust. From what I have found, propeller thrust is: Thrust = Mass airflow X (slipstream velocity-aircraft velocity). That's similar to the equation for rocket thrust. The term "exit velocity" is used to describe the velocity of the affected air at the moment its pressure returns to ambient. Nasa article: This thrust equation isn't very useful, since it's essentially just two ways of expressing thrust after you've already calculated what the thrust is. What you want is a "thrust calculator" that uses propeller characteristics (blade shape, blade size, number of blades), rpm, and aircraft speed. You can try a web search for thrust calculator, but most of these will be for "static" thrust, where the aircraft is not moving. You can also try searcing for "blade element theory", which will provide more mathematical based articles. Here is one web article that goes through the math, but I don't know if it will provide the answer you're looking for, or how accurate it is (not sure if it takes into account the induced flow effect on a prop with more than 2 blades): You may be able to find other and more useful articles.
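To make the "mass airflow" form concrete, here is a Python sketch of the actuator-disk (momentum theory) calculation; the density, disk area, and speeds below are made-up numbers, not data from the thread:

    rho = 1.225                        # sea-level air density, kg/m^3 (assumed)
    A = 3.14                           # propeller disk area, m^2 (assumed, roughly a 2 m prop)
    v0, ve = 50.0, 60.0                # aircraft speed and slipstream speed, m/s (assumed)
    mdot = rho * A * (v0 + ve) / 2.0   # mass flow through the disk, kg/s
    thrust = mdot * (ve - v0)          # thrust = mass flow times velocity change
    print(mdot, thrust)                # about 211.6 kg/s and 2116 N

The (v0 + ve)/2 factor is the actuator-disk result that the air speed at the disk is the average of the free-stream and slipstream speeds; blade element theory, mentioned above, is what you need when the blade geometry itself matters.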
{"url":"http://www.physicsforums.com/showthread.php?p=4148078","timestamp":"2014-04-21T02:04:48Z","content_type":null,"content_length":"32849","record_id":"<urn:uuid:d19c224a-8d0d-47d5-8b89-c727da2708e7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
Newton's Laws : Quiz 3 Practice Questions 1. A motorcycle of mass 100 kg travels around a flat, circular track of radius 10 m with a constant speed of 20 m/s. What force is required to keep the motorcycle moving in a circular path at this speed ? 200 N 400 N 2000 N 4000 N 2. A car going around a curve is acted upon by a centripetal force, F. If the speed of the car were twice as great, the centripetal force necessary to keep it moving in the same path would be F 2F F/2 4F 3. A motorcycle travels around a flat circular track. If the speed of the motorcycle is increased, the force required to keep it in the same circular path decreases increases remains the same 4. As the distance between two masses increases, the gravitational force of attraction between them decreases increases remains the same 5. If the distance between two masses is tripled, the gravitational force between them becomes 1/9 as great 1/3 as great 3 times as great 9 times as great 6. A rocket weighs 10,000 N at the earth's surface. If the rocket rises to a height equal to the earth's radius above the earth's surface, its weight will be 2500 N 5000 N 10,000 N 40,000 N 7. If the distance between two objects of constant mass is doubled, the gravitational force of attraction between them is halved doubled quartered quadrupled 8. An object weighing 20.0 N at the earth's surface is moved to an altitude where its weight is 10.0 N. The acceleration due to gravity at this altitude is 2.45 m/s^2 4.90 m/s^2 9.80 m/s^2 19.6 m/s^2 9. An object weighing 16 N at the earth's surface is moved above the earth a distance equal to the earth's radius. The gravitational field strength at this new position is 2.5 N/kg 4.9 N/kg 9.8 N/kg 4.0 N/kg 10. As the mass of a body increases, its gravitational force of attraction on the Earth decreases increases remains the same 11. The magnitude of the gravitational force between two objects is 20. N. If the mass of each object were doubled, the magnitude of the gravitational force between the objects would be 5.0 N 10. N 20.N 80. N 12. The gravitational force of attraction between two objects is 9.0 N. If the mass of one of the objects is tripled, the new gravitational force of attraction will be 1.0 N 3.0 N 27 N 81 N 13. The gravitational force of attraction between two objects is F. If the mass of each object is doubled, and the distance between them is doubled, what is the new gravitational force of attraction ? 1/4 F 1/2 F F 4 F
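As a worked check of the first item (added here for illustration, not part of the original quiz), the centripetal force formula gives

$F = \frac{mv^2}{r} = \frac{100 \times 20^2}{10} = 4000 \text{ N},$

and the same formula answers item 2: replacing $v$ by $2v$ multiplies $F$ by 4. The gravity items all follow from the inverse-square law $F \propto \frac{m_1 m_2}{r^2}$: doubling the distance quarters the force, tripling it divides the force by 9, and doubling both masses multiplies the force by 4.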
{"url":"http://www.oocities.org/capecanaveral/8236/U3Q3.html","timestamp":"2014-04-18T13:08:24Z","content_type":null,"content_length":"14534","record_id":"<urn:uuid:bd788be2-b700-4a4c-a88e-ce7e9949e3f0>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
help with DE problem: Newton's law of cooling

January 22nd 2008, 06:27 AM #1
At 1:00 PM, a thermometer reading 70 degrees F is taken outside where the temperature is -10 degrees F (ten below zero). At 1:02 PM, the reading is 26 degrees. At 1:05 PM the thermometer is taken back indoors, where the air is at 70 degrees F. What is the temperature reading at 1:09 PM?
u is the temperature reading
u_o is the medium
$-k(u - u_o) = \frac{du}{dt}$
and by integrating
$u - u_o = Ce^{-kt}$
My solution so far:
"At 1:00 PM, a thermometer reading 70 degrees F is taken outside where the temperature is -10 degrees F (ten below zero)": at t = 0, u = 70 and u_o = -10
$70 - (-10) = Ce^{-k(0)}$
C = 80
Now we already know the constant.
"At 1:02 PM, the reading is 26 degrees": at t = 2, u = 26
$26 - (-10) = 80e^{-k(2)}$
k = 0.39925
My problem is that I don't understand what happens when it is put indoors.
"At 1:05 PM the thermometer is taken back indoors, where the air is at 70 degrees F": the constant of proportionality is the same
$(u - 70) = Ce^{-0.39925t}$
....and I don't understand the problem. The answer to this problem is 56 degrees Fahrenheit.

January 22nd 2008, 10:16 AM #2
Quote (the post above).
I didn't check, but I assume all your calculations are correct so far. First you need to find what reading the thermometer has at 1:05 PM; you have the equation for that. Use that as u(0), that is, the temperature at time 0. You want u(4). The thermometer is going to heat up, so we can use the equation you have, except make k positive (that should work).

January 22nd 2008, 03:20 PM #3
For the time exactly before 5 minutes I used the equation:
$u - (-10) = 80e^{-0.39925(5)}$
u = 0.8675 degrees
"At 1:05 PM the thermometer is taken back indoors, where the air is at 70 degrees F": so here it is now indoors and the air temperature is 70.
For the time exactly after 5 minutes: when t = 0, u = 0.8675 degrees;
$(0.8675 - 70) = C(e^{0})$
C = -69.1325
"What is the temperature reading at 1:09 PM?" t = 4 min, u = ?
$(u - 70) = -69.1325e^{-0.39925(4)}$
u = 56.0 degrees
edit: thanks Jhevon!!! I forgot that it should be u(4), not u(9).
Last edited by ^_^Engineer_Adam^_^; January 22nd 2008 at 09:02 PM.
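As a sanity check on the numbers above (my addition, not part of the thread), the whole computation fits in a few lines of Python:

    import math
    k = math.log(80 / 36) / 2               # from 26 - (-10) = 80 e^{-2k}
    u5 = -10 + 80 * math.exp(-5 * k)        # reading at 1:05 PM, about 0.87 degrees
    u9 = 70 + (u5 - 70) * math.exp(-4 * k)  # four more minutes warming indoors
    print(k, u5, u9)                        # 0.39925..., 0.87..., 56.0...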
{"url":"http://mathhelpforum.com/calculus/26573-help-de-problem-newton-s-law-cooling.html","timestamp":"2014-04-18T04:20:20Z","content_type":null,"content_length":"41726","record_id":"<urn:uuid:ceed7ede-bde1-4e0e-bebf-710102a968a9>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Skeletal mesh

In response to this discussion: I am posting my definition for generating an open foam mesh. (I didn't post back at the time because there were some troubles with Weaverbird back then and I was having to use some ugly workarounds, but now that is all fixed.)

To sum up the approach:
- take a random cloud of points
- generate the 3D Voronoi
- scale the edges of the cells towards their centres, and also towards the centres of the faces
- connect these 2 sets of scaled edges with mesh quads and join
- cull some of the outer faces
- subdivide and smooth with Weaverbird

(In the video there were some other variations on the smoothing/relaxation, both of the initial point positions and the final mesh, using Hoopsnake and/or Kangaroo.)
{"url":"http://www.grasshopper3d.com/forum/topics/skeletal-mesh?commentId=2985220%3AComment%3A558044","timestamp":"2014-04-21T14:49:35Z","content_type":null,"content_length":"92129","record_id":"<urn:uuid:b51482e7-15e3-4877-a4fa-e7d71c0ba438>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
multiple choice question

December 9th 2009, 02:13 PM #1
If $\frac{16^x}{\frac{1}{2} x^2}-32=0$, then $x=$
(a) -1 or -4
(b) 1 or -5
(c) 2 or -4
(d) -2 or 4

December 9th 2009, 03:07 PM #2 (Prove It)
I would advise graphing this equation and seeing where it crosses the $x$ axis. Equivalently, you could graph $y = 16^x$ and $y = 16x^2$ and see where they intersect.

December 9th 2009, 03:27 PM #3
Actually that's how I interpreted the question, since it wasn't clearly written. This is exactly how I saw it written down: [image of the original problem not preserved]. So is there any way that the above question can be "interpreted" to give one of the 4 responses mentioned previously?

December 9th 2009, 03:53 PM #4
As Prove It suggested, I did graph it, and none of the answers showed up; x = 1 is the only integer value, and I also got 1/2. It's a strange multiple choice.

December 9th 2009, 03:54 PM #5
Quote (of post #3 above).
The only answers I can see are 1 and 1/2. Putting it into Maple confirms this, as well as another, way funky answer. None of these is of course a choice. Something tells me the problem might be written down wrong?
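To back up the graphing suggestion (this sketch is mine, not from the thread), a brute-force sign-change scan of the equation exactly as typed brackets all of its real roots:

    f = lambda x: 16**x / (0.5 * x**2) - 32                    # the equation as written
    xs = [i / 1000 for i in range(-1000, 2001) if i != 0]      # scan [-1, 2], skipping x = 0
    brackets = [x for x, y in zip(xs, xs[1:]) if f(x) * f(y) <= 0]
    print(brackets)                                            # near -0.19, 0.5 and 1.0

So besides 1/2 and 1 there is also a small negative root near -0.19, which may well be the extra "funky answer" Maple reported; none of the three matches the printed choices, consistent with the suspicion that the problem was copied down wrong.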
{"url":"http://mathhelpforum.com/algebra/119612-multiple-choice-question-print.html","timestamp":"2014-04-17T01:12:45Z","content_type":null,"content_length":"8269","record_id":"<urn:uuid:05e3ce43-34ea-491c-9927-e8aa59171b50>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
HCSSiM Workshop, day 12
July 15, 2012

This is a continuation of this, where I take notes on my workshop at HCSSiM. We originally defined $\mathbb{C}$ as $\mathbb{R}[x]/(x^2+1),$ and now we re-introduce it as the set of matrices of the form
$\begin{pmatrix} a & -b \\ b & a \end{pmatrix},$
with the obvious map where $a + bi$ is sent to exactly that matrix.
After we reminded people about matrix addition and multiplication, we showed this was an injective homomorphism under addition and also jived with the multiplication that we knew coming from $\mathbb{C}.$ Overall our lesson was not so different from this one.
Then we talked about actions on the plane, including translations and scaling, and showed that under the above map, "multiplication by $r e^{i \theta}$" and multiplication by its corresponding matrix give us the same action: a rotation by $\theta$ combined with a scaling by $r$.

Platonic Solids
We went over the 5 platonic solids from yesterday - we'd proved it was impossible to have more than 5, and now it was time to show all 5 are actually possible! That's when we whipped out Wayne Daniel's "all five" puzzle:
We then introduced the concept of dual graph, and showed which platonic solids go to which under this map. We saw an example of a toy which flips from one platonic solid (cube) to its dual (octahedron) when you toss it in the air, the Hoberman Flip Out. Here it is in between states:
Finally, we talked about symmetries of regular polyhedra and saw how we could embed an action into the group of symmetries on its vertices. So the symmetries of the tetrahedron form a subgroup of $S_4$. It's a lot easier to understand how to play with 4 numbers than to think about moving around a toy, so this is a good thing. Although it's more fun to play with a toy.
Then one of our students, Milo, showed us how he projected a tiling of the plane onto the surface of a sphere using the language "processing". Unbelievable.
After that we went to an origami workshop to construct yellow pigs as well as platonic solid type structures.

1. July 16, 2012 at 5:43 pm |
Does your version of the complex number field come equipped with a distinguished square root of minus one? As you almost certainly know, some mathematicians expend nontrivial effort to avoid making such a choice. □
July 16, 2012 at 8:37 pm |
Why would anyone waste time on that? ☆
July 17, 2012 at 6:14 am |
That is a good question - if you happen to meet the author of this MathOverflow question, you might ask him: http://mathoverflow.net/questions/45638/
2. September 12, 2012 at 3:06 am |
It is definitely time to refresh my math skills. I received an A+ in my trig class, just wish I still knew the material.
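A quick numerical check of the matrix picture described above (my own sketch, not from the workshop) confirms that multiplying complex numbers matches multiplying their matrices:

    import numpy as np

    def M(a, b):                      # a + bi  ->  [[a, -b], [b, a]]
        return np.array([[a, -b], [b, a]], dtype=float)

    z, w = complex(2, 3), complex(-1, 4)
    left = M(z.real, z.imag) @ M(w.real, w.imag)
    right = M((z * w).real, (z * w).imag)
    print(np.allclose(left, right))   # True: both sides equal M(-14, 5)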
{"url":"http://mathbabe.org/2012/07/15/hcssim-workshop-day-12/","timestamp":"2014-04-16T15:59:51Z","content_type":null,"content_length":"51943","record_id":"<urn:uuid:550a75b5-1602-4a8c-8415-26a865dda97b>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
Horatio Scott Carslaw Born: 12 February 1870 in Helensburgh, Dumbartonshire, Scotland Died: 11 November 1954 in Burradoo, Bowral, New South Wales, Australia Horatio Carslaw was the son of the Rev William Henderson Carslaw. He attended Glasgow Academy and entered Glasgow University in 1887 to study mathematics and physics. The breadth of his course in comparison to courses of today is shown by the fact that he also studied Latin, Greek, Moral Philosophy and Logic. He received his MA degree in 1891 with First Class Honours in mathematics and From Glasgow Carslaw went to Emmanuel College, Cambridge, where he graduated in 1894. Returning to the University of Glasgow as a lecturer in 1896, he made a visit to Göttingen during session 1896-97, where he worked with Sommerfeld, as well as visiting Rome and Palermo. In 1903 Carslaw, then 33 years old, moved from his native Scotland to Australia where he had been offered the Chair of Mathematics at the University of Sydney. He had some impressive supporters. Thomson described his teaching as follows:- His zeal and high acquirements as a mathematician, and his personal qualities, render him, in my opinion, remarkably well fitted for mathematical teaching in universities ... Thomson also said he was:- ... an enthusiast in original research, and having studied the mathematical papers and memoirs bearing on Fourier's series and their application in mathematical physics, purposes writing a book on the subject. It is doubtful whether his research record would put him in line for a chair today since before taking up the chair in Sydney he had published only four papers. However, he published two important books within three years of being appointed to Sydney. One was An introduction to the infinitesimal calculus published in 1905. Deakin remarks in [4] that the book was probably influenced by Hardy's lectures, saying:- ... it would be a brave historian indeed who saw Carslaw's 'little book' as being better than Hardy's tome, and a downright foolish one to claim it as more influential, nevertheless it did come The second book was Introduction to the theory of Fourier's series and integrals and the mathematical theory of the conduction of heat. This was to be the main area of Carslaw's research throughout his life. Jaeger in [5], [2] and [6] claims this book to be Carslaw's most important contribution but Deakin in [4] claims it to be his later work on Laplace transforms. The fact that Jaeger himself collaborated with Carslaw on the Laplace transform work may explain why there are differing opinions here. Carslaw married Ethel Maude Clarke from Rupertswood, Victoria, in 1907 but, sadly, she died within a year of their marriage. Jaeger and Carslaw published Operational methods in applied mathematics in 1941. This put Heaviside's operational calculus on a rigorous footing following the approach proposed by Gustav Doetsch. Deakin writes in [4]:- In 1935, the Laplace transform was a topic of frontline research, by 1955 it was standard fare in undergraduate courses. No other advance has achieved such ready acceptance, and Carslaw and Jaeger's text can take a great deal of the credit. In fact this text was published six years after Carslaw retired. His final work published in 1947 was on income tax scales, one of the interests he had throughout his life [1]:- In his old age he interested himself in formulas designed to help in the just and efficient collection of income tax. 
Other topics that interested Carslaw throughout his career, which we have not touched on above, included non-Euclidean geometry, Green's functions and the history of Napier's logarithms.
Article by: J J O'Connor and E F Robertson October 2003
MacTutor History of Mathematics
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Printonly/Carslaw.html","timestamp":"2014-04-19T14:31:39Z","content_type":null,"content_length":"4827","record_id":"<urn:uuid:28cae710-27b5-45bc-803e-65e3276595cf>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
Residue Theory

02-25-2012, 04:02 PM #1
New Member, Join Date: Jan 2012

I am having trouble understanding how to calculate residues. I have the function
$f(z)=\frac{z^a}{1+z^2}$
and am supposed to find
$\int^\infty_0 f(z)\,dz$
using the keyhole contour. So I found singularities at $+i$ and $-i$ and am trying to find the residues there using
$\lim_{z\to+i}\frac{z^a(z-i)}{1+z^2}$ and $\lim_{z\to-i}\frac{z^a(z+i)}{1+z^2}.$
Is this the right way to find residues? What are these limits?
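Yes, that is the standard simple-pole residue formula. Since $1+z^2 = (z-i)(z+i)$, the factor cancels, and with the branch forced by the keyhole contour ($0 \le \arg z < 2\pi$, so $i = e^{i\pi/2}$ and $-i = e^{3i\pi/2}$) the limits are

$\lim_{z\to i}\frac{z^a}{z+i} = \frac{e^{i\pi a/2}}{2i}, \qquad \lim_{z\to -i}\frac{z^a}{z-i} = -\frac{e^{3i\pi a/2}}{2i}.$

Feeding these into the keyhole identity $(1-e^{2\pi i a})\int_0^\infty \frac{x^a}{1+x^2}\,dx = 2\pi i \,(\text{sum of residues})$ and simplifying gives, for $-1 < a < 1$, the value $\frac{\pi}{2\cos(\pi a/2)}$, a useful target to check your work against.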
{"url":"http://www.freemathhelp.com/forum/threads/74647-Residue-Theory?p=307054&mode=threaded","timestamp":"2014-04-18T03:09:47Z","content_type":null,"content_length":"69102","record_id":"<urn:uuid:f001958b-e8c9-4ae2-a484-ed9286b2cca0>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
DMOZ - Computers: Programming: Languages: Fortran: Source Code Please suggest only websites containing source code for solving ordinary differential equations. Full libraries should be submitted to the Libraries category instead. Sites containing additional source code for solving other types of problems should be suggested for inclusion in the parent category. Fortran routines, distributed in source form, for solving systems of ordinary differential equations (ODEs).
{"url":"http://www.dmoz.org/desc/Computers/Programming/Languages/Fortran/Source_Code","timestamp":"2014-04-17T16:29:41Z","content_type":null,"content_length":"17224","record_id":"<urn:uuid:43fdd886-e4d8-4843-a244-86cc06df3aa0>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: Help with a construction
Replies: 3. Last Post: Nov 24, 1997 4:48 PM

Re: Help with a construction
Posted: Nov 11, 1997 5:36 PM

Gilles - Without giving it away - there are a couple of approaches I'd try. I'm leaving the details out on purpose so you still have work to do.
1) You can create a pair of similar triangles. One with two sides of 1 and X and the other with the corresponding sides of X and X-squared.
2) You can use the theorem that says two intersecting chords (AB and CD) of a circle cut each other (say at E) so that the products of the parts are the same (AE * EB = CE * ED). If you choose your chords to be perpendicular and the parts to be convenient values (AE = 1, CE = ED = X) you can construct the circle and construct EB = X-squared.

At 4:54 PM -0500 11/11/97, Gilles G. Jobin wrote:
>Please excuse my very poor English.
>My question might be trivial for many of you, but I really need to know how to construct a segment measuring the square of the length of a given segment, using only Euclid's tools.
>Thank you,
>Gilles Jobin

Michael Thwaites <Michael.Thwaites@ucop.edu>
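Spelling out the second approach just enough to see why it works: the intersecting-chords theorem gives $AE \cdot EB = CE \cdot ED$, so with $AE = 1$ and $CE = ED = X$ we get $1 \cdot EB = X \cdot X$, i.e. $EB = X^2$. The first approach rests on the same proportion, read off the similar triangles: $\frac{1}{X} = \frac{X}{X^2}$.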
{"url":"http://mathforum.org/kb/thread.jspa?threadID=354371&messageID=1084163","timestamp":"2014-04-17T19:35:21Z","content_type":null,"content_length":"20467","record_id":"<urn:uuid:2e75946b-3d77-42d8-95bf-e734b78da894>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Residuated Lattices
Proving Standard Results about Residuated Lattices using the Gentzen system
You can also test yourself by deciding the following randomly generated RL equations. To display the proofs in a linear fashion, rather than the more usual Gentzen style, the proof trees are displayed from the root downwards, with the depth of nodes indicated by the indentation level. (Only the branching rules increase the indentation depth.) Each sequent is labeled by the name of the rule that produced it. When a terminal node is reached, the indentation level decreases again. With a bit of practice, these proofs are actually easier to read (and write) than the standard tree-like presentation. The symbol !- can be interpreted as "not less or equal", in which case each line implies the one below it (if at the same indentation level) or one of the two indented alternatives (in the case of a branching rule). With this interpretation, the terminal nodes represent contradictions, reminiscent of a tableau-style proof. Here are some standard (in)equational results about residuated lattices, followed by a list of their Gentzen-system proofs. Here is a random inequality in the language of residuated lattices, followed by a proof or proof attempt. If the inequality does not hold then the steps show all of the potential proof-tree that was checked to reach this conclusion. On the other hand, a successful proof only shows those branches which demonstrate that the result holds.
{"url":"http://www1.chapman.edu/~jipsen/reslat/rlequations.htm","timestamp":"2014-04-21T12:09:38Z","content_type":null,"content_length":"3842","record_id":"<urn:uuid:f47f487f-0741-4041-b21e-431c26dd710a>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Question about indexing
Keith Goodman kwgoodman@gmail.... Thu May 29 22:08:51 CDT 2008

On Thu, May 29, 2008 at 6:32 PM, Alan G Isaac <aisaac@american.edu> wrote:
> On Thu, 29 May 2008, Keith Goodman apparently wrote:
> > >>> a[[0,1]]
> > That one looks odd. But it is just shorthand for:
> > >>> a[[0,1],:]
> Do you mean that ``a[[0,1],:]`` is a more primitive expression than ``a[[0,1]]``? In what sense, and does it ever matter?
> Is ``a[[0,1]]`` completely equivalent to ``a[[0,1],...]`` and ``a[[0,1],:]``?

I can see how the difference between the two is not obvious at first, especially if you come from Octave/Matlab. The first example has an obvious i and j. But the second example doesn't. So I tried to point out that i=[0,1] and j=:.
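A runnable illustration of the point, for any 2-D array:

    import numpy as np
    a = np.arange(12).reshape(3, 4)
    print(a[[0, 1]])        # rows 0 and 1; shorthand for a[[0, 1], :]
    print(a[[0, 1], :])     # identical
    print(a[[0, 1], ...])   # identical here too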
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-May/034623.html","timestamp":"2014-04-20T16:18:52Z","content_type":null,"content_length":"3440","record_id":"<urn:uuid:93d891ee-0079-4ab6-955a-3ebf3b334f3b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
In the complex plane, we define $\cos z=\frac{e^{iz}+e^{-iz}}{2}$. Letting $z=ix$, we have $\cos ix=\frac{e^{i(ix)}+e^{-i(ix)}}{2}=\frac{e^{i^2x}+e^{-i^2x}}{2}=\frac{e^{-x}+e^{-(-x)}}{2}=\frac{e^x+e^{-x}}{2}=\cosh x$
Thus, when evaluating $\cos ix$ at an angle $x$, it's the same as evaluating $\cosh x$.

An angle can't be imaginary! But the "trig functions", sine, cosine, etc. are not always related to angles. Sine and cosine, in particular, are the "ideal" periodic functions and are often used to model periodic phenomena that have nothing to do with angles (although engineers will still insist upon talking about the "phase angle" of an electrical current where there is no actual angle involved).
The easiest thing to do with complex numbers is to multiply and add them, so for more complicated functions it is best to work with power series expansions:
$e^x= 1+ x+ \frac{1}{2}x^2+ \cdots+ \frac{1}{n!}x^n+ \cdots$
$\cos(x)= 1- \frac{1}{2!}x^2+ \cdots+ \frac{(-1)^n}{(2n)!}x^{2n}+ \cdots$
$\sin(x)= x- \frac{1}{3!}x^3+ \cdots+ \frac{(-1)^n}{(2n+1)!}x^{2n+1}+ \cdots$
If you replace x by "ix" in the first, you get
$e^{ix}= 1+ ix+ \frac{1}{2}(ix)^2+ \cdots+ \frac{1}{n!}(ix)^n+ \cdots$
But $i^2= -1$, $i^3= (i^2)(i)= -i$, $i^4= (i^2)(i^2)= (-1)(-1)= 1$ and then $i^5= (i^4)i= i$, etc.
$e^{ix}= 1+ ix- \frac{1}{2}x^2- i\frac{1}{3!}x^3+ \frac{1}{4!}x^4+ \cdots$
so all of the "odd" terms have "i" while the "even" terms do not. Separating "real" and "imaginary" parts,
$e^{ix}= (1- \frac{1}{2}x^2+ \frac{1}{4!}x^4+ \cdots)+ i(x- \frac{1}{3!}x^3+ \frac{1}{5!}x^5+ \cdots)$
or $e^{ix}= \cos(x)+ i \sin(x)$.
Replacing x by -x, $e^{-ix}= \cos(-x)+ i \sin(-x)= \cos(x)- i \sin(x)$, since $\cos(-x)= \cos(x)$ and $\sin(-x)= -\sin(x)$.
Adding those two equations, $e^{ix}+ e^{-ix}= \cos(x)+ i \sin(x)+ \cos(x)- i \sin(x)= 2 \cos(x)$ so $\cos(x)= \frac{e^{ix}+ e^{-ix}}{2}$.
In particular, if we now replace x by ix, we have $\cos(ix)= \frac{e^{-x}+ e^{x}}{2}= \cosh(x)$. (Which is why you titled this "hyperbolics", right?)
Last edited by mr fantastic; September 6th 2009 at 04:20 AM. Reason: Added a latex tag.

What does the term $e^{ix}$ mean? What is the significance of raising something to an imaginary power?
Last edited by mr fantastic; September 6th 2009 at 04:20 AM. Reason: Fixed a latex tag
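A short numerical check of the identity $\cos(ix) = \cosh x$ (my addition, not part of the thread):

    import cmath, math
    x = 0.7
    print(cmath.cos(1j * x))   # approximately (1.2551690 + 0j)
    print(math.cosh(x))        # approximately 1.2551690, the same value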
{"url":"http://mathhelpforum.com/differential-geometry/100420-hyperbolics.html","timestamp":"2014-04-17T13:44:59Z","content_type":null,"content_length":"50458","record_id":"<urn:uuid:b5b451ab-b403-4693-a487-8997b386878a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
Properties of Rational Numbers
2.1: Properties of Rational Numbers
Difficulty Level: At Grade. Created by: CK-12
Practice Properties of Rational Numbers

What if you wanted to order the numbers $2$, $-\frac{5}{2}$, and $\frac{5}{2}$?

Watch This
CK-12 Foundation: 0201S Integers and Rational Numbers

Try This
To make graphing rational numbers easier, try using the number line generator at http://themathworksheetsite.com/numline.html. You can use it to create a number line divided into whatever units you want, as long as you express the units in decimal form.

One day, Jason leaves his house and starts walking to school. After three blocks, he stops to tie his shoe and leaves his lunch bag sitting on the curb. Two blocks farther on, he realizes his lunch is missing and goes back to get it. After picking up his lunch, he walks six more blocks to arrive at school. How far is the school from Jason's house? And how far did Jason actually walk to get there?

Graph and Compare Integers
Integers are the counting numbers (1, 2, 3...), the negative opposites of the counting numbers (-1, -2, -3...), and zero. There are an infinite number of integers, and examples are 0, 3, 76, -2, -11, and 995.

Example A
Compare the numbers 2 and -5.
When we plot numbers on a number line, the greatest number is farthest to the right, and the least is farthest to the left. In the diagram above, we can see that 2 is farther to the right on the number line than -5, so we say that 2 is greater than -5. We use the symbol "$>$" to mean "greater than", and we write $2 > -5$.

Classifying Rational Numbers
When we divide an integer $a$ by another integer $b$ (where $b$ is not zero), we get a rational number. It's called this because it is the ratio of one number to another, and we can write it in fraction form as $\frac{a}{b}$. (The top number, $a$, is called the numerator and the bottom number, $b$, is called the denominator.)
You can think of a rational number as a fraction of a cake. If you cut the cake into $b$ slices, your share is $a$ of those slices.
For example, when we see the rational number $\frac{1}{2}$, we imagine cutting the cake into 2 parts; our share is 1 of those parts, which is half of the cake.
With the rational number $\frac{3}{4}$, we cut the cake into 4 parts and our share is 3 of those parts.
The rational number $\frac{9}{10}$ represents 9 slices of a cake that has been cut into 10 slices.
Proper fractions are rational numbers where the numerator is less than the denominator. A proper fraction represents a number less than one. Improper fractions are rational numbers where the numerator is greater than or equal to the denominator. An improper fraction can be rewritten as a mixed number, an integer plus a proper fraction. For example, $\frac{9}{4}$ can be written as the mixed number $2\frac{1}{4}$.
Equivalent fractions are two fractions that represent the same amount. For example, look at a visual representation of the rational numbers $\frac{2}{4}$ and $\frac{1}{2}$.
You can see that the shaded regions are the same size, so the two fractions are equivalent. We can convert one fraction into the other by reducing the fraction, or writing it in lowest terms. To do this, we write out the prime factors of both the numerator and the denominator and cancel matching factors that appear in both the numerator and denominator.
$\frac{2}{4} = \frac{2 \cdot 1}{2 \cdot 2 \cdot 1} = \frac{1}{2 \cdot 1} = \frac{1}{2}$
Reducing a fraction doesn't change the value of the fraction; it just simplifies the way we write it. Once we've canceled all common factors, the fraction is in its simplest form.

Example B
Classify and simplify the following rational numbers:
a) $\frac{3}{7}$
b) $\frac{9}{3}$
a) 3 and 7 are both prime, so we can't factor them.
That means $\frac{3}{7}$ is already in its simplest form; it is a proper fraction.
b) $\frac{9}{3}$ is an improper fraction, since $9 > 3$. To simplify it, we factor and cancel: $\frac{9}{3} = \frac{3 \cdot 3}{3 \cdot 1} = \frac{3}{1} = 3$.

Order Rational Numbers
Ordering rational numbers is simply a matter of arranging them by increasing value: least first and greatest last.

Example C
Put the following fractions in order from least to greatest: $\frac{1}{2}, \frac{3}{4}, \frac{2}{3}$
Solution: $\frac{1}{2} < \frac{2}{3} < \frac{3}{4}$
Simple fractions are easy to order: we just know, for example, that one-half is greater than one quarter, and that two thirds is bigger than one-half. But how do we compare more complex fractions?

Example D
Which is greater, $\frac{3}{7}$ or $\frac{4}{9}$?
In order to determine this, we need to rewrite the fractions so we can compare them more easily. If we rewrite them as equivalent fractions that have the same denominators, then we can compare them directly. To do this, we need to find the lowest common denominator (LCD), or the least common multiple of the two denominators. The lowest common multiple of 7 and 9 is 63. Our fraction will be represented by a shape divided into 63 sections. This time we will use a rectangle cut into $9 \times 7 = 63$ pieces.
7 divides into 63 nine times, so $\frac{3}{7} = \frac{9 \cdot 3}{9 \cdot 7} = \frac{27}{63}$.
We can multiply the numerator and the denominator both by 9 because that's really just the opposite of reducing the fraction: to get back from $\frac{27}{63}$ to $\frac{3}{7}$, we would divide both by 9.
The fractions $\frac{a}{b}$ and $\frac{c \cdot a}{c \cdot b}$ are equivalent as long as $c \neq 0$.
Therefore, $\frac{27}{63}$ is equivalent to $\frac{3}{7}$.
9 divides into 63 seven times, so $\frac{4}{9} = \frac{7 \cdot 4}{7 \cdot 9} = \frac{28}{63}$.
By writing the fractions with a common denominator of 63, we can easily compare them. If we compare the 28 shaded boxes out of 63 (our image of $\frac{4}{9}$) with the 27 shaded boxes out of 63 (our image of $\frac{3}{7}$), we can see which fraction is larger.
Since $\frac{28}{63}$ is greater than $\frac{27}{63}$, $\frac{4}{9}$ is greater than $\frac{3}{7}$.

Graph and Order Rational Numbers
To plot non-integer rational numbers (fractions) on the number line, we can convert them to mixed numbers (graphing is one of the few occasions in algebra when it's better to use mixed numbers than improper fractions), or we can convert them to decimal form.

Example E
Plot the following rational numbers on the number line.
a) $\frac{2}{3}$
b) $-\frac{3}{7}$
If we divide up the number line into sub-intervals based on the denominator of the fraction, we can look at the fraction's numerator to determine how many of these sub-intervals we need to include.
a) $\frac{2}{3}$ sits two thirds of the way from 0 to 1.
b) $-\frac{3}{7}$ sits three sevenths of the way from 0 to -1.

Watch this video for help with the Examples above.
CK-12 Foundation: Integers and Rational Numbers

• Integers (or whole numbers) are the counting numbers (1, 2, 3, ...), the negative counting numbers (-1, -2, -3, ...), and zero.
• A rational number is the ratio of one integer to another, like $\frac{3}{5}$ or $\frac{a}{b}$. The top number is called the numerator and the bottom number (which can't be zero) is called the denominator.
• Proper fractions are rational numbers where the numerator is less than the denominator.
• Improper fractions are rational numbers where the numerator is greater than the denominator.
• Equivalent fractions are two fractions that equal the same numerical value. The fractions $\frac{a}{b}$ and $\frac{c \cdot a}{c \cdot b}$ are equivalent as long as $c \neq 0$.
• To reduce a fraction (write it in simplest form), write out all prime factors of the numerator and denominator, cancel common factors, then recombine.
• To compare two fractions it helps to write them with a common denominator.

Guided Practice
1. Classify and simplify the rational number $\frac{50}{60}$.
2. Plot the rational number $\frac{17}{5}$ on a number line.
Answers:
1. $\frac{50}{60}$ is a proper fraction. In simplest form, $\frac{50}{60} = \frac{5 \cdot 5 \cdot 2}{5 \cdot 3 \cdot 2 \cdot 2} = \frac{5}{3 \cdot 2} = \frac{5}{6}$.
2. $\frac{17}{5}$ is an improper fraction; as the mixed number $3\frac{2}{5}$, it sits two fifths of the way between 3 and 4 on the number line. Another way to graph this fraction would be as a decimal: $3\frac{2}{5} = 3.4$.

Practice
1. Solve the problem posed in the Introduction.
2. The tick-marks on the number line represent evenly spaced integers. Find the values of $a, b, c, d$ and $e$.
In 3-5, determine what fraction of the whole each shaded region represents.
For 6-10, place the following sets of rational numbers in order, from least to greatest.
6. $\frac{1}{2}, \frac{1}{3}, \frac{1}{4}$
7. $\frac{1}{10}, \frac{1}{2}, \frac{2}{5}, \frac{1}{4}, \frac{7}{20}$
8. $\frac{39}{60}, \frac{49}{80}, \frac{59}{100}$
9. $\frac{7}{11}, \frac{8}{13}, \frac{12}{19}$
10. $\frac{9}{5}, \frac{22}{15}, \frac{4}{3}$
For 11-15, find the simplest form of the following rational numbers.
11. $\frac{22}{44}$
12. $\frac{9}{27}$
13. $\frac{12}{18}$
14. $\frac{315}{420}$
15. $\frac{244}{168}$
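Python's fractions module mirrors all of this and is handy for checking the practice problems (a sketch added for illustration):

    from fractions import Fraction
    print(Fraction(50, 60))                 # 5/6, reduced automatically
    print(Fraction(3, 7) < Fraction(4, 9))  # True, matching Example D
    print(float(Fraction(17, 5)))           # 3.4, the decimal from Guided Practice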
{"url":"http://www.ck12.org/book/CK-12-Algebra-I-Concepts/r1/section/2.1/","timestamp":"2014-04-21T15:21:33Z","content_type":null,"content_length":"171198","record_id":"<urn:uuid:543c4d0b-9744-44a8-855c-63f57e7d5c2a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
Baylor University || Mathematics Department || Mark Sepanski

Mark Sepanski
Professor of Mathematics
Ph.D., M.I.T., 1990-1994 (Advisor: B. Kostant)
B.S., Purdue University, 1987-1990

Dr. Sepanski joined the Baylor faculty in 1997. Prior to coming to Baylor, he taught at Oklahoma State University (1995-1997) and Cornell University (1994-1995). Though he was born in Minnesota and ended up in Indiana, he mostly grew up in Wisconsin. He has been married to Laura Sepanski since 1990 and has three delightful children: Sarah, Benjamin, and Shannon. Besides mathematics, he enjoys gardening with native Texas plants, playing the guitar, Tae Kwon Do, rock climbing, hiking, camping, biking, reading fantasy, spending time with his family, and hopes to dabble with cooking once his children grow taste buds.

Academic Interests and Research:
Dr. Sepanski's research is in Lie theory and, in particular, in representation theory of real reductive Lie groups.

Selected Research Articles:
"Positivity of zeta distributions and small unitary representations," joint with L. Barchini and R. Zierau, In: The ubiquitous heat kernel, 1-46, Contemp. Math. 398, Amer. Math. Soc., Providence, RI.
"On SL(2,R) Lie symmetries and representation theory," joint with R. Stanke, J. Funct. Anal. 224 (2005), 1-21.
"Infinite commutative product formulas for relative extremal projectors," joint with C. Conley, Adv. Math. 196 (2005), 52-77.
"Singular projective bases and the generalized Bol operator," joint with C. Conley, Adv. Appl. Math. 33 (2004), 158-191.
"K-types of SU(1,n) representations and restriction of cohomology," Pacific J. Math. 192 (2000), 385-398.
"Block-compatible metaplectic cocycles," joint with W.D. Banks and J. Levy, J. Reine Angew. Math. 507 (1999), 131-163.
"Closure ordering and the Kostant-Sekiguchi correspondence," joint with D. Barbasch, Proc. Amer. Math. Soc. 126 (1998), 311-317.
Compact Lie Groups, Graduate Texts in Mathematics, Springer-Verlag, 2006.

Current Ph.D. Students:

Teaching Interests:
Dr. Sepanski's teaching interests range from introductory calculus classes for undergraduates to specialized courses for Ph.D. students. He just finished writing a textbook on undergraduate abstract algebra.

Courses taught at Baylor:
• MTH 1304 - Pre-Calculus
• MTH 1321 - Calculus I
• MTH 1322 - Calculus II
• MTH 2311 - Linear Algebra
• MTH 2321 - Calculus III
• MTH 3312 - Foundations of Combinatorics and Algebra
• MTH 3323 - Introduction to Analysis
• MTH 3325 - Ordinary Differential Equations
• MTH 4314 - Abstract Algebra
• MTH 4326 - Advanced Calculus I
• MTH 4327 - Advanced Calculus II
• MTH 5323 - Theory of Functions of Real Variables I
• MTH 5324 - Theory of Functions of Real Variables II
• MTH 5330 - Topology
• MTH 5340 - Differential Geometry
• MTH 5331 - Algebraic Topology I
• MTH 5332 - Algebraic Topology II
• MTH 6340 - Compact Lie Groups
• MTH 6341 - Lie Algebras
• MTH 6V43 - Advanced Topics in Representation Theory
{"url":"http://www.baylor.edu/math/index.php?id=54018","timestamp":"2014-04-20T16:43:00Z","content_type":null,"content_length":"20114","record_id":"<urn:uuid:c8416226-e9c8-462c-b4ca-35e6cf522f9b>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
Web Resources

Lesson Plans

Yummy Gummy Subtraction
This Web site gives a lesson plan using gummy bears to represent subtraction in simple math equations. Teachers can adapt the lesson to also include addition, multiplication or division.

Learning Addition and Subtraction
Lessons and videos on addition and subtraction.

IXL Math - Addition Sentences
This interactive web site asks students to identify the graphic that represents the specified addition number sentence, up to five.

Learning Activities

Math: Addition and Subtraction
This is a lesson plan for helping students develop an awareness of addition and subtraction.

Sensational Subtraction Centers
This is a resource list for manipulative based centers that allow students to model and illustrate subtraction using numbers less than ten.
{"url":"http://alex.state.al.us/weblinks_category.php?stdID=53524","timestamp":"2014-04-17T06:50:59Z","content_type":null,"content_length":"36487","record_id":"<urn:uuid:6f1f9aeb-5229-4dff-9436-33adec08fefa>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Can anyone suggest a few textbooks that might be helpful in either setting a good foundation for this course or following along with the course material itself? Thanks!

Although the course is designed to be self-contained, I found myself wanting to supplement it with a textbook. Poking around a bit, I got the impression that when the course was originally taught, the students were assigned Calculus with Analytic Geometry by George F. Simmons (2d ed.). A new copy runs about $200. I would rate the book excellent, one of the best math books I've seen. The book offers a huge number of exercises, and includes answers (but no explanations) to the odd-numbered exercises. If you're willing to spend another $100 or so, you can get the Student Solutions Manual, which shows how to work through those exercises (still just the odd-numbered ones, but that's plenty). You may or may not feel the need for this. To me it was worth it. As for setting a foundation, the first chapter of this text (about 50 pages) offers a condensed review of many of the key points, and other topics are reviewed when needed (the chapter that derives calculus formulas for exponential and logarithm functions begins with a review of those functions, and same for trig functions). The same author has an inexpensive paperback called Precalculus Mathematics in a Nutshell, which offers a condensed review of pretty much all you're expected to know as you begin to tackle calculus. I worked through this and found it helpful, but much of the same material appears in the main text. The course doesn't exactly follow the Simmons text but I suspect there isn't any text out there that corresponds to the course. The main differences I've noticed so far are that Prof. Jerison covers all derivatives before getting into integrals, and also includes much more on the topic of linear and quadratic approximation than you'll find in the text (or at least, more than I've found so far in the text). If you're on a tight budget, the CK-12 math books (including algebra, trig and calculus) are available in free electronic format. You can get them in the Amazon Kindle store and if you don't have a Kindle, download a free Kindle app to read them on your computer or tablet. But if you can afford the Simmons text, you'll learn the material more easily and quickly.

Also, there is Strang's calculus textbook on MIT OCW; look for it in supplementary resources. And of course http://www.math.wisc.edu/~keisler/calc.html which is a kind of weird calculus textbook that approaches calculus from a very different perspective than this course.

Thomas' Calculus

My math skills were weak. I tried to take this course, but some concepts were difficult to understand; so I found a useful book about Precalculus. I read it and did the exercises, and it was an enormous help; now I feel confident to take this course. Here is the link of the book: http://www.math.washington.edu/~m120/TheBook/TB2011-12.pdf
It's not a textbook, but I found the trigonometry and calculus lectures in the Khan Academy playlists of the same names on YouTube to be enough of a grounding that I didn't feel too lost. http://www.youtube.com/channel/UC4a-Gbdw7vOaccHmFo40b9g (I didn't know what a sine or tangent was before watching them.)
{"url":"http://openstudy.com/updates/51e23891e4b076f7da42572f","timestamp":"2014-04-16T13:18:36Z","content_type":null,"content_length":"47945","record_id":"<urn:uuid:b6ac9056-a412-4ebe-9970-95b36ac8cf7c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Evaluate Stock Price With Reverse-Engineering DCF
If you've ever thumbed through a stock analyst's report, you will probably have come across a stock valuation technique called discounted cash flow analysis, or DCF for short. DCF entails forecasting future company cash flows, applying a discount rate according to the company's risk and coming up with a precise valuation or "target price" for the stock. The trouble is that the job of predicting future cash flows requires a healthy dose of guesswork. However, there is a way to get around this problem. By working backward - starting with the current share price - we can figure out how much cash flow the company would be expected to make to generate its current valuation. Depending on the plausibility of the cash flows, we can decide whether the stock is worth its going price.
DCF Sets Target Prices
There are basically two ways of valuing a stock. The first, "relative valuation," involves comparing a company with others in the same business area, often using a price ratio such as price/earnings, price/sales, price/book value and so on. It is a good approach for helping analysts decide whether a stock is cheaper or more expensive than its peers. However, it's a less reliable method of determining what the stock is really worth on its own. As a consequence, many analysts prefer the second approach, DCF analysis, which is supposed to deliver an "absolute valuation" or bona fide price on the stock. The approach involves forecasting how much free cash flow the company will produce for investors over, say, the next 10 years, and then calculating how much investors should pay for that stream of free cash flows based on an appropriate discount rate. Depending on whether it is above or below the stock's current market price, the DCF-produced target price tells investors whether the stock is currently overvalued or undervalued.
In theory, DCF sounds great, but like ratio analysis, it has its fair share of challenges for analysts. Among the challenges is the tricky task of coming up with a discount rate, which depends on a risk-free interest rate, the company's cost of capital and the risk its stock faces. But an even bigger problem is forecasting reliable future free cash flows. While trying to predict next year's numbers can be hard enough, modeling precise results over a decade is next to impossible. No matter how much analysis you do, the process usually involves as much guesswork as science. What's more, even a small, unexpected event can alter cash flows and make your target price obsolete.
Reverse-Engineering DCF
Discounted cash flow, however, can be put to use in another way that gets around the tricky problem of accurately estimating future cash flows. Rather than starting your analysis with an unknown, a company's future cash flows, and trying to arrive at a target stock valuation, start instead with what you do know with certainty about the stock: its current market valuation. By working backward, or reverse-engineering the DCF from its stock price, we can work out the amount of cash that the company will have to produce to justify that price. If the current price assumes more cash flows than what the company can realistically produce, then we can conclude that the stock is overvalued. If the opposite is the case, and the market's expectations fall short of what the company can deliver, then we should conclude that it's undervalued.
An Example
Here's a very simple example of how to think through a reverse-engineered DCF.
Consider a company that sells widgets. We know for certain that its stock is at $14 per share and, with a total share count of 100 million, the company has a market capitalization of $1.4 billion. It has no debt, and we assume that its cost of equity is 12%. This year the company delivered $5 million in free cash flow.
What we don't know is how much the company's free cash flow will have to grow year after year for 10 years to justify its $14 share price. To answer the question, let's employ a simple 10-year DCF forecast model that assumes the company can sustain a long-term annual cash flow growth rate (also known as the terminal growth rate) of 3.0% after 10 years of more rapid growth. Of course, you can create multi-stage models that incorporate varying growth rates through the 10-year period, but for the purpose of keeping things simple, let's stick to a single-stage model.
Instead of setting up the DCF calculations yourself, spreadsheets that only require the inputs are usually already available. So using a DCF spreadsheet, we can reverse-engineer the necessary growth back to the share price. Lots of websites provide a free DCF template that you can download, such as office.microsoft.com and www.stockodo.com.
Take the inputs that are already known: $5 million in initial free cash flow, 100 million shares, 3% terminal growth rate, 12% discount rate (assumed) and plug the appropriate numbers into the spreadsheet. After entering the inputs, the goal is to change the growth rate percentage in years 1-5 and 6-10 that will give you an intrinsic value per share (IV/share) of approximately $14. After a bit of trial and error, we come up with a 50% growth rate for the next 10 years, which results in a $14 share price. In other words, pricing the stock at $14 per share, the market is expecting that the company will be able to grow its free cash flow by about 50% per year for the next 10 years.
The next step is to apply your own knowledge and intuition to judge whether 50% growth performance is reasonable to expect. Looking at the company's past performance, does that growth rate make sense? Can we expect a widget company to more than double its free cash flow output every two years? Will the market be big enough to support that level of growth? Based on what you know about the company and its market, does that growth rate seem too high, too low or just about right? The trick is to consider as many different plausible conditions and scenarios until you can say with confidence whether the market's expectations are correct and whether you should invest in it.
The Bottom Line
Reverse-engineered DCF doesn't eliminate all the problems of DCF, but it sure helps. Instead of hoping that our free cash flow projections are correct and struggling to come up with a precise value for the stock, we can work backward using information that we already know to make a general judgment about the stock's value. Of course, the technique doesn't completely free us from the job of estimating cash flows. To assess the market's expectations, you still need to have a good sense of what conditions are required for the company to deliver them. That said, it is a much easier task to judge the plausibility of a set of forecasts rather than having to come up with them on your own.
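The trial-and-error search described above is easy to automate. The following is a minimal sketch of the same calculation — my own, not one of the spreadsheet templates mentioned above, and the function names are made up for illustration. It assumes a single growth rate for all ten years and a Gordon-growth terminal value, then bisects on the growth rate until the model price matches the market price.

```python
# A minimal sketch of the reverse-engineered DCF described above.
# Assumptions (not from the article's spreadsheet): one growth rate
# for years 1-10, then a Gordon terminal value growing at 3%.

def value_per_share(g, fcf0=5e6, shares=100e6, r=0.12, g_term=0.03, years=10):
    """Discounted value of 10 years of FCF plus a terminal value."""
    pv = 0.0
    fcf = fcf0
    for t in range(1, years + 1):
        fcf *= 1 + g
        pv += fcf / (1 + r) ** t
    terminal = fcf * (1 + g_term) / (r - g_term)   # Gordon growth model
    pv += terminal / (1 + r) ** years
    return pv / shares

def implied_growth(target=14.0, lo=0.0, hi=1.0, tol=1e-6):
    """Bisect for the growth rate the market price implies."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if value_per_share(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

g = implied_growth()
print(f"implied annual FCF growth: {g:.1%}")   # roughly 50%, as in the text
```

With the article's inputs ($5 million starting FCF, 100 million shares, 12% discount rate, 3% terminal growth) the bisection lands on roughly 50% annual growth, matching the trial-and-error result quoted above.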
{"url":"http://finance.yahoo.com/news/evaluate-stock-price-reverse-engineering-150000130.html","timestamp":"2014-04-20T00:53:32Z","content_type":null,"content_length":"218425","record_id":"<urn:uuid:66be440e-cf3f-4542-b3e6-927c4067b3a5>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Ramona, CA Math Tutor
Find a Ramona, CA Math Tutor
-Experienced math tutor in all levels of math.-Bachelor degree from University of California, Irvine-Specialize in Algebra, Calculus, SAT I Math, and SAT II Math.-Worked at Palomar College Math Center for more than a year.-Worked in Upward Bound Program, which help college-bound high school students by providing after school tutoring.-Worked at Anna Little's Learning center for two
11 Subjects: including calculus, elementary (k-6th), Chinese, statistics
Having taught at Cuyamaca Community College for seven years, the biggest thing I learned was how to communicate with students so they understood, and thereby learned, what they were studying. My peer reviews were outstanding and my student reviews were also outstanding. The student's learning alwa...
17 Subjects: including algebra 1, prealgebra, reading, accounting
...I specialize in tutoring mathematics. I have tutored all math subjects from basic arithmetic past advanced calculus and differential equations since 2009. I love helping students find their own learning style, and giving them the tools to learn the subject on their own without me!
37 Subjects: including trigonometry, Microsoft Excel, elementary math, study skills
...I was diagnosed with ADHD in 8th grade and was fortunate enough to have supportive parents, teachers, and tutors who helped me to build up my confidence and reach my potential. I am passionate about helping other students realize their strengths and successfully navigate around their own cogniti...
23 Subjects: including algebra 2, algebra 1, chemistry, reading
...I am a college professor whose degrees are in biochemistry and chemistry. Throughout my career as a tutor I have taught both biology and chemistry. While my expertise is in chemistry my skills in teaching biology are not far behind.
9 Subjects: including algebra 1, algebra 2, prealgebra, chemistry
{"url":"http://www.purplemath.com/ramona_ca_math_tutors.php","timestamp":"2014-04-20T19:16:11Z","content_type":null,"content_length":"23791","record_id":"<urn:uuid:12ab7efe-e682-4d74-a3eb-dd719ee7278c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Venus Atmosphere Puzzle - Unmanned Spaceflight.com
First 2 questions. Are you now satisfied that you have found the reason why your integral doesn't go on to infinity, and are you now getting an answer for the atmospheric mass of Venus that is in line with what you expected intuitively? For me there had to be a common sense reason why the integral terminates, and it had to be independent of external factors. A finite atmosphere must be able to restrain its own urge to become infinite without being disciplined by the Van Allen police! So I did another thought experiment: This time I provided my gravitating globe with an extremely thin atmosphere. In fact I gave it just one single molecule of gas. The molecule hops about receiving thermal kicks from the surface at each bounce. I quickly realised that this atmosphere could not be isothermal, because the molecule slows down as it rises and accelerates again as it falls. At the very top of the highest bounces (which must be vertical) the molecule stops altogether, thus momentarily reaching zero Kelvin. The atmosphere therefore has zero volume - and zero mass - from here on out. So it is the fact that molecules are of finite size that causes the atmosphere to terminate. If the molecules were infinitely many and infinitesimally small we would never reach the point where the mean free paths become long and thermal collisions give way to ballistic trajectories. The atmosphere would indeed go on for ever whilst still having a finite total mass, a contradiction, therefore molecules are of finite size. I don't know if this is of any use but thanks anyway for making me think.
QUOTE (qraal @ Jun 5 2006, 05:45 PM)
That's exactly what I did and I still get the non-intuitive answer that a polytropic atmosphere with the same surface pressure masses less than an isothermal atmosphere.
Consider the following thought experiment:
1/ Take one large solid globe, in vacuo.
2/ Release in its vicinity a finite amount of gas.
3/ Wait for the gas to form an atmosphere around the globe.
4/ Arrange for the atmosphere to be at a uniform temperature throughout.
Question: At what point does the mass of this finite amount of gas jump to infinity?
QUOTE (ngunn @ Jun 8 2006, 12:46 AM)
Well you've done the detailed calculations, not me, but I'm surprised by your statement that an isothermal atmosphere would have an infinite mass per unit area. This would be true only for an infinitesimally small planet (!) assuming such an object could have an atmosphere. Are you starting your integrals from r=0 or h=0?
QUOTE (Phil Stooke @ Jun 7 2006, 06:50 AM)
There's some great Venus atmosphere stuff (and some on Mars) from a presentation here: Check out the zipped powerpoint presentation I-4.1 on the Venera missions.
QUOTE (ngunn @ Jun 6 2006, 11:37 PM)
In an atmosphere in which temperature decreases rapidly with height for the first few scale heights the upper parts will not be as high above the surface as they would be in the isothermal case. This means they will experience greater g and hence contribute more to the pressure at the surface. More pressure for the same mass = less mass for the same pressure. I like to imagine two simplistic pictures - One is the thin atmosphere approximation where all the atmospheric mass is assumed to be just above the surface.
Two is the isothermal case. Venus should be somewhere in between.
QUOTE (remcook @ Jun 6 2006, 02:15 AM)
This is the equation for hydrostatic equilibrium: dp = rho * g * dz
For an isothermal atmosphere, this gives an exponential density profile: rho = rho0 * exp(-z/H), where H is the scale height. Integrating this will give you what you want, I think. If you know temperature and pressure vs. height (from probes for instance), you can determine the density from the ideal gas law at different altitudes. Integrating that should be more accurate.
This might seem like a really dumb question, but what's the mass of the Cytherean atmosphere per unit area? At first pass I thought it was easy - same as for an isothermal atmosphere, Po/g, where Po is surface pressure and g is surface gravity. Simple. Except Venus doesn't come close to approximating an isothermal atmosphere. From a graph in Mark Bullock's PhD thesis (Hi Mark if you're visiting) I pulled the figures for Po and To as 92 bar and 735 K, while the left side of the temperature curve was 250 K at 0.1 bar and 63 km. At about 210 K the temperature drop with altitude stops, then slowly rises into the Cytherean stratosphere. Ok. My atmospheric physics is pretty limited - I 'modelled' that lapse-rate pressure curve as a power law: P/Po = (T/To)^n and likewise for density, d/do = (T/To)^n. Temperature, T, as a function of altitude, Z, I computed as T(Z) = To*(1 - Z/(n.Zo)), with Zo = (k.To/m.g), where k is Boltzmann's constant and m is the molecular mass of the atmosphere. These equations I then integrated between 210 K and 0.033 bar, 70 km, and 735 K and 92 bar, zero altitude. The resulting equation is m = (n/(n+1))*(do.Zo)*(1 - (T/To)^(n+1)) - a bit of simple algebra and the Gas equation shows that do.Zo = Po/g. Thus the mass is lower than for a simple isothermal atmosphere by roughly (n/(n+1)). In this case n = 6.33, higher than the dry adiabat for CO2 which gives n = 4.45. Now an adiabatic or polytropic atmosphere is an idealisation, but it seems odd to me that whenever Venus' atmospheric mass is discussed people always use the higher isothermal value. Have I missed something important in the physics, or is Venus's atmospheric mass just 86.4% of the usually quoted value?
Hi Nigel. Ok. Let's try again. Everything you've said about the discreteness of the atmosphere is true and as I've already noted it's what really obtains - eventually the gases stop running into each other and get separated out by gravity. I know that, though your eloquent restatement was better than my clumsy exposition. What I was discussing was an idealisation, the hydrostatic model of an isothermal atmosphere that's in a gravity field dropping off via the inverse square law. Assume the gas is infinitely divisible and you get an infinite mass at infinity - infinity BTW is the usual limit when integrating the equation (rho) = (rho)o*e^-(Z/Zo), which as you can see only achieves a zero value at Z/Zo = infinity. That's just the nature of the maths and I know its limitations. When molecules only collide every few metres or so things change drastically from the case of micrometre path lengths. My quandary concerns the case of the adiabatic atmosphere - a whole different ballgame, to coin a phrase. An adiabatic atmosphere in its idealisation has a finite altitude at which temperature, pressure and density fall to zero. That's because of the temperature-altitude relation - i.e.
T/To = [(n.Zo - Z)/(n.Zo)]
so T/To = 0 at Z = n.Zo, and because (rho) = (rho)o*(T/To)^n it means it drops to zero too. So the mass integral has finite limits rather than the unphysical "infinity". Integrated, the equation gives: m = (n/(n+1))*(do.Zo), and do.Zo = Po/g. So as I've already noted the mass is lower than the isothermal case for the same surface pressure. Why is this so? And is it correct? The maths says "yes" and it means that the atmospheric mass on Venus is less than the usually quoted figures. So 88.8 bar of CO2 is actually 76.8 'bar' in weight. And 865 tons/sq.m in mass, not the usual 1,000. Ok. So that's the state of play. Why is it so? My first thought is that because expansion cooling lowers the pressure as you climb in height the hydrostatic balance can be maintained without relying on just the weight of the gas column itself. The temperature gradient acts like an extra potential. Am I even close?
QUOTE (ngunn @ Jun 10 2006, 10:39 PM)
Hi qraal. I think that the messenger's messageless message is trying to say we have gone round in a circle . . . however I still need to understand things in my slow, hand-waving way. ...
Fine. I agree the adiabatic model is probably a better approximation for the real Venus atmosphere than the isothermal (or quasi-isothermal with a ballistic 'cap'). I would therefore prefer your lower estimate of the mass to one calculated on the quasi-isothermal model. The only thing I don't quite understand is this quote from your post #3:- 'I still get the non-intuitive answer that a polytropic (adiabatic?) atmosphere with the same surface pressure masses less than an isothermal atmosphere.' To me this result is exactly what one WOULD expect intuitively. The atmosphere is cooler at the top, therefore closer to the surface and experiencing higher g, therefore less mass is required to account for the observed surface pressure.
My extended ramble into the isothermal model was just because I was intrigued by the infinite integral and its implications, especially the fact that the particulate nature of gases can be 'deduced' in this way, which I had not realised before.
QUOTE (ngunn @ Jun 12 2006, 05:23 AM)
Fine. I agree the adiabatic model is probably a better approximation for the real Venus atmosphere than the isothermal (or quasi-isothermal with a ballistic 'cap'). ...
Confusing, but I find myself agreeing with qraal. The fact that the temperature is decreasing with altitude means the upper atmosphere is more dense at that altitude than it would be under isothermal conditions, and each inversion in temperature would lead to more compacting of the molecules relative to isothermal conditions. At the altitude after which there are no further inversions, the expansion is the same as under the isothermal condition, except that there is a denser layer under the inversion than in the isothermal case. The only exception would be if the last inversion is very close to the surface, and the temperature increase after this inversion is much much greater than the gradient between the surface and the inversion.
It is worth noting that in the current model of the Titan atmosphere, they use many inversion layers in order to support the density distribution found by Huygens - (Titan's atmosphere is very thick in general, relative to the earth's, but also has a more exaggerated vertical scale due to the lower mass of the moon.) We will know more about Titan after the limb and bistatic radar measurements are evaluated... The Cassini altimeter data should help, too.
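As a numerical footnote to the thread — my own sketch, not posted by anyone above — the closed-form column mass can be cross-checked by brute-force integration. The CO2 and gravity constants are standard values; n = 6.33 is the polytropic fit quoted by qraal, and the exact printed numbers depend on the value of g used.

```python
# Cross-check of the closed-form result above (hypothetical script,
# not from the thread). It compares the isothermal column mass Po/g
# with the adiabatic/polytropic column obtained by integrating
# rho(z) = rho0 * (1 - z/(n*Z0))**n up to z = n*Z0.

P0 = 92e5          # surface pressure, Pa (92 bar)
T0 = 735.0         # surface temperature, K
g = 8.87           # Venus surface gravity, m/s^2
M = 0.044          # molar mass of CO2, kg/mol
R = 8.314          # gas constant, J/(mol K)
n = 6.33           # polytropic index fitted in the thread

rho0 = P0 * M / (R * T0)      # surface density from the ideal gas law
Z0 = P0 / (rho0 * g)          # scale height, so rho0 * Z0 = P0/g

m_iso = P0 / g                          # isothermal column mass, kg/m^2
m_adia = (n / (n + 1)) * rho0 * Z0      # closed form from the thread

# brute-force midpoint integration as an independent check
N = 100_000
dz = n * Z0 / N
m_num = sum(rho0 * (1 - (i + 0.5) * dz / (n * Z0)) ** n * dz for i in range(N))

print(f"isothermal: {m_iso / 1000:.0f} t/m^2")
print(f"adiabatic : {m_adia / 1000:.0f} t/m^2 (ratio n/(n+1) = {n / (n + 1):.3f})")
print(f"numeric   : {m_num / 1000:.0f} t/m^2")
```

The ratio n/(n+1) = 0.864 reproduces the "86.4% of the usually quoted value" figure in the thread, and the midpoint sum agrees with the closed form.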
{"url":"http://www.unmannedspaceflight.com/index.php?showtopic=2822&st=0&p=57213","timestamp":"2014-04-19T14:33:01Z","content_type":null,"content_length":"138275","record_id":"<urn:uuid:b8e81dee-c206-491e-8a13-9b0bb8a2450a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability in Algebra.
k is uniformly chosen from the interval (-5,5). Let p be the probability that the quartic f(x) = kx^4 + (k^2+1)x^2 + k has 4 distinct real roots such that one of the roots is less than -4 and the other three are greater than -1. Find the value of 1000p.
I posted this thread 3 days ago too, but nobody took the initiative to completely solve it. I need help, as I'm no pro in probability.
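No one in the thread posted a solution; as a sanity check rather than an answer, here is a hypothetical Monte-Carlo estimate of p (my own sketch). It draws k uniformly from (-5, 5), finds the roots numerically, and tests the stated condition.

```python
# Monte-Carlo sanity check for the probability asked above (my own
# hypothetical sketch, not a full analytic solution).

import numpy as np

rng = np.random.default_rng(0)

def satisfies(k, tol=1e-9):
    if k == 0.0:
        return False
    # roots of k*x^4 + (k^2 + 1)*x^2 + k
    roots = np.roots([k, 0.0, k * k + 1.0, 0.0, k])
    real = sorted(r.real for r in roots if abs(r.imag) < tol)
    if len(real) != 4 or len(set(np.round(real, 6))) != 4:
        return False   # need 4 distinct real roots
    return real[0] < -4 and all(r > -1 for r in real[1:])

trials = 200_000
hits = sum(satisfies(k) for k in rng.uniform(-5, 5, trials))
print(f"1000p is roughly {1000 * hits / trials:.1f}")
```

Since the quartic is biquadratic in x^2, the hit region turns out to be a small interval of negative k, and the estimate settles near 6, which at least bounds what a full analytic answer should look like.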
{"url":"http://mathhelpforum.com/algebra/215275-probability-algebra.html","timestamp":"2014-04-18T14:06:19Z","content_type":null,"content_length":"38946","record_id":"<urn:uuid:60decc68-6962-4b96-bd99-d99ddf5d2392>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
This is the SHA-3 page of David A. Wilson.
For information about the NIST SHA-3 competition, go here. For an unofficial list of entries and analyses, go here.
I submitted a hash function called DCH. It is a block-cipher-based algorithm that alternates a nonlinear substitution, a Fourier-like linear transform, and a round key addition. You can find the submission package here. If you have any questions, comments, or analyses of DCH, please contact me at dwilson at alum dot mit dot edu.
Note: There is an error in the reference implementation (see third bullet point below).
• Christian Rechberger has kindly pointed out that while the dithering scheme used renders DCH invulnerable to the second-preimage attacks of Kelsey and Schneier, it does not protect against the variant of those attacks by Andreeva et al. published earlier this year. Since DCH uses a 512-bit chaining value for all digest lengths, this appears to be a valid attack for DCH-512 (requiring slightly more than 2^{450} computations), although for shorter digest lengths brute force is still faster.
□ There are several defenses against such an attack, some of which would involve changing the algorithm (e.g. to include a block number instead of just a small dither value, following the HAIFA approach). One option within the bounds of the submitted entry, however, is to increase the block size, which is explicitly listed as a tunable parameter. This will result in a larger chaining value; a block size of 576 bits should be large enough to make the attack of Andreeva et al. worse than brute force against 512-bit DCH.
• Dmitry Khovratovich and Ivica Nikolic of the University of Luxembourg have pointed out that DCH contains an incorrect implementation of the Miyaguchi-Preneel iteration method, and thus is susceptible to collision and preimage attack via Wagner's generalized birthday algorithm. Their writeup is available here.
□ This is correct; DCH as submitted is broken. This attack capitalizes on an error in implementation of the iteration method; it does not attack the compression function itself. Thus, correcting the error in the iteration method would defend against this attack, although for the purposes of the SHA-3 competition it appears that DCH is out.
• Mario Lamberger and Florian Mendel of IAIK, Graz University of Technology, have pointed out that the above results in relatively trivial collision and preimage attacks, since once a dither input is repeated, if the same message block is used in both positions then the resulting contributions to the end hash value will cancel out.
Other Algorithms
My analyses of other SHA-3 entries:
• Abacus: A second-preimage and collision attack. Writeup is here (PDF). Sample code for generating random second-preimage candidates is here.
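For readers unfamiliar with the construction at issue: Miyaguchi-Preneel turns a block cipher E into a compression function via H_i = E_{H_{i-1}}(m_i) XOR m_i XOR H_{i-1}. The page above does not spell out exactly how DCH's implementation deviated, so the sketch below is only my own illustration of the standard mode, with a stand-in "cipher" and an assumed 64-byte block size — it is not DCH's code.

```python
# Standard Miyaguchi-Preneel mode, shown with a toy stand-in "cipher"
# purely for illustration; NOT DCH's code, and the page above does not
# say exactly how DCH's implementation deviated from this.

import hashlib

BLOCK = 64  # bytes per message block (an assumption for the toy example)

def toy_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher E_key(block); not cryptographically
    # meaningful, it just has the right shape (64 bytes in, 64 bytes out).
    return hashlib.sha512(key + block).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def miyaguchi_preneel(message: bytes, iv: bytes = bytes(BLOCK)) -> bytes:
    # Simplistic zero-padding to a multiple of BLOCK (real designs use
    # length-strengthened padding).
    if len(message) % BLOCK:
        message += bytes(BLOCK - len(message) % BLOCK)
    h = iv
    for i in range(0, len(message), BLOCK):
        m = message[i:i + BLOCK]
        # H_i = E_{H_{i-1}}(m_i) XOR m_i XOR H_{i-1}
        # The two XOR feed-forwards are the point: drop or misplace them
        # and block contributions can cancel, enabling Wagner-style
        # generalized-birthday attacks of the kind cited above.
        h = xor(xor(toy_cipher(h, m), m), h)
    return h

print(miyaguchi_preneel(b"hello world").hex()[:32])
```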
{"url":"http://web.mit.edu/dwilson/www/hash/","timestamp":"2014-04-17T01:12:58Z","content_type":null,"content_length":"3755","record_id":"<urn:uuid:09be402d-50f3-4932-8ca9-cb8ace2db002>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof: rank(AB)+n >= rank(A)+rank(B)
$A\in R^{s\times n}~,~B\in R^{n\times s}$
Proof: $rank(AB)+n \geqslant rank(A)+rank(B)$
this is a nice problem! i think by that $R$ you mean the field of real numbers $\mathbb{R}.$ well, you really don't need your ground field to be $\mathbb{R}.$ so i'll assume that $F$ is any field and $A$ and $B$ are $s \times n$ and $n \times t$ matrices respectively, with entries in $F.$ let $T_1,T_2$ be the linear transformations corresponding to $A$ and $B$ respectively, i.e. $T_1: F^n \to F^s$ and $T_2: F^t \to F^n$ are defined by $T_1(x)=Ax$ for all $x \in F^n$ and $T_2(x)=Bx$ for all $x \in F^t.$
claim. $nul(T_1T_2) \leq nul(T_1) + nul(T_2),$ where $nul$ means "dimension of the kernel".
proof. define $f: \ker (T_1T_2) \to \ker T_1$ by $f(x)=T_2(x)$ for all $x \in \ker(T_1T_2).$ note that $f$ is well-defined because if $x \in \ker(T_1T_2)$, then $T_1T_2(x)=0$ and thus $f(x)= T_2(x) \in \ker T_1.$ it's obvious that $f$ is linear and $\ker f = \ker T_2.$ therefore, by the rank-nullity theorem, we have $rank(f)+nul(f)=\dim \ker(T_1T_2)=nul(T_1T_2).$ but $rank(f)=\dim im(f) \leq \dim \ker T_1=nul(T_1)$ and $nul(f)=\dim \ker(f)=\dim \ker T_2 =nul(T_2). \Box$
now solving your problem is easy: applying the above claim and the rank-nullity theorem we have: $rank(T_1T_2)+n= t-nul(T_1T_2)+n \geq n-nul(T_1)+ t-nul(T_2)=rank(T_1) + rank(T_2).$
Last edited by NonCommAlg; September 9th 2010 at 03:52 PM.
{"url":"http://mathhelpforum.com/advanced-algebra/155656-proof-rank-ab-n-rank-rank-b.html","timestamp":"2014-04-20T14:35:19Z","content_type":null,"content_length":"50605","record_id":"<urn:uuid:82fd6dfa-0d74-4ebd-897a-e1da0fec8632>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Copper Canyon, TX Precalculus Tutor Find a Copper Canyon, TX Precalculus Tutor I have had a career in astronomy which included Hubble Space Telescope operations, where I became an expert in Excel and SQL, and teaching college-level astronomy and physics. This also involved teaching and using geometry, algebra, trigonometry, and calculus. Recently I have developed considerable skill in chemistry tutoring. 15 Subjects: including precalculus, chemistry, physics, calculus ...But he enjoys what he does, especially having the satisfaction of helping others achieve success in academics. His method is geared towards understanding of the material, rather than just rote completion of homework. This is, as the story goes - and so many people like this reference - like "te... 37 Subjects: including precalculus, Spanish, reading, chemistry ...I taught courses at Richland College and Collin County Community College. My specialties are Physics I and Physics II, both algebra and calculus based. I also have experience with laboratory experiments and writing lab reports. 8 Subjects: including precalculus, calculus, physics, geometry ...As a Secondary Math Teacher, an essential part of my job was preparing my students for the TAKS exam. We spent a significant amount of time each week with drills and testing strategies in order to help prepare them for the test. We also conducted practice testing sessions through the year to identify areas in which each student needed additional instruction. 82 Subjects: including precalculus, English, chemistry, calculus ...I started teaching 13 years ago and the TAKS test has been around almost as long. I assisted in conducting Saturday TAKS tutorial as well as after school tutorials during my first 7 years. During my next three years of teaching I initiated and conducted Saturday TAKS tutorials at another district. 10 Subjects: including precalculus, geometry, algebra 1, algebra 2 Related Copper Canyon, TX Tutors Copper Canyon, TX Accounting Tutors Copper Canyon, TX ACT Tutors Copper Canyon, TX Algebra Tutors Copper Canyon, TX Algebra 2 Tutors Copper Canyon, TX Calculus Tutors Copper Canyon, TX Geometry Tutors Copper Canyon, TX Math Tutors Copper Canyon, TX Prealgebra Tutors Copper Canyon, TX Precalculus Tutors Copper Canyon, TX SAT Tutors Copper Canyon, TX SAT Math Tutors Copper Canyon, TX Science Tutors Copper Canyon, TX Statistics Tutors Copper Canyon, TX Trigonometry Tutors Nearby Cities With precalculus Tutor Addison, TX precalculus Tutors Argyle, TX precalculus Tutors Bartonville, TX precalculus Tutors Corinth, TX precalculus Tutors Double Oak, TX precalculus Tutors Flower Mound precalculus Tutors Hickory Creek, TX precalculus Tutors Highland Village, TX precalculus Tutors Lake Dallas precalculus Tutors Lakewood Village, TX precalculus Tutors Lewisville, TX precalculus Tutors Little Elm precalculus Tutors Northlake, TX precalculus Tutors Oak Point, TX precalculus Tutors Shady Shores, TX precalculus Tutors
{"url":"http://www.purplemath.com/Copper_Canyon_TX_Precalculus_tutors.php","timestamp":"2014-04-21T00:11:36Z","content_type":null,"content_length":"24571","record_id":"<urn:uuid:ee05543b-33e1-49af-aaf7-2fdef700d26a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
CFR Cluj-Napoca v Internazionale Dr. Constantin Radulescu Stadium Referee: Pawel Gil | Attendance: 6000 * Local time based on your geographic location. • • Fredy Guarin 22' • • Fredy Guarin 45' +2' • • Marco Benassi 88' • Ioan Hora lobs a good right footed shot, but it is off target. Outcome: hit bar • Samir Handanovic takes a long goal kick • Ioan Hora hits a good header, but it is off target. Outcome: miss left • Daniel Ferreira Mendonca Ivo Pinto crosses the ball. Outcome: shot • Javier Zanetti takes a direct freekick with his right foot from his own half. Outcome: pass • Robert Maah commits a foul on Ibrahima Mbaye resulting on a free kick for Internazionale • Samir Handanovic takes a long goal kick • Marco Benassi hits a good right footed shot. Outcome: goal • That last goal was assisted by Antonio Cassano (Pass from Left Channel) • Throw-in: Jorge Moreno Diogo Valente takes it (Attacking) • Samir Handanovic takes a long goal kick • Felice Piccolo drills a good right footed shot, but it is off target. Outcome: miss right • Throw-in: Felice Piccolo takes it (Attacking) • Ricardo Cadu takes a direct freekick with his right foot from his own half. Outcome: pass • Offside called on Antonio Cassano • Álvaro Pereira takes a direct freekick with his left foot from the left wing. Outcome: pass • Ricardo Cadu is awarded a yellow card. Reason: dissent • Ricardo Cadu commits a foul on Antonio Cassano resulting on a free kick for Internazionale • Samir Handanovic makes a very good save (Catch) • Pantelis Kapetanos hits a good right footed shot. Outcome: save • Throw-in: Álvaro Pereira takes it (Defending) • Samir Handanovic takes a long goal kick • Nicolas Godemeche drills a good right footed shot, but it is off target. Outcome: miss right • Esteban Cambiasso clears the ball from danger. • Daniel Ferreira Mendonca Ivo Pinto crosses the ball. Outcome: clearance • Throw-in: Javier Zanetti takes it (Attacking) • Antonio Cassano hits a good right footed shot. Outcome: hit wall • Antonio Cassano takes a direct freekick with his right foot from the right channel. Outcome: shot • Mario Camora has been shown the red card and will no longer take part of this game! Reason: professional foul • Mario Camora commits a foul on Marco Benassi resulting on a free kick for Internazionale • Álvaro Pereira takes a direct freekick with his left foot from the left wing. Outcome: pass • Pantelis Kapetanos commits a foul on Antonio Cassano resulting on a free kick for Internazionale • Throw-in: Javier Zanetti takes it (Defending) • Juan clears the ball from danger. • Mario Camora crosses the ball. Outcome: clearance • Mario Camora takes a direct freekick with his left foot from the left wing. Outcome: cross • Marco Benassi commits a foul on Jorge Moreno Diogo Valente resulting on a free kick for CFR Cluj-Napoca • Samir Handanovic makes an outstanding save (Parry) • Mario Camora drills a good left footed shot. Outcome: save • Throw-in: Álvaro Pereira takes it (Defending) • Mateo Kovacic takes a direct freekick with his right foot from his own half. Outcome: pass • Nicolas Godemeche commits a foul on Mateo Kovacic resulting on a free kick for Internazionale • Throw-in: Javier Zanetti takes it (Defending) • Throw-in: Daniel Ferreira Mendonca Ivo Pinto takes it (Attacking) • Samir Handanovic takes a short goal kick • Throw-in: Daniel Ferreira Mendonca Ivo Pinto takes it (Attacking) • Álvaro Pereira blocks the shot • Ioan Hora hits a good right footed shot. Outcome: blocked • Pantelis Kapetanos hits a good left footed shot. 
Outcome: blocked • CFR Cluj-Napoca makes a sub: Jorge Moreno Diogo Valente enters for Ramalho Couto Rui Pedro. Reason: Injury • Jorge Quintas Mario Felgueiras takes a direct freekick with his right foot from his own half. Outcome: open play • Offside called on Antonio Cassano • Ramalho Couto Rui Pedro takes the corner kick from the right byline with his right foot and hits an outswinger to the centre, resulting in: open play • Samir Handanovic makes an outstanding save (Block) • Ioan Hora drills a good right footed shot. Outcome: save • Laszlo Sepsi drills a good left footed shot. Outcome: hit wall • Laszlo Sepsi takes a direct freekick with his right foot from the right channel. Outcome: shot • Juan commits a foul on Pantelis Kapetanos resulting on a free kick for CFR Cluj-Napoca • Throw-in: Javier Zanetti takes it (Defending) • Samir Handanovic takes a long goal kick • Internazionale makes a sub: Ibrahima Mbaye enters for Fredy Guarin. Reason: Tactical • Laszlo Sepsi drills a good left footed shot, but it is off target. Outcome: miss right • Throw-in: Mario Camora takes it (Attacking) • Juan takes a direct freekick with his left foot from his own half. Outcome: pass • Offside called on Pantelis Kapetanos • Fredy Guarin takes a direct freekick with his right foot from the right channel. Outcome: pass • Nicolas Godemeche commits a foul on Fredy Guarin resulting on a free kick for Internazionale • Samir Handanovic takes a short goal kick • Nicolas Godemeche drills a good right footed shot, but it is off target. Outcome: miss right • Throw-in: Daniel Ferreira Mendonca Ivo Pinto takes it (Defending) • Throw-in: Álvaro Pereira takes it (Attacking) • Throw-in: Javier Zanetti takes it (Defending) • Samir Handanovic takes a short goal kick • Throw-in: Daniel Ferreira Mendonca Ivo Pinto takes it (Attacking) • Ioan Hora takes the corner kick from the left byline with his right foot and hits an inswinger to the centre, resulting in: open play • Ramalho Couto Rui Pedro hits a good right footed shot, but it is off target. Outcome: over bar • Throw-in: Álvaro Pereira takes it (Defending) • Jorge Quintas Mario Felgueiras takes a direct freekick with his right foot from his own half. Outcome: pass • Offside called on Marco Benassi • Samir Handanovic takes a short goal kick • Throw-in: Daniel Ferreira Mendonca Ivo Pinto takes it (Attacking) • Internazionale makes a sub: Simone Pasa enters for Ricardo Alvarez. Reason: Tactical • Ioan Hora takes a direct freekick with his right foot from his own half. Outcome: pass • Mateo Kovacic commits a foul on Ioan Hora resulting on a free kick for CFR Cluj-Napoca • Samir Handanovic takes a short goal kick • Ricardo Cadu takes a direct freekick with his right foot from his own half. Outcome: pass • Fredy Guarin commits a foul on Felice Piccolo resulting on a free kick for CFR Cluj-Napoca • Juan takes a direct freekick with his left foot from his own half. Outcome: pass • Pantelis Kapetanos commits a foul on Mateo Kovacic resulting on a free kick for Internazionale • Throw-in: Álvaro Pereira takes it (Defending) • Throw-in: Daniel Ferreira Mendonca Ivo Pinto takes it (Attacking) • Throw-in: Javier Zanetti takes it (Defending) • Internazionale makes a sub: Marco Benassi enters for Rodrigo Palacio. Reason: Injury • That last goal was assisted by Ricardo Alvarez (Pass from Centre Penalty Area) • Fredy Guarin hits a good right footed shot. 
Outcome: goal • Samir Handanovic takes a long goal kick • Throw-in: Álvaro Pereira takes it (Attacking) • Throw-in: Álvaro Pereira takes it (Attacking) • Mario Camora takes a direct freekick with his left foot from his own half. Outcome: pass • Fredy Guarin commits a foul on Laszlo Sepsi resulting on a free kick for CFR Cluj-Napoca • Samir Handanovic takes a long goal kick • Laszlo Sepsi hits a good left footed shot, but it is off target. Outcome: miss left • Throw-in: Ricardo Cadu takes it (Attacking) • Samir Handanovic makes a very good save (Block) • Ioan Hora hits a good right footed shot. Outcome: save • Throw-in: Javier Zanetti takes it (Attacking) • Throw-in: Javier Zanetti takes it (Attacking) • Throw-in: Daniel Ferreira Mendonca Ivo Pinto takes it (Attacking) • Throw-in: Daniel Ferreira Mendonca Ivo Pinto takes it (Attacking) • Throw-in: Daniel Ferreira Mendonca Ivo Pinto takes it (Attacking) • Throw-in: Mario Camora takes it (Defending) • Throw-in: Mario Camora takes it (Defending) • Throw-in: Mario Camora takes it (Attacking) • Samir Handanovic takes a direct freekick with his right foot from his own half. Outcome: open play • Pantelis Kapetanos is awarded a yellow card. Reason: unsporting behaviour • Pantelis Kapetanos commits a foul on Juan resulting on a free kick for Internazionale • Throw-in: Javier Zanetti takes it (Defending) • Ricardo Alvarez takes a direct freekick with his left foot from his own half. Outcome: pass • CFR Cluj-Napoca makes a sub: Pantelis Kapetanos enters for Ionut Rada. Reason: Tactical • Mario Camora commits a foul on Fredy Guarin resulting on a free kick for Internazionale • Samir Handanovic takes a long goal kick • Jorge Quintas Mario Felgueiras makes a very good save (Parry) • Rodrigo Palacio hits a good left footed shot. Outcome: save • Ricardo Cadu takes a direct freekick with his right foot from his own half. Outcome: pass • Offside called on Álvaro Pereira • Antonio Cassano takes a direct freekick with his right foot from the right channel. Outcome: pass • Felice Piccolo commits a foul on Antonio Cassano resulting on a free kick for Internazionale • Throw-in: Javier Zanetti takes it (Defending) • Throw-in: Ricardo Cadu takes it (Attacking) • Throw-in: Javier Zanetti takes it (Attacking) • Samir Handanovic takes a long goal kick • Juan takes a direct freekick with his left foot from his own half. Outcome: open play • Daniel Ferreira Mendonca Ivo Pinto commits a foul on Álvaro Pereira resulting on a free kick for Internazionale • Mario Camora takes a direct freekick with his left foot from the left wing. Outcome: pass • Javier Zanetti commits a foul on Mario Camora resulting on a free kick for CFR Cluj-Napoca • Samir Handanovic takes a long goal kick • Jorge Quintas Mario Felgueiras takes a short goal kick • Throw-in: Álvaro Pereira takes it (Defending) • Fredy Guarin hits a good right footed shot. Outcome: goal • That last goal was assisted by Rodrigo Palacio (Pass from Centre Penalty Area) • Andrea Ranocchia clears the ball from danger. • Mario Camora crosses the ball. Outcome: clearance • Mario Camora takes a direct freekick with his left foot from the left channel. Outcome: cross • Mateo Kovacic commits a foul on Ioan Hora resulting on a free kick for CFR Cluj-Napoca • Antonio Cassano takes a direct freekick with his right foot from his own half. Outcome: pass • Ricardo Cadu commits a foul on Antonio Cassano resulting on a free kick for Internazionale • Nicolas Godemeche takes a direct freekick with his right foot from the right wing. 
Outcome: pass • Esteban Cambiasso commits a foul on Ramalho Couto Rui Pedro resulting on a free kick for CFR Cluj-Napoca • Throw-in: Ricardo Cadu takes it (Defending) • Antonio Cassano takes the corner kick from the left byline with his right foot and passes it to a teammate resulting in: open play • Mario Camora crosses the ball. Outcome: open play • Mario Camora takes a direct freekick with his left foot from the right wing. Outcome: cross • Fredy Guarin commits a foul on Robert Maah resulting on a free kick for CFR Cluj-Napoca • Samir Handanovic takes a long goal kick • Ramalho Couto Rui Pedro crosses the ball. Outcome: out of play • Ramalho Couto Rui Pedro takes a direct freekick with his right foot from the left channel. Outcome: cross • Mateo Kovacic is awarded a yellow card. Reason: unsporting behaviour • Handball called on Mateo Kovacic • Throw-in: Fredy Guarin takes it (Attacking) • Daniel Ferreira Mendonca Ivo Pinto clears the ball from danger. • Esteban Cambiasso crosses the ball. Outcome: clearance • Esteban Cambiasso takes a direct freekick with his left foot from the left channel. Outcome: cross • Nicolas Godemeche commits a foul on Antonio Cassano resulting on a free kick for Internazionale • Throw-in: Javier Zanetti takes it (Defending) • Jorge Quintas Mario Felgueiras takes a short goal kick • Fredy Guarin takes a direct freekick with his right foot from the right wing. Outcome: pass • Ionut Rada commits a foul on Fredy Guarin resulting on a free kick for Internazionale • Samir Handanovic takes a long goal kick • Robert Maah curls a good right footed shot, but it is off target. Outcome: miss right • Jorge Quintas Mario Felgueiras takes a long goal kick • Antonio Cassano drills a good right footed shot, but it is off target. Outcome: over bar • CFR Cluj-Napoca makes a sub: Ioan Hora enters for Gabriel Muresan. Reason: Injury • Throw-in: Ionut Rada takes it (Defending) • Álvaro Pereira takes a direct freekick with his left foot from his own half. Outcome: pass • Laszlo Sepsi commits a foul on Álvaro Pereira resulting on a free kick for Internazionale • Throw-in: Álvaro Pereira takes it (Defending) • Samir Handanovic takes a long goal kick • Ramalho Couto Rui Pedro hits a good right footed shot, but it is off target. Outcome: miss left • Throw-in: Javier Zanetti takes it (Attacking) • Throw-in: Javier Zanetti takes it (Attacking) • Samir Handanovic takes a long goal kick • Throw-in: Ricardo Cadu takes it (Defending) • Throw-in: Javier Zanetti takes it (Attacking) • Throw-in: Ionut Rada takes it (Defending) • Samir Handanovic takes a long goal kick • Throw-in: Mario Camora takes it (Attacking) • Throw-in: Javier Zanetti takes it (Attacking) • Shots (on goal) • tackles • Fouls • possession • CFR Cluj-Napoca • - Match Stats • CFR Cluj-Napoca • Internazionale 17(7) Shots (on goal) 3(2) 15 Fouls 9 2 Corner kicks 1 1 Offsides 4 50% Time of Possession 50% 2 Yellow Cards 1 1 Red Cards 0 1 Saves 4
{"url":"http://espnfc.com/en/gamecast/358649/gamecast.html?soccernet=true&cc=","timestamp":"2014-04-24T00:24:49Z","content_type":null,"content_length":"152639","record_id":"<urn:uuid:44119dbf-5a35-4657-9f79-be09f72027c7>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram's Mathematica August 3rd 2006, 07:27 PM Wolfram's Mathematica I'm soliciting thoughts on this program. This is where I'm coming from (for now): I am self-teaching myself math. I've always been good at math (according to my grades), but never went beyond Precalc. I am now reviewing my precalc book, page by page, exercise by exercise, skipping nothing. By the end of the year, I should be starting Calculus. If I have to take a course to get the benefit of an instructor, I will, but I don't think I'll need to. As I go along, and being aware of calculators (like the TI-89, which I have), and math programs (like Mathematica), it seems that justification for *solving* a problem by hand boils down to "I should know how the solution is arrived at, that way the answer won't just seem like a magic trick". I can deal with that, but at some point in time I'm pretty sure that I will simply no longer want to work out the answers. Recognizing problems and understanding the relationships of the variables and quantities such that I can *define* the problem is what will give me satisfaction. And I'll be more than happy to let a program figure out the solutions. Any thoughts on Mathematica? August 4th 2006, 11:43 AM I have never used Mathematica, but understand it is outstanding. I have Maple 10 on my personal computer. It is wonderful. August 5th 2006, 08:06 PM After doing some internet research, it seems that Mathematica and Maple 10 are comparable in strength, with Mathcad being a sort of intermediate, falling between programs like Scientific Notebook and the high end programs. There's a reseller on Ebay selling download versions of Mathematica at a deep discount, so I decided to purchase it. I am probably at least a year away before doing anything that would require more power than what a TI-89 provides, but I couldn't pass up a commerical license at $599 (which normally goes for $1880). Maple 10 is even more expensive than Mathematica (by about a hundred dollars). Are you a hobbyist, or do you use it for work? August 5th 2006, 10:10 PM Originally Posted by spiritualfields After doing some internet research, it seems that Mathematica and Maple 10 are comparable in strength, with Mathcad being a sort of intermediate, falling between programs like Scientific Notebook and the high end programs. There's a reseller on Ebay selling download versions of Mathematica at a deep discount, so I decided to purchase it. I am probably at least a year away before doing anything that would require more power than what a TI-89 provides, but I couldn't pass up a commerical license at $599 (which normally goes for $1880). Maple 10 is even more expensive than Mathematica (by about a hundred dollars). Are you a hobbyist, or do you use it for work? You should also investigate the other alternatives. The main other commercial product is MuPad, which recognises that there are other user out there other than commercial and educational. They appear to have a individual private licence at ca. 400 euro. The main free product is Maxima which is a mature product with a history going back 30 years to the Macsyma project at MIT. This is well worth a look. Another free product is xcas which is also worth a look. August 6th 2006, 02:47 AM Originally Posted by spiritualfields After doing some internet research, it seems that Mathematica and Maple 10 are comparable in strength, with Mathcad being a sort of intermediate, falling between programs like Scientific Notebook and the high end programs. 
There's a reseller on Ebay selling download versions of Mathematica at a deep discount, so I decided to purchase it. I am probably at least a year away before doing anything that would require more power than what a TI-89 provides, but I couldn't pass up a commerical license at $599 (which normally goes for $1880). Maple 10 is even more expensive than Mathematica (by about a hundred dollars). Are you a hobbyist, or do you use it for work? With Maple, more of a hobbyist. I do not believe you need to lay out such a large amount for Maple. The student version runs around $130(at least, that's what it was a few years ago when I purchased it), which should be more than adequate for your needs. One of my former math professors told me that the high-priced 'teachers version' isn't worth the extra money. Also, if you want a nice calculator, think about a TI's Voyage 200. It's the granddaddy of calculators. As you can see, there are many options for you to consider. August 7th 2006, 12:54 PM Mathematica SCAM ALERT Since I originally brought this thread up, and since I mentioned that there is a reseller on Ebay selling commercial licenses of Mathematica at a deep discount, I thought that I better follow up with this warning. I got took. The seller (tinam01) is a scam artist. Unfortunately, I fell for this and chalked up $599.00. The license number I got was one already owned by a guy from Queensland, and the download page I was sent to was only a download FAQ page. I have spent about an hour on the phone with Wolfram this morning, and have forwarded to them all correspondence I had with the seller. Their legal department is now on it. And I have contacted Ebay about this also. Do not fall for this scam! August 7th 2006, 01:32 PM I feel sorry for ya spiritualfields. Those kind of people just frustrate me :mad: August 7th 2006, 01:54 PM It's a shame. There are a lot of scumbags out there. I purchased a brand-new Maple 9.5 from a reputable dealer on eBay about a year and a half ago. It was only about $130. Of the many dealings I have transacted on eBay I had a run-in with a puke only once. Fortunately, eBay refunded my listing fees and banned the offender from doing any more business. August 7th 2006, 07:36 PM I got a copy of Mathematica through my university math department a while ago for a great subsidized academic price (somewhere around US$100~150). I love working with it. For you, it will definitely satisfy all of your precalc and calc needs. August 7th 2006, 09:04 PM The problem is that I'm not a student, so if I want one of the high end programs I'll have to shell out some bucks. I'll just stick with my TI-89 for now and keep my eyes open for legitimate good deals. I think that crunch time will be when I start doing matrices. I suspect that the TI might not do that so conveniently or prettily as a good program with editing features would.
{"url":"http://mathhelpforum.com/math-software/4666-wolframs-mathematica-print.html","timestamp":"2014-04-19T06:54:35Z","content_type":null,"content_length":"12022","record_id":"<urn:uuid:82b04385-2e1c-4c04-89a1-642abfc69760>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
Crossover random fields: A practical framework for learning and inference with graphical models and applications to computer vision and image processing problems Graphical Models, such as Markov random fields, are a powerful methodology for modeling probability distributions over large numbers of variables. These models, in principle, offer a natural approach to learning and inference of many computer vision problems, such as stereo, denoising, segmentation, and image labeling. However, graphical models face severe computational problems when dealing with images, due to the fact that the uncertainty structure is a "grid", and not a one-dimensional tree or chain. In this talk, I will discuss a practical and efficient framework for joint learning and inference in situations where a normal graphical model would be intractable. This framework is based on two basic ideas: 1. Iteratively using a series of tractable models. 2. New loss functions measuring only univariate accuracy. That is, the problem is attacked through a sequence of models, each of which is tractable. The motivating example is an image-- the first model is defined over scanlines, while the next model is defined over columns, "crossing over" the first model. The results of each model can be computed efficiently by dynamic programming, and are used by the next layer. During learning, the parameters of the entire "stack" of models are simultaneously fit to give maximally accurate univariate marginal distributions. This talk will include experimental results on several problems, including automatic labeling of outdoor scenes. Speaker: Justin Domke Google Tech Talks September 9, 2008 Justin Domke is pursing a Ph.D. at the University of Maryland. Before coming to Maryland he received B.S. degrees in Physics and Computer Science from Washington University is St. Louis. His research interest is efficient learning and inference with graphical models and applications to computer vision and image processing problems.
{"url":"http://www.bestechvideos.com/2008/09/16/crossover-random-fields-a-practical-framework-for-learning-and-inference-with-graphical-models-and-applications-to-computer-vision-and-image-processing-problems","timestamp":"2014-04-18T23:15:27Z","content_type":null,"content_length":"25835","record_id":"<urn:uuid:fedabe7c-2c46-4357-8b41-3fc601b008ae>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Mechanics – A car May 3rd 2006, 09:19 AM #1 Senior Member Apr 2006 Mechanics – A car Any help with this would be most appreciated. Thanks in advance The EMF (Effective Motive Force) for a car is the engine driving force minus friction and air resistance. a)A car of mass 1000Kg travels from rest with constant acceleration to 25 m/s in 30 seconds. Use the impulse-momentum relationship to find the EMF? b)Now suppose that instead the EMF is made by the driver to increase uniformly during the 30 seconds (by gradually increasing foot pressure on the accelerator). Use the impulse-momentum relationship to find the final speed reached? c)Compare a) and b)? Any help with this would be most appreciated. Thanks in advance The EMF (Effective Motive Force) for a car is the engine driving force minus friction and air resistance. a)A car of mass 1000Kg travels from rest with constant acceleration to 25 m/s in 30 seconds. Use the impulse-momentum relationship to find the EMF? b)Now suppose that instead the EMF is made by the driver to increase uniformly during the 30 seconds (by gradually increasing foot pressure on the accelerator). Use the impulse-momentum relationship to find the final speed reached? c)Compare a) and b)? The impulse-momentum theorem is: $\overline{\sum F} \Delta t = \Delta p$ where $\overline{\sum F}$ is the average net force applied to the object. a) The acceleration is constant, so the net force applied is constant. I will call the net force F for simplicity. $F \Delta t = \Delta p = m \Delta v$ $F = m \frac{ \Delta v}{ \Delta t} = 1000 \frac{25}{30} \, N$ Thus F = 833 N. b) The acceleration is now a linear function of time over the first 30 seconds of motion. Thus the net force is also a linear function of time. Using the Calculus version of the impulse-momentum $\int_{t_0}^t F \,dt = \int_{v_0}^v m \, dv$ The problem I am having here is that the acceleration is a linear function of time, and we don't have the constant: $a(t) = ct$, giving $F(t)=ma(t)=mct$. In terms of c: $\int_0^{30}mct \, dt = \int_0^v m \, dv$ $\frac{1}{2}mct^2|_0^{30} = mv|_0^v$ So $v = 450c \, \, m/s$. Last edited by topsquark; May 3rd 2006 at 12:10 PM. May 3rd 2006, 12:05 PM #2
{"url":"http://mathhelpforum.com/calculus/2802-mechanics-car.html","timestamp":"2014-04-18T06:51:49Z","content_type":null,"content_length":"36901","record_id":"<urn:uuid:87534b50-bd72-40f4-b8fd-7daaef1f5817>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Accurate Solid Modeling Using Polyhedral Approximations

Joshua U. Turner, "Accurate Solid Modeling Using Polyhedral Approximations," IEEE Computer Graphics and Applications, vol. 8, no. 3, pp. 14-28, May/June 1988, doi:10.1109/38.510.

Abstract: Although curved-surface solid modeling systems achieve a higher level of accuracy than faceted systems, they also introduce a host of topological, geometric, and numerical complications. A method for calculating accurate boundary representations of solid models is introduced that reduces the impact of these complications. The method uses a pair of bounding polyhedral approximations to enclose the boundary of each object. A structural analysis automatically determines where to make adaptive refinements to the polyhedrons to assure the topological validity of the results. Potential singularities are localized. The implementation is an experimental extension to the Geometric Design Processor (GDP) solid modeling system.
{"url":"http://www.computer.org/csdl/mags/cg/1988/03/mcg1988030014-abs.html","timestamp":"2014-04-24T01:36:21Z","content_type":null,"content_length":"54230","record_id":"<urn:uuid:0df80f2d-077d-45f4-ba92-2c7feee8d085>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
Integration: Numerical Methods

Question: I worked out the answer to this question, but I got confused because the answers to all of the other ones I had worked out lay between 0 and 1 (e.g. 0.845), yet here I got a different value. [The integral itself is not reproduced in the original text.] Can this answer be right?

Reply: Close ... I get 1.149430964.
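Since the integral from this thread is missing, the value 1.149430964 can't be re-derived here, but recomputing with a composite rule is the usual sanity check for a numerical-methods answer. A minimal Python sketch (the simpson helper and the x^2 integrand are purely illustrative, not the thread's actual problem):

def simpson(f, a, b, n=100):
    # Composite Simpson's rule on [a, b]; n is forced to be even.
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

print(simpson(lambda x: x**2, 0.0, 1.0))   # 0.3333..., exact for quadratics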
{"url":"http://mathhelpforum.com/calculus/71148-integration-numerical-methods.html","timestamp":"2014-04-19T08:38:28Z","content_type":null,"content_length":"32495","record_id":"<urn:uuid:99a6e078-c14e-4519-a0b5-b6c11bde78aa>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Marina Bay, MA Math Tutor

Find a Marina Bay, MA Math Tutor

...This varied background has given me insight into how best to approach the subject of Geometry. I have taught Prealgebra for more than 25 years. I have also worked as a tutor of this subject for the same time period.
6 Subjects: including algebra 1, algebra 2, geometry, prealgebra

...As a professor at a top-rated university, I have mentored undergraduate as well as graduate students. I have edited countless papers and have sat on numerous admissions committees. Thus, I know what is likely to get the attention of top schools and the quality of writing they expect.
19 Subjects: including SPSS, reading, Spanish, writing

...I helped many students in Trigonometry in my high school and college years. I always loved studying mathematics and helping others make significant progress in it. Many of my skills come from my participation in several mathematics Olympiads.
14 Subjects: including algebra 1, algebra 2, calculus, geometry

As a computer science and math double major from the College of William and Mary, a native Chinese speaker, and someone with years of tutoring experience, I am very confident that you will like my personality, my teaching methods, and most importantly, the results. I would love to tutor anything on...
15 Subjects: including geometry, algebra 1, algebra 2, calculus

I have a Master's degree in Mechanical Engineering and a Bachelor's in Materials Science Engineering. I can cover any engineering topic, as well as Math, Physics, and Chemistry at all levels. During my education I gained more than 4 years of teaching-assistant experience, both in college and in grad school.
23 Subjects: including calculus, chemistry, Microsoft Excel, geometry
{"url":"http://www.purplemath.com/Marina_Bay_MA_Math_tutors.php","timestamp":"2014-04-18T13:32:54Z","content_type":null,"content_length":"23925","record_id":"<urn:uuid:ade23fbb-f86f-42b1-b304-dfd432372f93>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by tea
Total # Posts: 28

math
Well, Steve, sorry to steal your thunder, but it would actually be 4.8 cm if it was 48 mm high, because you would have to convert it into centimeters; the answer to the question requires that it be in centimeters. You're welcome, Valina, I had kinda got lost for a moment too....

Hi! Does anyone have a very difficult exercise on future tenses (will, going to, and present continuous)? Almost impossible to solve? It can be some kind of a text, or a typical gap exercise, but it has to be for B2 or C1. I have checked almost the entire Internet but everything is...

western civilization
Niccolo Machiavelli, author of "The Prince", is considered to be the first modern political theorist. Why is Machiavelli considered the first modern political observer? Discuss his ideas on the relationship between the Prince and God; the importance of honesty in ...

western civilization
Discuss the reasons for Martin Luther's break with the Catholic Church and his radical views on the role of the Papacy, the road to salvation, and the special nature of the priesthood.

Need to unscramble these letters and make a sentence: hinhakinyteftiditelt

Even though it is almost summer, we still have overcast skies and light rain.

7. A recent study investigated the effects of a "Buckle Up Your Toddlers" campaign to get parents to use the grocery cart seat belts. When gathering baseline data prior to the campaign, investigators observed 86 out of 640 parents buckling up their toddlers. Assume ...

A random sample of the houses in a particular city is selected and the level of radon gas is measured for each house in the sample. The values collected are given below in parts per million (ppm). Experience has shown that radon gas level is approximately normally distributed ...

One method for estimating the availability of office space in large cities is to conduct a random sample of offices, and calculate the proportion of offices currently being used. Suppose that real estate agents believe that of all offices are currently occupied, and decide to ...

According to internal testing done by the Get-A-Grip tire company, the mean lifetime of tires sold on new cars is 23,000 miles, with a standard deviation of 2500 miles. a) If the claim by Get-A-Grip is true, what is the mean of the sampling distribution of for samples of size ...

In a study of termite populations in Swaziland, the following data were gathered on the percentage of termites that were classified as "major workers" at seven randomly selected sites: 37.6 20.0 28.8 29.9 51.9 13.6 34.5 29.0 a) Use the given data to produce a point est...

x^3 - x^2 - 14x + 24

Factor the polynomial x^3 + 7x^2 + 15x + 9. I tried factoring by grouping. I got stuck at this part: x^2(x + 7)...

If 6x - 3y = 30 and 4x = 2 - y, then find x + y.

basic math
Identify the rate, base, and amount in the following applications. Do not solve the applications at this point. 22% of Shirley's monthly salary is deducted for withholding. If those deductions total $209, what is her salary?

basic math
Question 1: x/32 = 7/8, and question 2: y/12 = 5/0.6. Please help.

basic math
Determine if the given rates are equivalent: 9 in/57 miles = 6 in/38 miles.

A man pushing a mop across a floor causes it to undergo two displacements. The first has a magnitude of 152 cm and makes an angle of 125° with the positive x axis. The resultant displacement has a magnitude of 150 cm and is directed at an angle of 35.0° to the positive...
Calculate the volume (L) of the solute C3H8O3 and the mass (g) of the solvent H2O that should be added to produce 6050 g of a solution that is 1.17 m C3H8O3.

chemistry (can anyone help me with this)
Which of the following is NOT true? A. Except for absorbance, most measurements contain significant figures and units. B. All numbers should be recorded from a digital readout. C. The uncertain digit is estimated between the last two markings. D. If an average of 4 trials is ...

Select all cases in which the units comprising the solid are best classified as ions: P4, Mg(ClO3)2, Ge, Pb(NO3)2, Ag, C2H4O2.

The Rydberg equation, $1/\lambda = R/n_i^2 - R/n_f^2$, can be treated as a line equation. What is the value of n_f as a function of the slope (m) and y-intercept (b)? A. mb^2 B. (m/b)^(1/2) C. m/b D. (m/b)^(1/2) E. (mb)^(1/2) F. None of these are correct.

i got this question already, thanks

The hydrogen emission spectrum has four series (or sets) of lines named Balmer, Brackett, Paschen, and Lyman. Indicate the energy (infrared, ultraviolet, or visible), the n_f value for each series, and all possible n_i values up to 7.

social studies
I need help trying to write a limerick on George Washington or Thomas Jefferson.

sociolinguistics - one more question
I forgot to ask one more thing: what is a regional dialect? I can't find anything about that. A regional dialect is the use of certain words and accents peculiar to a given region. Y'all, for instance, marks a southern U.S. speaker. The pronunciation of some words such a...

Does anyone know what a superposed language is? To superpose is to place something OVER something else. In this case, if you are talking about a language, you would be taking a new language and making it the language of common usage. Example, if you are an immigrant into the U...

king arthur
I need help....!! I need to explicate King Arthur's court... can anyone help me.. I am not allowed to use the internet... I have to read the description of Arthur's court from Sir Gawain and the Green Knight.. and write about the court... so can anyone help me??? http://www.hti....
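The Rydberg line-equation question above resolves in one step. Here is a sketch, assuming $1/\lambda$ is plotted against $1/n_i^2$; the axis assignment is my assumption, and the opposite choice simply swaps the roles of $n_i$ and $n_f$ and the sign of the slope:

$\frac{1}{\lambda} = \frac{R}{n_i^2} - \frac{R}{n_f^2} = m\,x + b, \qquad x = \frac{1}{n_i^2}, \quad m = R, \quad b = -\frac{R}{n_f^2}$

Solving the intercept for $n_f$ gives $n_f = \sqrt{R/(-b)} = \left(-m/b\right)^{1/2}$, i.e. $(m/b)^{1/2}$ up to the sign convention, matching the options listed in the post.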
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=tea","timestamp":"2014-04-18T13:45:57Z","content_type":null,"content_length":"13125","record_id":"<urn:uuid:f8fe2303-ad7a-442a-ac84-b49dbdc5d680>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
Algorithms for bounding Folkman numbers

Abstract: For an undirected, simple graph G, we write G -> (a_1,...,a_k)^v (respectively, G -> (a_1,...,a_k)^e) if for every vertex (edge) k-coloring, a monochromatic K_{a_i} is forced in some color i in {1,...,k}. The vertex (edge) Folkman numbers are defined as

F_v(a_1,...,a_k;p) = min{ |V(G)| : G -> (a_1,...,a_k)^v, K_p not a subgraph of G }
F_e(a_1,...,a_k;p) = min{ |V(G)| : G -> (a_1,...,a_k)^e, K_p not a subgraph of G }

for p > max{a_1,...,a_k}. Folkman showed in 1970 that these numbers always exist for valid values of p.

This thesis concerns the computation of a new result in Folkman number theory, namely that F_v(2,2,3;4) = 14. Previously, the bounds stood at 10 <= F_v(2,2,3;4) <= 14, proven by Nenov in 2000. To achieve this new result, specialized algorithms were executed on the computers of the Computer Science network in a distributed processing effort. We discuss the mathematics and algorithms used in the computation. We also discuss ongoing research into the computation of the value of F_e(3,3;4). The current bounds stand at 16 <= F_e(3,3;4) <= 3 * 10^9. This number was once the subject of an Erdős prize---claimed by Spencer in 1988.
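The thesis's computations relied on specialized distributed algorithms, but the vertex-arrowing property at their core is easy to state directly as code. Below is a brute-force sketch of my own in Python, not the thesis's method, with hypothetical function names; it is exponential (k^|V|), so it is only usable on very small graphs.

from itertools import combinations, product

def has_clique(adj, verts, k):
    # True if the vertex subset `verts` contains a clique of size k in `adj`.
    if k <= 1:
        return len(verts) >= k
    return any(all(v in adj[u] for u, v in combinations(c, 2))
               for c in combinations(verts, k))

def arrows_v(adj, sizes):
    # Brute-force test of G -> (a_1,...,a_k)^v: every k-coloring of the
    # vertices must leave a K_{a_i} inside some color class i.
    vertices = sorted(adj)
    k = len(sizes)
    for coloring in product(range(k), repeat=len(vertices)):
        classes = [[v for v, c in zip(vertices, coloring) if c == i]
                   for i in range(k)]
        if not any(has_clique(adj, classes[i], sizes[i]) for i in range(k)):
            return False   # found a coloring that avoids all forced cliques
    return True

C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
C4 = {i: {(i - 1) % 4, (i + 1) % 4} for i in range(4)}
print(arrows_v(C5, (2, 2)))   # True: an odd cycle is not 2-colorable
print(arrows_v(C4, (2, 2)))   # False: alternate colors around the even cycle

Bounding F_v then amounts to scanning K_p-free graphs in increasing order of |V(G)| for the smallest one that arrows, which is exactly where the combinatorial explosion, and the need for the specialized pruning and distributed search the thesis describes, comes in.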
{"url":"https://ritdml.rit.edu/handle/1850/2765","timestamp":"2014-04-18T01:15:26Z","content_type":null,"content_length":"13710","record_id":"<urn:uuid:41a6bf71-65ea-4bd7-9ba5-e654df78374f>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
Students Teaching Students – The Peer Math Experiment Questioning ourselves can often be the first step on the road to redemption – and student success. A few years ago, students in my class were given a math problem to complete. The question involved solving an equation with one variable (e.g. 2x + 5 = 11). In the classes leading up to this activity, we had learned about variables, how to isolate them and how to solve for the unknown. Or at least, I thought the class had learned about these things. What I found as I walked around the room to see how students were doing is that about half the class did not, in fact, know how to solve the problem. My first reaction was frustration since I knew that I had taken the time to teach the lessons needed to attain this skill. How could they not understand how to do this? We have been over this for days now? Were they not listening? Was I not reaching them? What am I missing here? I put my own practice under the lens for a moment and realized that the problem could lie in many places. The most important thing at this moment was to teach a skill (i.e. solving math equations) to the students – in one way or another. I asked the class to hold up the whiteboards that they were working on, and I took note of the students who had the correct process and response. I asked those students to move to the left side of the room. The students who did not demonstrate the skill were paired up with students who were successful. It was the job of the students who already knew how to solve the equation to teach the other student. I wrote a new set of equations on the board and asked them to complete 3 together. Now in an intermediate classroom, you might find that the student who already knows how to solve the questions will just go ahead and do it – without teaching their peer. To remedy this, I announced that the marks for the “peer teachers” depended on the success of their teaching of their assigned “peer student.” There would be a reflection following the ‘peer teaching’ lesson and the results received by the “peer helper” would be the same as “the helped”. The shift in mindset was complete. Collaborative motivation and commitment skyrocketed. In fact, some of the pairings (that would not have otherwise cooperated) found a way to make it work and be successful. I found a resource in the classroom that exponentially improved my teaching success. Why be the “talking head,” when you can use the many heads around you to help educate? Think of a time when you have been taught something by a peer. What made it so successful for you? In what ways could peer teaching allow us to reach our students in unconventional ways?
{"url":"http://weinspirefutures.com/idea-bank/students-teaching-students-the-peer-math-experiment/","timestamp":"2014-04-17T07:39:41Z","content_type":null,"content_length":"35821","record_id":"<urn:uuid:0ac0d605-ee5b-4a92-b3e1-6c7fc961f47e>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00334-ip-10-147-4-33.ec2.internal.warc.gz"}