Graph Theory - Size of a Line Graph July 25th 2010, 10:14 AM Hi, all! The line graph $L(G)$ of a graph G is a graph in which every edge in $E(G)$ is represented by a vertex. Two vertices in $L(G)$ are adjacent if and only if the corresponding edges in G share a vertex. Now suppose graph G has $n$ vertices, labeled $v_1, v_2, \dots, v_n$, and the degree of each vertex is $deg(v_i) = r_i$. Find the size of $L(G)$. I have attempted to solve it, but I'm stuck. The order of $L(G)$ is $\frac{1}{2} \sum^{n}_{i=1} r_i$. Let's call it m. A vertex $v_j \in V(L(G))$ is an edge from some vertex $u_{i_j} \in V(G)$ to another $u_{k_j} \in V(G)$; its degree is therefore $deg(v_j) = (deg(u_{i_j}) - 1) + (deg(u_{k_j}) - 1) = r_{i_j} + r_{k_j} - 2$. Size of $L(G) = \frac{1}{2} \sum^{m}_{j=1}(r_{i_j}+r_{k_j}) - m$. While I'm quite sure this is correct, it doesn't seem to be very useful - or indeed the expected answer. Any help? July 25th 2010, 11:15 PM Okay, I think I got it now. If you write out the terms, you see that each $r_i$ is repeated exactly $r_i$ times. We can then write the size of $L(G)$ as $|E(L(G))| = \frac{1}{2}\sum^{n}_{i=1}{r_i^2} - m$.
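The final formula can be sanity-checked computationally. Here is a small Python sketch (the helper names and example graphs are my own, just for illustration): it counts the edges of $L(G)$ directly from the adjacency definition and compares with $\frac{1}{2}\sum r_i^2 - m$.

```python
from itertools import combinations

def line_graph_size(edges):
    """Count edges of L(G) directly: two edges of G are adjacent
    in L(G) iff they share an endpoint."""
    return sum(1 for e, f in combinations(edges, 2) if set(e) & set(f))

def line_graph_size_formula(edges):
    """|E(L(G))| = (1/2) * sum(r_i^2) - m, where r_i are the vertex
    degrees and m = |E(G)| is the order of L(G)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    m = len(edges)
    return sum(r * r for r in deg.values()) // 2 - m

# K4: 6 edges, every vertex of degree 3, so (1/2)*4*9 - 6 = 12.
k4 = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
# A 3-vertex path: degrees 1, 2, 1, so (1/2)*6 - 2 = 1.
path = [(1, 2), (2, 3)]
```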
Ashland, MA SAT Math Tutor

Find an Ashland, MA SAT Math Tutor

I have 11 years' experience as a tutor. I have also written questions for tests such as the SAT, GMAT, and GRE. I really enjoy working with students and helping them to understand the material.
8 Subjects: including SAT math, geometry, algebra 1, GRE

...Additionally, I have published articles available online. I was an SAT instructor for Princeton Review and Kaplan. I was also a Summit private tutor for SAT, both Math and English.
67 Subjects: including SAT math, English, calculus, reading

...My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students understand the core concepts and building blocks necessary to succeed not only in their current class but in the future as well. I am a second year graduate student at MIT, and bilingual in French and English.
16 Subjects: including SAT math, French, calculus, algebra 1

...Helping students find success where they have only struggled in the past is what drives me. Please don't hesitate to get in touch if you would like to talk further! One aspect of my job as a special educator, and one that I find very important, is that I teach study skills. I work with students on...
29 Subjects: including SAT math, reading, English, writing

...In other words, I love to teach! I love getting to know my students, and helping them succeed. I believe that any person can find the joy in learning, so that school becomes a passion and not just a chore.
16 Subjects: including SAT math, reading, writing, algebra 1
The Library

Geodesics and flows in a Poissonian city

Kendall, Wilfrid S. (2009) Geodesics and flows in a Poissonian city. Working Paper. Coventry: University of Warwick, Centre for Research in Statistical Methodology. (Working papers).

WRAP_Kendall_09-43w.pdf - Published Version - Download (843Kb)

The stationary isotropic Poisson line network was used to derive upper bounds on mean excess network-geodesic length in Aldous and Kendall (2008). This new paper presents a study of the geometry and fluctuations of near-geodesics in such a network. The notion of a "Poissonian city" is introduced, in which connections between pairs of nodes are made using simple "no-overshoot" paths based on the Poisson line process. Asymptotics for geometric features and random variation in length are computed for such near-geodesic paths; it is shown that they traverse the network with an order of efficiency comparable to that of true network geodesics. Mean characteristics and limiting behaviour at the centre are computed for a natural network flow. Comparisons are drawn with similar network flows in a city based on a comparable rectilinear grid. A concluding section discusses several open problems.

Item Type: Working or Discussion Paper (Working Paper)
Subjects: Q Science > QA Mathematics
Divisions: Faculty of Science > Statistics
Library of Congress Subject Headings (LCSH): Geodesics (Mathematics), Poisson processes
Series Name: Working papers
Publisher: University of Warwick. Centre for Research in Statistical Methodology
Place of Publication: Coventry
Date: 2009
Volume: Vol. 2009
Number: No. 43
Number of Pages: 35
Status: Not Peer Reviewed
Access rights: Open Access

References:

Afimeimounga, H., W. Solomon, and I. Ziedins (2005). The Downs-Thomson paradox: existence, uniqueness and stability of user equilibria. Queueing Syst. 49(3-4), 321–334.
Aldous, D. J. and W. S. Kendall (2008, March). Short-length routes in low-cost networks via Poisson line patterns. 
Advances in Applied Probability 40(1), 1–21. Alsmeyer, G., A. Iksanov, and U. Roesler (2009). On Distributional Properties of Perpetuities. Journal of Theoretical Probability 22, 666–682. Ambartzumian, R. (1990). Factorization Calculus and Geometric Probability. Cambridge: Cambridge University Press. Baccelli, F., K. Tchoumatchenko, and S. Zuyev (2000). Markov paths on the Poisson- Delaunay graph with applications to routing in mobile networks. Advances in Applied Probability 32(1), 1–18. Baricz, Á. (2008). Mills’ ratio: monotonicity patterns and functional inequalities. J. Math. Anal. Appl. 340(2), 1362–1370. Bertoin, J. and M. Yor (2001). On subordinators, self-similar Markov processes and some factorizations of the exponential variable. Electronic Communications in Probability 6, 95–106 (electronic). Bertoin, J. and M. Yor (2002). On the entire moments of self-similar Markov processes and exponential functionals of Lévy processes. Annales de la Faculté des Sciences de Toulouse Mathématiques (Série 6) 11(1), 33–45. Bertoin, J. and M. Yor (2005). Exponential functionals of Lévy processes. Probability Surveys 2, 191–212 (electronic). Beskos, A. and G. O. Roberts (2005, November). Exact Simulation of Diffusions. The Annals of Applied Probability 15(4), 2422–2444. Birnbaum, Z. W. (1942). An inequality for Mill’s ratio. Annals of Mathematical Statistics 13, 245–246. Böröczky, K. J. and R. Schneider (2008). The mean width of circumscribed random polytopes. Submitted manuscript. Calvert, B., W. Solomon, and I. Ziedins (1997). Braess’s paradox in a queueing network with state-dependent routing. Journal of Applied Probability 34(1), 134–154. Davidson, R. (1974). Line-processes, roads, and fibres. In E. F. Harding and D. G. Kendall (Eds.), Stochastic geometry (a tribute to the memory of Rollo Davidson), pp. 248–251. London: Wiley. Dufresne, D. (1990). The distribution of a perpetuity, with applications to risk theory and pension funding. Scand. Actuar. J. 1-2 (1-2), 39–79. 
Goldie, C. M. and R. Grübel (1996). Perpetuities with thin tails. Advances in Applied Probability 28(2), 463–480.
Hitczenko, P. and J. Wesolowski (2010). Perpetuities with thin tails, revisited. The Annals of Applied Probability, to appear.
Kellerer, H. (1992). Ergodic behaviour of affine recursions III: positive recurrence and null recurrence. Technical report, Math. Inst. Univ. München, Theresienstrasse 39, 8000 München, Germany.
Kendall, W. S. (1997). On some weighted Boolean models. In D. Jeulin (Ed.), Advances in Theory and Applications of Random Sets, Singapore, pp. 105–120. World Scientific.
Kendall, W. S. (2008). Networks and Poisson line patterns: fluctuation asymptotics. Oberwolfach Reports 5(4), 2670–2672.
Littlewood, J. E. (1969). On the probability in the tail of a binomial distribution. Advances in Applied Probability 1, 43–72.
McKay, B. D. (1989). On Littlewood's estimate for the binomial distribution. Advances in Applied Probability 21(2), 475–478.
Miles, R. E. (1964). Random polygons determined by random lines in a plane. Proc. Nat. Acad. Sci. U.S.A. 52, 901–907.
Narasimhan, G. and M. Smid (2007). Geometric spanner networks. Cambridge: Cambridge University Press.
Prömel, H. J. and A. Steger (2002). The Steiner tree problem. A tour through graphs, algorithms, and complexity. Advanced Lectures in Mathematics. Braunschweig: Friedr. Vieweg & Sohn.
Rebolledo, R. (1980). Central limit theorems for local martingales. Z. Wahrsch. Verw. Gebiete 51(3), 269–286.
Rényi, A. and R. Sulanke (1968). Zufällige konvexe Polygone in einem Ringgebiet. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 9, 146–157.
Sampford, M. R. (1953). Some inequalities on Mill's ratio and related functions. Annals of Mathematical Statistics 24, 130–132.
Santaló, L. A. (1976). Integral geometry and geometric probability. Addison-Wesley Publishing Co., Reading, Mass.-London-Amsterdam. 
With a foreword by Mark Kac. Encyclopedia of Mathematics and its Applications, Vol. 1.
Steele, J. M. (1997). Probability theory and combinatorial optimization, Volume 69 of CBMS-NSF Regional Conference Series in Applied Mathematics. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM).
Stoyan, D., W. S. Kendall, and J. Mecke (1995). Stochastic geometry and its applications (Second ed.). Chichester: John Wiley & Sons. (First edition in 1987 joint with Akademie Verlag, Berlin).
Vervaat, W. (1979). On a stochastic difference equation and a representation of nonnegative infinitely divisible random variables. Advances in Applied Probability 11(4), 750–783.
Voss, F., C. Gloaguen, and V. Schmidt (2009). Scaling limits for shortest path lengths along the edges of stationary tessellations. Preprint, Dept. Math., University of Ulm.
Wardrop, J. G. (1952). Some theoretical aspects of road traffic research. Proceedings, Institute of Civil Engineers, Part II 1, 325–378.
Whitt, W. (2007). Proofs of the martingale FCLT. Probability Surveys 4, 268–302.
Yor, M. (1992). On some exponential functionals of Brownian motion. Adv. in Appl. Probab. 24(3), 509–531.
Yukich, J. E. (1998). Probability theory of classical Euclidean optimization problems, Volume 1675 of Lecture Notes in Mathematics. Berlin: Springer-Verlag.

URI: http://wrap.warwick.ac.uk/id/eprint/35230

Available Versions of this Item
• Geodesics and flows in a Poissonian city. (deposited 04 Jul 2011 14:31)
Imaging Spectroscopy Tutorial

Imaging Spectrometers

Imaging spectrometers acquire a data cube consisting of two spatial dimensions and one spectral dimension. There are a variety of ways to do this, depending on whether spectra are dispersed in 0, 1, or 2 dimensions. The availability of focal plane detector arrays enables highly efficient acquisition of spectrally and spatially resolved data. Examples of imaging spectrometers are the dispersive spectrometer (DS), the tunable filter (TF), and the imaging Fourier transform spectrometer (IFTS).

An example of a DS is a long-slit grating spectrometer, which disperses light in 1 spatial dimension and provides imaging data in the orthogonal direction. By scanning a succession of 1-d slices of the scene, a 2-d image may be reconstructed. Other variants of the DS are the integral field unit (IFU) and the multi-object spectrometer (MOS), which use different strategies to project the slit on the sky. The IFU slices the slit into short sections and stacks the slices side by side to provide a contiguous view of the sky. The MOS uses either fibers or a focal plane mask of slitlets.

A TF could be a filter wheel or a Fabry-Perot etalon. An imaging TF acquires a full 2-d image each frame, with successive frames viewed through successive filters. In this case the light is not dispersed; only a narrow bandpass is transmitted to the detector array.

An IFTS acquires a full 2-d image per frame, with successive frames associated with different positions of a moving mirror in an interferometer. The raw data cube for an IFTS consists of the interferogram in 1-d (time) and the 2-d scene in the 2 orthogonal directions of the detector. Fourier transformation of the interferograms for each pixel produces the spectral dimension of the data cube.

The number of spectral channels practical with an imaging DS is limited by the number of pixels of the focal plane image in one dimension; in practice the number of spectral channels is at most a few thousand. Imaging TF systems are limited in practice to a few tens to hundreds of discrete filter elements. 
Basic Principles of IFTS

Fourier transform spectrometers are based on the Michelson interferometer, with one fixed mirror and one moving mirror. The light transmitted through the interferometer is measured as a function of the displacement of the moving mirror from the zero phase difference (ZPD) position. The Fourier transform of this interferogram yields the spectrum. An imaging spectrometer is obtained by viewing the scene through the interferometer with a camera, and constructing the Fourier transform of the variations in light intensity at each pixel as a function of the position of the moving mirror.

Typical rays emerging from two representative points are drawn in the original figure (not reproduced here). One point is located on the optical axis; the second point is displaced by a distance y from the optical axis. The object plane is located at a distance equal to the focal length f of the collimating lens; the object plane may be the focal plane of a telescope. The camera lens, with focal length f', produces an image with magnification (or reduction) factor f/f'. The depth of field of the focusing system should accommodate twice the maximum travel distance of the moving mirror in the interferometer in order to produce a sharp focus in the image plane.

The beam splitter transmits a fraction T of the incident light and reflects a fraction R; these coefficients in general depend on polarization and on the angle of incidence. The intensity emerging from an object point y depends on the displacement x of the moving mirror with respect to its zero phase difference (ZPD) point, and on the off-axis angle of the corresponding image point y'. For a non-monochromatic point source, the observed light intensity I(x,y') in the focal plane is a function of both mirror position x and the distance from the optical axis y'. 
For a spectral distribution with intensity for wave numbers between k and k+dk given by S(k)dk, and a detection efficiency E, the observed light intensity in the focal plane is given by an integral of E S(k) times an interference term over all wave numbers (the displayed equations from the original page are not reproduced here). The light intensity at any given radius y' is thus simply related to the Fourier transform of the spectrum, after introducing a modified frequency and a modified spectrum to absorb the off-axis geometry. The original page also includes a simulation of the predicted fringe pattern.

By Fourier transforming I(x,y') as a function of x, the spectrum S'(k') can be recovered, and thus S(k) itself. Ordinarily, a range of values of y' is integrated, which has the effect of broadening and shifting a monochromatic line in the spectrum recovered from the Fourier transform of I(x). This is the origin of the Jacquinot limit on the resolution of an FTS. For a Jacquinot stop of radius r, the total width of the Jacquinot blurring can be computed to first order in r/f (the expression is not reproduced here). For points off the optical axis, this blurring varies linearly with distance from the optical axis, i.e. with the angular position of the point. Except for extremely high resolution instruments, this limitation is small compared to the limit on the resolution from the total travel distance of the moving mirror.

Reference: Bennett, C. L., Carter, M. R., Fields, D. J. & Hernandez, J. 1993, Proc. SPIE, 1937, 191
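The core idea, recovering the spectrum by Fourier transforming the interferogram, can be illustrated numerically. The following sketch (NumPy, with illustrative parameter values of my own choosing) builds the idealized on-axis interferogram of a monochromatic source and recovers its wave number from the FFT peak; Jacquinot blurring and off-axis effects are ignored.

```python
import numpy as np

# Idealized on-axis interferogram of a monochromatic source with
# wave number k0: I(x) = 1 + cos(2*pi*k0*x), sampled along the
# mirror travel.
N = 1024          # samples along the mirror travel
dx = 0.01         # mirror position step (arbitrary units)
k0 = 12.5         # wave number of the source (cycles per unit length)

x = np.arange(N) * dx
interferogram = 1.0 + np.cos(2 * np.pi * k0 * x)

# Fourier transforming the mean-subtracted interferogram yields the
# spectrum; the peak should land at k0 (to within one frequency bin).
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
k_axis = np.fft.rfftfreq(N, d=dx)
k_recovered = k_axis[np.argmax(spectrum)]
```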
Finding the Domain of a Function - Problem 1

Let's find the domain of a function. We've got f(x) equals the quantity 1 plus root x, times the quantity 4 minus root 9 minus x. For this function to be defined, we need both of these radicals to be defined. The first will be defined if x is greater than or equal to 0; the second will be defined if 9 minus x is greater than or equal to 0. And we need both of those to be true.

What does this mean? I can subtract 9 from both sides and get minus x is greater than or equal to minus 9. Then I can multiply by -1, and of course doing so reverses the direction of the inequality: I get x is less than or equal to 9. I'll also reverse the first one: 0 is less than or equal to x. Whenever you have two inequalities like this, they can be combined into a compound inequality: 0 is less than or equal to x is less than or equal to 9. And that means the domain of this function is the interval from 0 to 9.

Tags: functions, domain, inequalities, compound inequalities, interval notation
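The interval found above can be checked numerically. A small Python sketch (the function names are my own): `math.sqrt` raises an error outside the domain, and the compound inequality 0 ≤ x ≤ 9 is exactly the set of safe inputs.

```python
import math

def f(x):
    # f(x) = (1 + sqrt(x)) * (4 - sqrt(9 - x)); defined only when
    # both radicands are nonnegative.
    return (1 + math.sqrt(x)) * (4 - math.sqrt(9 - x))

def in_domain(x):
    # Both conditions together give the compound inequality 0 <= x <= 9.
    return x >= 0 and 9 - x >= 0

# Endpoints: f(0) = 1 * (4 - 3) = 1, and f(9) = (1 + 3) * (4 - 0) = 16.
```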
Draw me a Hyperbola

Dave Elwood (Ranch Hand, joined Dec 27, 2002, posts: 84):
I can draw an ellipse with Ellipse2D and a parabola with QuadCurve2D, but to make my conic section collection complete I'm trying to draw a hyperbola. I've googled it to death; the Java docs have an aversion to using geometric nomenclature. In QuadCurve2D there is not one word about "parabola"! Some applets bring up the curve but don't betray how they got it. Shall I write my own code? Just a class name would make my holidays.

Dave Elwood:
Oh what the heck. I've found that about six inches of hyperbola can be effectively created with an array of 50 Line2D objects. Really wasn't hard at all.

Paul Clapham (joined Oct 14, 2005):
Looking at the API docs for QuadCurve makes me think of having to fill in a 47-page contract just to get a train ticket. So how did you use that to draw a parabola?

Dave Elwood:
Doing the parabola was the easiest of the three. You need three points on the canvas, p1, p2 and ptcontrol, to depict it with QuadCurve2D. A line from p1 to ptcontrol and a line from p2 to ptcontrol are two opposing tangents on the parabola; together they make an isosceles triangle. And it turns out that if the focus is at (0,0) and p1 is at, just for fun, (-3, +4), then I put the p2 point at (-3, -4). This puts the p1 point at a distance of 5 from the focus (a [3, 4, 5] right triangle).

Now the real fun starts: make a line from the focus to p1 and then a line depicting the tangent, and let the angle between the two lines be "F". A line from p1 to ptcontrol also (!) makes the angle F with the x axis. This is because the distance from the focus to ptcontrol is also 5, the same as the distance from p1 to the focus, so there is another isosceles triangle. And best of all, this angle F is half of the angle formed with the x axis by the [3, 4, 5] triangle: arctan(4/3) = 2F. 
Further points along the parabola can be predicted, because the distance from the focus to the vertex (tip of the curve) is d = (5 - 3)/2, which is also the distance from the vertex to the directrix line. And the distance from the directrix to ptcontrol is 3 (from the 3, 4, 5 triangle). So if p1 is (-3, 4) and p2 is (-3, -4), then ptcontrol is (-3 + 2d + 2*3, 0) = (5, 0).

From this I have the formula for the parabola: x = -(y**2)/(4d) + d. It is symmetrical along the x axis and opens to the left. If you differentiate the formula for the parabola then you have dy/dx, and there is the slope of the tangent. It all hangs together. And thereby one can also determine whether a "tilted" parabola is being depicted, and determine the angle of tilt.

I have this in an app where the satellite is at point p1, the gravity source is at the focus, and the velocity vector is, of course, the tangent. I start with the distance to the gravity source known (here 5) and the angle between the satellite-to-gravity-source line and the velocity vector also known (here F). And the rest falls into place nicely.

With a hyperbola it's kinkier. With a hyperbola you always have twins. So which one is the curve? I'm almost there.

Darryl (joined May 03, 2008, posts: 4522):
Paul Clapham wrote: "Looking at the API docs for QuadCurve makes me think of having to fill in a 47-page contract just to get a train ticket. So how did you use that to draw a parabola?"
Dave Elwood wrote: "[Everything you always wanted to know about drawing a parabola but were afraid to ask]."
On a more serious note, Dave: luck, db
There are no new questions, but there may be new answers.

Dave Elwood (Ranch Hand, joined Dec 27, 2002, posts: 84):
Thanks Darryl. I'm a 61 year old English teacher in Germany. Have you ever met a teacher who wasn't ready to 'show you the way?' 
Darryl:
I'm a year younger than you, a retired Electrical engineer in India, and in the habit of posting hints that are as brief as possible.
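The numbers in the parabola explanation above can be verified directly. Here is a Python sketch of the poster's example (focus at the origin, d = 1, p1 = (-3, 4)); the variable names mirror the thread, not any Java API.

```python
# Parabola x = -(y**2)/(4*d) + d, opening left, focus at the origin.
d = 1.0

def parabola_x(y):
    return -(y * y) / (4 * d) + d

# p1 = (-3, 4) lies on the curve, at distance 5 from the focus.
p1 = (-3.0, 4.0)
dist_focus = (p1[0] ** 2 + p1[1] ** 2) ** 0.5

# Control point for QuadCurve2D with p2 = (-3, -4): (-3 + 2*d + 2*3, 0).
ptcontrol = (-3 + 2 * d + 2 * 3, 0.0)

# Tangent slope at p1 from dx/dy = -y/(2*d), i.e. dy/dx = -2*d/y.
slope_tangent = -2 * d / p1[1]

# The chord p1 -> ptcontrol should have the same slope (it is the
# tangent line described in the thread).
slope_chord = (ptcontrol[1] - p1[1]) / (ptcontrol[0] - p1[0])
```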
Misconceptions about rand()

© Copyright 2004-2008 by Paul Hsieh

I was recently alerted to a very sad state of affairs regarding the comp.lang.c FAQ (it has since been improved from when I first looked at it, no doubt in part thanks to me; however, it still misses the point and says inaccurate things). For those that care to know these things, I hope people know that rand() is just not very good, and alternatives should be used. Explaining this to a lazy programmer who just wants a reasonable solution is probably a bit much. For someone just looking for an answer that is "good enough", we need something quick and dirty, but reasonable in practice.

The Travesty

The point is that

    x = rand () % RANGE;    /* return a random number in [0,RANGE) */

is not good enough. There are three major problems with using the above:

1. rand() returns a value in [0, RAND_MAX]. So if RANGE does not divide evenly into RAND_MAX+1, the distribution is blatantly wrong. Specifically, the probability of choosing x in [(RAND_MAX % RANGE), RANGE) is less than that of choosing x in [0, (RAND_MAX % RANGE)).

2. The quality of rand() on most systems is not that good; in particular, the low bits can follow a short cyclical pattern, or there can be dependencies between the bits.

3. What happens if RAND_MAX+1 < RANGE?

For the typical programmer concerned with robustness, the primary concern will be fixing the first condition above. Programmers that don't actually care about this are essentially happy with "loaded dice" and can stick with the solution given above. The comp.lang.c FAQ attempts to solve the second but just ignores the first condition totally. They recommend the following:

    /* WARNING!! Don't use this, it's a bad partial fix of rand () */
    x = rand() / (RAND_MAX / RANGE + 1);    /* Bad advice from C.L.C. FAQ */

Or in other forms:

    /* Using floating point like this doesn't help in any way. 
    */
    #define ranrange(a, b) (int)((a) + rand()/(RAND_MAX + 1.0) * ((b) - (a) + 1))

Ok, the point is that this solution doesn't do anything about the bias that's created by trying (and failing) to evenly divide the original number of potential results amongst the desired number of potential results. The FAQ then tries to excuse itself by suggesting that this solution is only good if RANGE is much smaller than RAND_MAX, when the bias will be smaller. The problem is that the bias will be easily detected (the point where you are 99% sure that it's following its incorrect distribution, rather than the one you want) after about 1000 * (RAND_MAX / RANGE) samples regardless (the bias still exists for a smaller number of samples, it's just harder to detect).

A More Reasonable Answer

So does that mean that fixing rand() is infeasible, and that we really should be trying to push everyone to use some of the more sophisticated generators from Marsaglia or the Mersenne Twister? Not necessarily. First of all, those generators are not always portable, they are not always as fast as rand(), and they include details that steepen their learning curves. If you can get over these hurdles, I wholeheartedly recommend you study them and pick the one you find most appropriate. But the much simpler goal of getting an even distribution is actually very achievable, and it also applies to those more sophisticated random number generators:

    /* This actually works correctly. */
    #define RAND_INV_RANGE(r) ((int) ((RAND_MAX + 1) / (r)))

    do {
        x = rand();
    } while (x >= RANGE * RAND_INV_RANGE (RANGE));
    x /= RAND_INV_RANGE (RANGE);

The idea is very simple. Just reject a small number of values ((RAND_MAX % RANGE)) off the top of the range of output of rand(), forcing a retry if such values are encountered. The output will be an exactly equally distributed choice from the number range [0, RANGE) for any value of RANGE that is less than or equal to RAND_MAX. This does nothing about the 3rd problem above. 
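Both the bias of the naive modulo approach and the exactness of the rejection method can be demonstrated by exhaustively enumerating every possible generator output. A Python sketch (with a deliberately tiny, hypothetical RAND_MAX so the bias is visible at a glance):

```python
from collections import Counter

RAND_MAX = 31   # tiny stand-in so every raw output can be enumerated
RANGE = 10

# Naive: x = rand() % RANGE, over all possible raw outputs 0..RAND_MAX.
# 32 values mod 10: residues 0 and 1 occur 4 times, the rest 3 times.
naive = Counter(r % RANGE for r in range(RAND_MAX + 1))

# Rejection: discard the values at the top of the raw range, then
# divide, mirroring the RAND_INV_RANGE scheme above.
inv = (RAND_MAX + 1) // RANGE
fixed = Counter(r // inv for r in range(RAND_MAX + 1) if r < RANGE * inv)
```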
For the moment we are going to ignore this. Now, of course, one might be concerned that this may have poor running time, since it may have to call rand() multiple times before producing a single result. So let's do some calculations to see how bad this really can be in practice.

Let p = (RAND_MAX % RANGE) / (RAND_MAX + 1.0). This is the probability that any given call to rand() will require a retry. Note that this value is maximized when RANGE*2 = RAND_MAX+3, which yields a value of p roughly equal to 1/2.

1. The average number of times that rand() will be called is 1/(1-p) (with a worst case of about 2).

2. The probability that at least n calls to rand() will be required beyond the first one is p^n.

So for most values of p, which will be much less than 1/32 say, performance should not be a concern at all. The question is, then, what should be done about values of p which are too large? Well, if it's because RAND_MAX is too small, and in fact RAND_MAX * RAND_MAX < INT_MAX (a very typical, and sad, situation), then the simplest solution is to just construct a better rand() function:

    #define XRAND_MAX (RAND_MAX*(RAND_MAX + 2))

    unsigned int xrand (void) {
        return rand () * (RAND_MAX + 1) + rand ();
    }

Otherwise, multiprecision arithmetic would be required (at which point one might as well pick up one of the alternative random number generators). We can see that this alternative will help with the 3rd problem above by expanding the range. However, RANGE may still be too large. 
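The widening trick in xrand() can be checked exhaustively for a tiny hypothetical RAND_MAX: every pair of underlying outputs maps to a distinct value, and the maximum is RAND_MAX*(RAND_MAX + 2), matching the XRAND_MAX definition. A Python sketch:

```python
RAND_MAX = 7   # tiny stand-in so we can enumerate every pair of calls

# Every value rand()*(RAND_MAX+1) + rand() can produce, over all pairs
# of underlying outputs; each pair yields a distinct value.
values = [a * (RAND_MAX + 1) + b
          for a in range(RAND_MAX + 1)
          for b in range(RAND_MAX + 1)]

XRAND_MAX = RAND_MAX * (RAND_MAX + 2)   # = 63 for RAND_MAX = 7
```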
Another Alternative

The other obvious alternative to fixing the rand() function is to construct a random floating point number from it, so that range fixing will be more straightforward:

    #include <stdlib.h>

    #define RS_SCALE (1.0 / (1.0 + RAND_MAX))

    double drand (void) {
        double d;
        do {
            d = (((rand () * RS_SCALE) + rand ()) * RS_SCALE + rand ()) * RS_SCALE;
        } while (d >= 1); /* Round off */
        return d;
    }

    #define irand(x) ((unsigned int) ((x) * drand ()))

The result returned by drand() is a double precision floating point number in the range [0,1). The irand(r) macro will return a random integer in the range [0,r). This has the advantage that if RAND_MAX = 32767 (which is very typical) and your platform has a double with a 53 bit mantissa (also very typical), then this actually produces a bit-faithful random number with 45 bits of precision. 45 bits will be large enough for pretty much any practical RANGE; however, it is still not perfect (so technically problem 3 is still not fully addressed).

The theoretical disadvantages are two-fold: 1) overflows in the mantissa (which actually result from RAND_MAX being too large, or the platform's double mantissa being too small) will create roundings that introduce a slight bias on the order of 1 ULP (Unit in the Last Place); notice that the extremely unlikely case of erroneous overflows is shielded by the do-while() loop. 2) The conversion back to integer can introduce a bias of about 1 ULP. A bias of 1 ULP is typically so small that it is not even realistically feasible to test for its existence from a statistical point of view.

The more tangible disadvantages are that 1) rand () is definitely called 3 times (which is worse than the worst case for the average expected running time of the earlier solution), and that 2) rand () is usually actually quite terrible at guaranteeing that successive calls to it behave with apparent independence. 
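A Python transcription of the drand()/irand() pair (a sketch: a seeded random.Random stands in for rand(), and RAND_MAX = 32767 is the typical value mentioned above):

```python
import random

RAND_MAX = 32767
RS_SCALE = 1.0 / (1.0 + RAND_MAX)
_rng = random.Random(2004)          # seeded stand-in for rand()

def drand():
    """Three rand()-sized draws glued into one double in [0,1), as in
    the C version; the loop guards against round-up to 1.0."""
    while True:
        d = ((_rng.randrange(RAND_MAX + 1) * RS_SCALE
              + _rng.randrange(RAND_MAX + 1)) * RS_SCALE
             + _rng.randrange(RAND_MAX + 1)) * RS_SCALE
        if d < 1.0:
            return d

def irand(x):
    """Random integer in [0, x)."""
    return int(x * drand())

samples = [irand(10) for _ in range(10000)]
```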
Generalizing to a real range

One of the advantages of the drand() function given above is that it extends trivially to floating point probability ranges. So if you want an indicator function on a random sample with a bias of x, then it is simply drand() < x. To get the probability precisely, we can use rand() to give us a ranged slot and see if it falls entirely below or entirely above x. If the slot straddles x, then we refine the choice within that slot (to a sub-slot) and repeat:

    #include <stdlib.h>

    #define RS_SCALE (1.0 / (1.0 + RAND_MAX))

    int randbiased (double x) {
        for (;;) {
            double p = rand () * RS_SCALE;
            if (p >= x) return 0;
            if (p + RS_SCALE <= x) return 1;
            /* p < x < p + RS_SCALE */
            x = (x - p) * (1.0 + RAND_MAX);
        }
    }

Although this function also technically can loop an unbounded number of times, this time around the probabilities of successive loops drop off extremely quickly; the expected number of iterations is 1 + 1.0/RAND_MAX. I.e., it's not worth pursuing a fix for this.

Sampling from an arbitrary discrete distribution

Continuing with the interval idea from above for a boolean distribution, we can now proceed to implement a generalized discrete distribution (i.e., one with a finite number of outcomes, but with arbitrary probabilities). We start by creating a sorted sequence which represents the cumulative distribution. I.e., we create the mapping i => [ slots[i-1], slots[i] ), where slots[-1] = 0, slots[n-1] = 1 and slots[i-1] <= slots[i]. So the probability of choosing the ith entry is slots[i] - slots[i-1]. This requires an array of n-1 (not n) entries to be constructed, in which the -1 and n-1 indexes are omitted. 
The function will return a value of 0 to n-1 inclusive:

    #include <stdlib.h>

    #define RS_SCALE (1.0 / (1.0 + RAND_MAX))

    /* A non-overflowing average function */
    #define average2scomplement(x,y) (((x) & (y)) + (((x) ^ (y))/2))

    size_t randslot (const double slots[/* n-1 */], size_t n) {
        double xhi;

        /* Select a random range [x,x+RS_SCALE) */
        double x = rand () * RS_SCALE;

        /* Perform binary search to find the intersecting slot */
        size_t hi = n-2, lo = 0, mi;
        while (hi > lo) {
            mi = average2scomplement (lo, hi);
            if (x >= slots[mi]) lo = mi+1; else hi = mi;
        }

        /* Taking slots[-1]=0.0, this is now true: slots[lo-1] <= x < slots[lo] */

        /* If slots[lo-1] <= x < x+RS_SCALE <= slots[lo] then any point in
           [x,x+RS_SCALE) is in [slots[lo-1],slots[lo]) */
        if ((xhi = x+RS_SCALE) <= slots[lo]) return lo;

        /* Otherwise x < slots[lo] < x+RS_SCALE */
        for (;;) {
            /* x < slots[lo] < xhi */
            if (randbiased ((slots[lo] - x) / (xhi - x))) return lo;
            x = slots[lo];
            lo++;   /* advance to the next slot */
            if (lo >= n-1) return n-1;
            if (xhi <= slots[lo]) {
                /* slots[lo-1] = x <= xhi <= slots[lo] */
                return lo;
            }
        }
    }

Now by this point, you might think that this sort of thing is excessive. However, we can immediately see the justification for this function by considering the distribution:

    [0, 0.3 / RAND_MAX), [0.3 / RAND_MAX, 1.0 / RAND_MAX), [1.0 / RAND_MAX, 1.0)

It would be difficult to sample from this distribution accurately without a solution similar to the above. Ok, and now we see we can finally address the 3rd problem completely. If we replace slots[x] above with x/(double)n then we will finally have a solution for what we were originally looking for. 
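Since randbiased() does the heavy lifting for randslot(), here is a Python transcription of it (a sketch: a seeded random.Random stands in for rand()). The edge cases x = 0 and x = 1 are exact, and the hit rate for an intermediate x matches the requested probability.

```python
import random

RAND_MAX = 32767
RS_SCALE = 1.0 / (1.0 + RAND_MAX)
_rng = random.Random(42)            # seeded stand-in for rand()

def randbiased(x):
    """Return 1 with probability exactly x: pick a slot [p, p+RS_SCALE);
    if it straddles x, rescale x into that slot and retry."""
    while True:
        p = _rng.randrange(RAND_MAX + 1) * RS_SCALE
        if p >= x:
            return 0
        if p + RS_SCALE <= x:
            return 1
        x = (x - p) * (1.0 + RAND_MAX)   # zoom into the straddled slot

hits = sum(randbiased(0.25) for _ in range(20000))   # expect ~5000
```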
But we can simplify this massively:

#include <stdlib.h>
#include <math.h>

#define RS_SCALE (1.0 / (1.0 + RAND_MAX))

size_t randrange (size_t n) {
    double xhi;
    double resolution = n * RS_SCALE;
    double x = resolution * rand (); /* x in [0,n) */
    size_t lo = (size_t) floor (x);

    xhi = x + resolution;

    for (;;) {
        lo++;
        if (lo >= xhi || randbiased ((lo - x) / (xhi - x))) return lo-1;
        x = lo;
    }
}

Seeding the random number generator

The standard, typical method for seeding the random number generator is to do the following:

#include <stdlib.h>
#include <time.h>

srand (time (NULL));

This is fine so long as one does not perform this seeding operation at any rate higher than once per second. So the question is, can we somehow increase the number of times we can reseed and still expect variance in the actual seeds? The obvious idea is to try to get multiple sources of entropy and iterate through them slowly:

#include <stdlib.h>
#include <limits.h>
#include <time.h>

static struct {
    int which;
    time_t t;
    clock_t c;
    int counter;
} entropy = { 0, (time_t) 0, (clock_t) 0, 0 };

static unsigned char * p = (unsigned char *) (&entropy + 1);
static int accSeed = 0;

int reseed (void) {
    if (p == ((unsigned char *) (&entropy + 1))) {
        switch (entropy.which) {
            case 0:
                entropy.t += time (NULL);
                accSeed ^= entropy.t;
                break;
            case 1:
                entropy.c += clock ();
                break;
            case 2:
                entropy.counter++;
                break;
        }
        entropy.which = (entropy.which + 1) % 3;
        p = (unsigned char *) &entropy.t;
    }
    accSeed = ((accSeed * (UCHAR_MAX+2U)) | 1) + (int) *p;
    p++;
    srand (accSeed);
    return accSeed;
}

So we are using time(NULL), clock() and an incrementing counter as the sources of entropy. Obviously there's a high degree of dependency amongst all these sources. But the point of an entropic sequence is not for it to be a pure source of random numbers by itself (that's what the PRNG is for). The only requirement we have is some degree of non-deterministic variability, which we should expect.
The formula for accSeed will tend to create at least a sort of pseudo-random sequence which is deterministically perturbed by the entropic sources. On a 32-bit system each entropy source is fetched, on average, once every 36 times that reseed() is called. If more entropy sources are added, this will increase the time between repeated fetches from each source. Examples of entropy sources that might be appropriate are: 1) an on-disk counter, 2) the current process ID, 3) processor clock counter (the TSC MSR on the x86, for example) values sampled when certain events (such as network responses, mouse movements, keypresses, etc.) occur.

Download sources for the above.

Going beyond rand()

The discussion so far has focused on trying to make sure that the aggregate number of each possible output from the random number selection is a single constant. In mathematical terms, we are trying to find a random number generator that satisfies E(|{X=x}^n|) = np (the C.L.C. FAQ solution fails simply by virtue of choosing a large enough n whenever 1/p is not a divisor of RAND_MAX), as well as hoping for a couple of other properties such as E(|{X+αX=x (mod RAND_MAX+1)}^n|) = np (since just an incrementing counter satisfies the first test alone). Generators such as the various Marsaglia generators or the Mersenne Twister are primarily concerned with testing the following: P(X^k = (x[0], x[1], ..., x[k-1])) = p^k. In non-math speak, it means that every plausible sub-sequence of sequentially fetched random numbers is equally probable, or that successively generated random numbers appear independent from the ones generated immediately before them. One might, very roughly, quantify how good these generators are by how high a value of k they can satisfy (how many outputs you can produce before you can, with sufficient computing resources, determine that not every possible sequence of that length is equally represented).
For example, the Mersenne Twister (which is portable to systems with 32-bit integers) has a k-value of roughly 600 (corresponding to a cycle of length 2^19937-1) and Marsaglia's CMWC generator (which, unfortunately, requires assembly language support) has a k-value of roughly 1000 (corresponding to a cycle length of about 10^10007). Real-world applications (even lotteries) would hardly ever need a k-value of more than about 16 or so, and more typically can get away with values less than 5.

Cryptographers have entirely different needs for random numbers. For example, the requirement of choosing very large prime numbers is necessary for public-key based cryptography. For cryptographers, the state of the random number generator must not be deducible from the examination of any number of outputs with any amount of computing power. The Fortuna random number generator by Schneier et al. (which is an improvement over the Yarrow random number generator) hides the output using non-reversible hash functions (like SHA-256) and uses multiple sources of entropy. The idea is that even if the complete state of the generator is known (itself highly unlikely), it will very quickly become unknown again. So the sequence of numbers output is not determinable with anything short of complete state knowledge and control over the entropy. As with anything having to do with cryptography, it pays to listen to what the experts say before attempting to roll a home-grown solution.

One final note should be made with regard to multithreading systems. We must remember Knuth's warning from The Art of Computer Programming, that randomly constructed algorithms don't necessarily lead to a good source of random numbers. The same comment applies to race conditions affecting random number servers. Although it would seem that allowing race conditions would be harmless for PRNGs, this is not the case.
First of all, many random number generators are susceptible to "0-sticking"; that is, if their internal seeds all end up going to 0 simultaneously, they would proceed to issue constant-only output. Secondly, a race condition could cause the internal state to skip an update, meaning that the same number could be output more than once in a row solely because of the race. Third, the mechanisms of a PRNG carry it from state to state along a maximal path -- unpredictably perturbing it cannot increase the path but in fact only decrease it, thus decreasing the length of the PRNG's cycle. So in multithreaded systems, PRNGs should be properly mutexed, just like malloc() is.
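A minimal sketch of that mutexing, using POSIX threads (the wrapper name rand_locked() is ours, and error checking is omitted for brevity):

```c
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t rng_lock = PTHREAD_MUTEX_INITIALIZER;

/* Serialize access to the PRNG so that no two threads step its
   internal state at once; this avoids the skipped-update and
   cycle-shortening problems described above. */
int rand_locked (void) {
    int r;
    pthread_mutex_lock (&rng_lock);
    r = rand ();
    pthread_mutex_unlock (&rng_lock);
    return r;
}
```

Every thread then calls rand_locked() in place of rand(); srand() would need the same treatment.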
{"url":"http://www.azillionmonkeys.com/qed/random.html","timestamp":"2014-04-20T03:22:12Z","content_type":null,"content_length":"23725","record_id":"<urn:uuid:84f5eba3-561c-49f9-a73f-21a0e310bc82>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
Page:United States Statutes at Large Volume 34 Part 1.djvu/553

This page needs to be proofread.

FIFTY-NINTH CONGRESS. Sess. I. Ch. 3561. 1906. 523

Two cooks, four hundred and thirty-two dollars;
Eight corporals, one thousand four hundred and forty dollars;
One hundred and fifty-seven privates, twenty-four thousand four hundred and ninety-two dollars;
Additional pay for length of service, eleven thousand one hundred and twelve dollars;
Clothing on discharge, three thousand nine hundred and sixty-six dollars;
Interest on deposits of enlisted men, one thousand and twelve dollars;
For travel allowances due enlisted men on discharge, one hundred and thirty-two dollars;

Cavalry detachment.

For pay of cavalry detachment: One first sergeant, three hundred dollars;
Five sergeants, one thousand and eighty dollars;
Two cooks, four hundred and thirty-two dollars;
Five corporals, nine hundred dollars;
Two trumpeters, three hundred and twelve dollars;
Two farriers and blacksmiths, three hundred and sixty dollars;
One saddler, one hundred and eighty dollars;
One wagoner, one hundred and sixty-eight dollars;
Eighty-one privates (cavalry), twelve thousand six hundred and thirty-six dollars;
Additional pay for length of service, two thousand one hundred and eighty dollars;
Clothing on discharge, one thousand eight hundred dollars;
Traveling allowances to enlisted men on discharge, eight hundred and twenty dollars;
Interest on deposits to enlisted men, one hundred dollars;

Artillery detachment.

For pay of artillery detachment: One first sergeant, three hundred dollars;
Five sergeants, one thousand and eighty dollars;
One cook, two hundred and sixteen dollars;
Four corporals, seven hundred and twenty dollars;
One farrier and blacksmith, one hundred and eighty dollars;
One saddler, one hundred and eighty dollars;
One wagoner, one hundred and sixty-eight dollars;
Two trumpeters, three hundred and twelve dollars;
Fifty-nine privates, nine thousand two hundred and four dollars;
For additional pay for enlisted men of the Military Academy detachment of field artillery found duly qualified as first-class gunners, at two dollars per month each, two hundred and forty dollars;
For additional pay for enlisted men of the Military Academy detachment of field artillery found duly qualified as second-class gunners, at one dollar per month each, one hundred and twenty dollars;
Additional pay for length of service, one thousand five hundred dollars;
Clothing on discharge, one thousand two hundred dollars;
Interest on deposits due enlisted men, one hundred and fifty dollars;
Travel allowances to enlisted men on discharge, seven hundred and fifty dollars;

Extra pay, enlisted men.

For extra pay of two enlisted men employed as clerks in the office of the adjutant, United States Military Academy, at fifty cents each per day, three hundred and sixty-five dollars;
For extra pay of two enlisted men employed as clerks in the office of the commandant of cadets, at fifty cents each per day, three hundred and sixty-five dollars;
For extra pay of four enlisted men as printers, at headquarters United States Military Academy, at fifty cents each per day, six hundred and twenty-six dollars;
{"url":"http://en.wikisource.org/wiki/Page:United_States_Statutes_at_Large_Volume_34_Part_1.djvu/553","timestamp":"2014-04-17T13:36:39Z","content_type":null,"content_length":"25822","record_id":"<urn:uuid:8e7a9fad-0480-441b-bdbd-d647b18f1a96>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
Argument, principle of the

From Encyclopedia of Mathematics

argument principle

A geometric principle in the theory of functions of a complex variable. It is formulated as follows: Let $D$ be a bounded domain in the complex plane whose boundary $\partial D$ is a continuous curve, oriented so that $D$ lies on the left, and let $f$ be a function that is meromorphic in a neighbourhood of $\overline{D}$ with no zeros or poles on $\partial D$. Then the difference between the number of zeros $N$ and the number of poles $P$ of $f$ in $D$ (counted with multiplicity) equals the increment of the argument of $f$ along $\partial D$, divided by $2\pi$:

$$N - P = \frac{1}{2\pi}\,\Delta_{\partial D}\arg f(z) = \frac{1}{2\pi i}\int_{\partial D}\frac{f'(z)}{f(z)}\,dz.$$

The principle of the argument is used in the proofs of various statements on the zeros of holomorphic functions (such as the fundamental theorem of algebra on polynomials, the theorem of Hurwitz on zeros, etc.). From the principle of the argument follow many other important geometric principles of function theory, e.g. the principle of invariance of domain (cf. Invariance, principle of), the maximum-modulus principle and the theorem on the local inverse of a holomorphic function. In many questions the principle of the argument is used implicitly, in the form of its corollary: the Rouché theorem.

There are generalizations of the principle of the argument. The right-hand side above is the logarithmic residue of $f$ with respect to $\partial D$ (cf. Logarithmic residue). For this reason, the following statement is sometimes called the generalized principle of the argument. If $g$ is holomorphic in a neighbourhood of $\overline{D}$, then

$$\frac{1}{2\pi i}\int_{\partial D} g(z)\,\frac{f'(z)}{f(z)}\,dz = \sum_{k} g(a_k) - \sum_{j} g(b_j)$$

holds, where the first sum extends over all zeros $a_k$ and the second sum extends over all poles $b_j$ of $f$ in $D$, counted with multiplicity. An analogue of the principle of the argument also exists for functions of several complex variables.

References

[1] M.A. Lavrent'ev, B.V. Shabat, "Methoden der komplexen Funktionentheorie", Deutsch. Verlag Wissenschaft. (1967) (Translated from Russian)
[2] B.V. Shabat, "Introduction to complex analysis", 2, Moscow (1976) (In Russian)

How to Cite This Entry:
Argument, principle of the. E.M. Chirka (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Argument,_principle_of_the&oldid=15915

This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
{"url":"http://www.encyclopediaofmath.org/index.php/Argument,_principle_of_the","timestamp":"2014-04-21T04:38:18Z","content_type":null,"content_length":"23338","record_id":"<urn:uuid:a98607ed-914f-4f2a-8143-3c5265786c46>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
In mathematics you don't understand things. You just get used to them. -- John von Neumann.

● Time and Place: 5:00-6:15 pm MW at 509 Smith Hall.
● Instructor: Peter Saveliev (call me Peter)
● Office: 778F Smith Hall (in the alley)
● Office Hours: TR 3-6 (tentative), or by appointment, or any time I am there (usually in the afternoon)
● Office Phone: x4639
● Home Phone: 697-7827 (9 a.m. - 8 p.m.)
● E-mail: saveliev@marshall.edu (put Math 491 as the subject)
● Class Web-Page: http://users.marshall.edu/~saveliev/m491.htm
● Course Content: Introductory algebraic topology.
● Prerequisites: Good knowledge of linear algebra as well as some fundamentals of group theory (Math 450), in particular, quotient groups, properties of Abelian groups, and examples.
● Text: Computational Homology by Tomasz Kaczynski, et al.
● Short Description: Cubical sets and cubical homology. Simplicial complexes and simplicial homology. Homology of maps. Degree theory. Fixed points. Applications: Image processing and recognition; Structure of proteins; Topology of data structures; Topology of networks; Computational topology. Handout
● Grade Breakdown:
- weekly assignments: 30%
- midterm: 20%
- final presentation: 20%
- final exam/essay/project: 30%
● Letter Grades: A: 90-100, B: 80-89, C: 70-79, D: 60-69, F: <60
{"url":"https://mubert.marshall.edu/bert/syllabi/318620050212730859486672.html","timestamp":"2014-04-20T11:03:43Z","content_type":null,"content_length":"12110","record_id":"<urn:uuid:ed5bccfc-820f-4646-9a6a-9aa55a9d7953>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
Dehn surgery on handlebody

Assume $V$ is a handlebody and $C$ is a simple closed curve contained in the interior of $V$. As Sam said, there exist simple closed curves such that every Dehn surgery along them produces a handlebody, so I assume $C$ is not isotopic to a simple closed curve in $\partial V$. Obviously, trivial Dehn surgery along $C$ produces a handlebody. So my questions are: Is there a different Dehn surgery along $C$ which produces a handlebody? Can we classify all the Dehn surgeries along $C$ which produce handlebodies?

gt.geometric-topology 3-manifolds at.algebraic-topology

With respect, the third line is unclear and needs to be rewritten. Also, it would help to have a definition of unknotted. – Sam Nead Dec 28 '11 at 10:58
I can point out that if $C$ is a core curve (meets some essential disk in a single point and is also isotopic into the boundary) then every Dehn filling on $C$ gives a handlebody. – Sam Nead Dec 28 '11 at 11:01
@Sam. Thank you for your advice. I have rewritten it. – yanqing Dec 28 '11 at 11:44
@Yanqing - There are many pairs $(V, C)$ where we may embed $V$ into three-space and $V$ is a standard handlebody, $C$ is an unknot in three-space, and yet $C$ is very complicated inside of $V$. So, with your current definition, I see no hope for a reasonable answer to your question. That said, you might be interested in the ideas of "cosmetic Dehn surgery" and "Property R". best – Sam Nead Dec 28 '11 at 12:29
@Sam. Lots of thanks. – yanqing Dec 28 '11 at 13:31

1 Answer

There's an extensive literature on this and more general questions. First, let's consider a more precise formulation of the question. Let $H$ be a handlebody, and let $K\subset H$ be a knot. Let's assume that $\partial H \subset H-K$ is incompressible. Otherwise, $H=H_0 \natural H_1$, the boundary connect sum of two handlebodies, such that $K\subset H_0$, and $\partial H_0 \subset H_0-K$ is incompressible.
Then a surgery on $K$ makes $H$ a handlebody if and only if the corresponding surgery makes $H_0$ a handlebody. Thus, one may reduce to considering the case that $K$ is diskbusting. With this reformulation of the question, the case that $K$ is a core curve corresponds to when $H$ is a solid torus and $K$ is a core curve (since this is the only case in which a core curve is diskbusting). So assume that $K$ is not a core curve.

If $K$ is isotopic into $\partial H$, then there are infinitely many surgeries which yield $H$. In this case, there is an annulus going between the knot and $\partial H$. The surgery slopes which intersect the annulus slope once give back $H$. Surgery along the annular slope gives a manifold containing an incompressible surface, by Jaco's lemma. If the intersection with the annulus slope is $>1$, then the manifold has incompressible boundary by Theorem 2.4.3 of the cyclic surgery paper.

A result of Wu implies that for knots $K$ which are not isotopic into the boundary, the distance between boundary-reducible surgeries is at most one, and therefore there are at most two non-trivial surgeries which may yield a handlebody. In the case that $H$ is a solid torus, it was proved by Berge and Gabai that $K$ must be a 1-bridge braid, and a complete description was given. There is a famous example (the Berge link) which yields 3 solid torus surgeries, which shows that Wu's estimate is sharp (there is a 3-fold symmetry permuting the slopes). Wu has further results on the case of 1-bridge knots in handlebodies.

Added: Frigerio, Martelli and Petronio show that there are 1-bridge knots in handlebodies with three handlebody fillings (including the trivial one), generalizing the example of the Berge link to higher genus boundary, and showing that Wu's theorem is sharp in general. It seems to be an open question whether all such examples are 1-bridge (see discussion in another of Wu's papers).
In a joint paper with Martelli and Petronio (Dehn filling of cusped hyperbolic 3-manifolds with geodesic boundary, J. Diff. Geom. 64 (2003), 425-456), for every g>1 we gave examples of knots in the genus-g handlebody with the following properties: the complement of each such knot is hyperbolic (with one cusp and a geodesic boundary component), so in particular each such knot is not isotopic to the boundary of the handlebody; each of them has exactly three surgeries giving back the handlebody. Every knot in our family is 1-bridge. – Roberto Frigerio Dec 28 '11 at 22:58
Thanks for the reference Roberto, sorry for overlooking your result! – Ian Agol Dec 29 '11 at 0:17
@Agol. In the second-to-last paragraph, should $H$ be a handlebody? – yanqing Dec 29 '11 at 2:56
"With this reformulation of the question, the case that $K$ is a core curve corresponds to when $H$ is a solid torus, and $K$ is a core curve." Are the two $K$'s the same here? – yanqing Dec 29 '11 at 1
@Agol: R. Sean Bowman constructed knots in a genus 2 handlebody that have a non-trivial handlebody filling but are not 1-bridge. His results are here: arxiv.org/pdf/1206.1959v1.pdf. – Neil Hoffman Dec 19 '12 at 13:40
{"url":"http://mathoverflow.net/questions/84440/dehn-surgery-on-handlebody/84463","timestamp":"2014-04-18T21:01:27Z","content_type":null,"content_length":"66052","record_id":"<urn:uuid:1e7686c9-65fa-4e46-8f8b-f5a4efc87690>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Sadri Hassani, “Mathematical Methods: For Students of Physics and Related Fields” (repost)

Jun 24, 2011

Sadri Hassani, “Mathematical Methods: For Students of Physics and Related Fields”
Publisher: Springer | ISBN: 0387095039 | edition 2008 | PDF | 831 pages | 12.5 mb

Intended to follow the usual introductory physics courses, this book has the unique feature of addressing the mathematical needs of sophomores and juniors in physics, engineering and other related fields. Many original, lucid, and relevant examples from the physical sciences, problems at the ends of chapters, and boxes to emphasize important concepts help guide the student through the material.

Beginning with reviews of vector algebra and differential and integral calculus, the book continues with infinite series, vector analysis, complex algebra and analysis, ordinary and partial differential equations. Discussions of numerical analysis, nonlinear dynamics and chaos, and the Dirac delta function provide an introduction to modern topics in mathematical physics. This new edition has been made more user-friendly through organization into convenient, shorter chapters. Also, it includes an entirely new section on Probability and plenty of new material on tensors and integral transforms.
{"url":"http://ebooksfreedownload.org/2011/06/sadri-hassani-%E2%80%9Cmathematical-methods-for-students-of-physics-and-related-fields%E2%80%9D-repost.html","timestamp":"2014-04-16T21:54:23Z","content_type":null,"content_length":"64802","record_id":"<urn:uuid:c248415b-bccc-48d6-a9ed-9d431038256a>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry of a Modulational Instability

Jared Bronski, Urbana-Champaign

We present a long-wavelength theory for the stability of periodic traveling wave solutions to equations of KdV type, $u_t + (f(u))_x = u_{xxx}$, for essentially arbitrary $f$. Some examples are $f(u) = u^3 + a u^2$ (MKdV), which governs internal waves, and $f(u) = u^2 + a u^{3/2}$, which arises in plasmas. The stability theory for solitary waves is well-developed, but the analogous periodic problem is much less well understood. We give a rigorous construction of the spectrum of the linearized operator in a neighborhood of the origin in the spectral plane, and construct two stability indices. The first of these detects instability to perturbations of the same period, while the second detects instability to long-wavelength perturbations (a modulational instability). These stability indices can be expressed in terms of Jacobians of the map between the constants of integration of the traveling wave ODE and the conserved quantities of the PDE. This is, in essence, a rigorous Whitham modulation theory for the spectrum of the linearized operator. This is joint work with Mathew Johnson.
{"url":"http://cims.nyu.edu/ams/abstracts/bronski.html","timestamp":"2014-04-20T20:56:08Z","content_type":null,"content_length":"2399","record_id":"<urn:uuid:bc12e1b9-e928-4ea2-a01e-35e36645e509>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
triple integral

1. March 16th 2013, 12:12 PM #1

2. March 16th 2013, 12:32 PM #2
Re: triple integral
you just integrate it with respect to x? so it becomes: $\sin{\sqrt{y^z}}\,x^{-3/2}$, integrate that to give: $-2\sin{\sqrt{y^z}}\,x^{-1/2}$
so the $\sin{\sqrt{y^z}}$ is just a constant with respect to $x$.
yay, the LaTex worked first time! lol im a noob
Last edited by iMaths; March 16th 2013 at 12:40 PM.

3. March 16th 2013, 04:01 PM #3
Re: triple integral
My eyes are bad. I thought there was an 'xz' within the sin function. I was trying to do an integration by parts, but it's much simpler.
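Writing out the inner x-integration discussed in post #2, since $\sin(\sqrt{y^z})$ does not depend on $x$:

```latex
\int \sin\!\left(\sqrt{y^z}\right) x^{-3/2}\,dx
= \sin\!\left(\sqrt{y^z}\right)\int x^{-3/2}\,dx
= -2\,\sin\!\left(\sqrt{y^z}\right) x^{-1/2} + C,
```

using $\int x^{-3/2}\,dx = \dfrac{x^{-1/2}}{-1/2} = -2x^{-1/2}$.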
{"url":"http://mathhelpforum.com/calculus/214894-triple-integral.html","timestamp":"2014-04-17T21:02:49Z","content_type":null,"content_length":"35232","record_id":"<urn:uuid:6d6b59c4-3f97-4b4a-bc1d-0b7ea3a8195f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
The Mathematical Experience From Wikipedia, the free encyclopedia The Mathematical Experience (1981) is a book by Philip J. Davis and Reuben Hersh that discusses the practice of modern mathematics from a historical and philosophical perspective. Its first paperback edition won a U.S. National Book Award in Science.^1^a It is frequently cited by mathematicians as a book that was influential in their decision to continue their studies in graduate school and has been hailed as a classic of mathematical literature.^2 In accordance with its title, it attempts to describe, in light of the turbulent history and philosophy of mathematics, the experience of being a mathematician. It focuses on the proof, without going fully into the rigorous how-to details, gives examples of some highly interesting and famous proofs, as well as the outstanding problems of mathematics (the Riemann hypothesis, etc.), and goes on to speculate on what a proof really means, in relationship to actual truth. Other topics include mathematics in education and some (obviously-outdated, but still mostly relevant) computer mathematics. The book was generally well-received but drew a critical review from Martin Gardner, who disagreed with some of the authors' philosophical opinions.^3 A new edition, published in 1998, includes exercises and problems, making the book more suitable for classrooms. There is also The Companion Guide to The Mathematical Experience, Study Edition. Both were co-authored with Elena A. Marchisotto. The authors wrote a follow-up book, Descartes' Dream: The World According to Mathematics (Harcourt, 1986), and each has written other books with related themes, such as Mathematics And Common Sense: A Case of Creative Tension by Davis and What is Mathematics, Really? by Hersh. 1. ^ This was the 1983 award for paperback Science. 
From 1980 to 1983 in National Book Award history there were dual hardcover and paperback awards in most categories, and several nonfiction subcategories including General Nonfiction. Most of the paperback award-winners were reprints, including this one.
1. ^ "National Book Awards – 1983". National Book Foundation. Retrieved 2012-03-07.
2. ^ jkauzlar (perhaps James Joseph Kauzlarich?) (18 September 2002). "MathForge.net--Power Tools for Online Mathematics". Archived from the original on 2006-10-02. "One of the classics of mathematical literature, The Mathematical Experience, by Philip J Davis and Rueben Hersh, remains pertinent and fulfills its lofty ambitions even 20 years past its 1981 publication."
3. ^ Gardner, Martin (August 13, 1981). "Is Mathematics for Real?". New York Review of Books: 37–40.
{"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=The_Mathematical_Experience","timestamp":"2014-04-17T04:18:37Z","content_type":null,"content_length":"76809","record_id":"<urn:uuid:bba968c0-21e2-4b78-9cb4-0f3d306593e9>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by jane on Saturday, June 13, 2009 at 7:28pm. A 0.400 kg ball is shot directly upward at initial speed 40.0 m/s. What is its angular momentum about a point 2.0 m horizontal from the launch point, when the ball is a) at maximum height and b) halfway back to the ground? What is the torque on the ball about the point 2m from the horizontal launch point due to the gravitational force when the ball is c) at maximum height and d) halfway back to the ground? I know that because max height is v=0, there is 0 angular momentum, but i can't figure out the rest • physics - Damon, Saturday, June 13, 2009 at 7:57pm Do the second question first. (It is worded incorrectly I think because there is in fact no torque on the ball since the force goes right through its center. There is however a torque about that point 2 meters from the launch point) The torque is the same at the top and halfway down because the perpendicular distance from the force vector to the launch point is always 2 meters. The magnitude of the torque = m g (2) Well, we better figure out how fast it is going when it is halfway down. You already answered for at the top, zero because velocity is zero. a = - 9.8 v = 40 - 9.8 t h = 0 + 40 t - 4.9 t^2 when h = 20 20 = 40 t - 4.9 t^2 4.9 t^2 - 40 t + 20 = 0 t = (1/9.8)[ 40 +/- sqrt(1600 -392)] t = (1/9.8) [ 40 +/- 34.8 ] t = 7.63 on the way down (the plus sign) v = 40 - 9.8(7.63) = -34.8 m/s angular momentum = m V x R = m (-34.8)(horizontal distance which is 2) = -69.6 * mass • physics - jane, Saturday, June 13, 2009 at 8:28pm THANK YOU SO MUCH! Related Questions Physics - A ball is thrown straight upward. At 6.30 m above its launch point, ... Physics - A ball is thrown straight upward and rises to a maximum height of 23 m... Physics - A ball is thrown straight upward and rises to a maximum height of 20 m... physics - A ball is thrown straight upward and rises to a maximum height of 14.6... 
Physics - A ball is thrown straight upward and rises to a maximum height of 16 m... physics - In the absence of air resistance two balls are thrown upward from the ... phyics - A ball is thrown straight upward. At 5.00 m above its launch point, the... physics - Figure 8-31 shows a ball with mass m = 1.0 kg attached to the end of a... Physics - A ball (mass = 250 g) on the end of an ideal string is moving in ... physics - A PARTICLE OF MASS M IS SHOT WITH an initial velocity v making an ...
{"url":"http://www.jiskha.com/display.cgi?id=1244935723","timestamp":"2014-04-21T13:53:37Z","content_type":null,"content_length":"9716","record_id":"<urn:uuid:ae02ad71-69f3-46a3-b3b0-892e1dfcdf3e>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
[R-sig-phylo] phangorn package; pml tree Klaus Schliep klaus.schliep at gmail.com Tue Jul 20 14:23:43 CEST 2010 Dear Erwan, there are two different functions 'pml' which estimates the likelihood for the given parameters and returns an object of class 'pml' and 'optim.pml' which performs optimisation on an object of class 'pml'. In your case you first to construct an object of class 'pml' assume you have the base frequencies given in a vector bf like bf = c(0.1, 0.2, 0.3, 0.4) fit = pml(mytree, mydata, k=4, bf=bf) fit #tells you a little bit about your model so far. # if you want to use a Gamma model you need to specify the # number of rate classes via k (here k=4) # update.pml(fit) is a convenience function if you want to change just # one or two parameters e.g. fit = update(fit, bf = c(.25, .25, .25, .25)) # changes the base frequencies of your 'pml' object #The next step is to optimise the model: fit1 = optim.pml(fit, optNni=TRUE, optGamma=TRUE, model='GTR') # this optimises a GTR model (base frequencies and rate matrix), # performs NNI rearrangements and optimises the shape parameter. # If you want to have the base frequencies fixed you would use fit2 = optim.pml(fit, optNni=TRUE, optQ=TRUE, optGamma=TRUE) # or by calling the appropriate model: 'SYM' # fit2 = optim.pml(fit, optNni=TRUE, optGamma=TRUE, model='SYM') anova(fit2, fit1) I hope this helped you a bit. If you have further question, do not bother to send me another mail. On 7/19/10, Erwan DELRIEU-TROTTIN <erwan.delrieu-trottin at etu.upmc.fr> wrote: > Hi everyone, > Phangorn package allows to perform ML trees after having chosen a model. > I've run jmodeltest which has told me that a model GTR+G would be > appropriate for my data. > I would like to enter the different parameters in the pml function in > order to draw a tree. > There's an example for pml but I don't fully understand it. 
> To perform a JC + Gamma + I - model, they do:
> fitJC_GI <- update(fitJC, k=4, inv=.2)
> I is the proportion of invariable sites, but I don't understand why
> .2 is chosen in that example.
> What would it be if you wanted to perform GTR+G? or +F? What is a GTR + F model??
> We can fix bf, the base frequencies, and I would like to, but I do not
> find how to code it: should I put the different values between brackets?
> I hope that my approach is appropriate.
> Many thanks in advance for your help.
> Erwan Delrieu-Trottin
> --
> Erwan Delrieu-Trottin, PhD student
> USR 3278 CNRS - EPHE
> Centre de Biologie et Ecologie Tropicale et Méditerranéenne
> Université de Perpignan, 52 Av. Paul Alduy
> 66860 Perpignan cedex, France
> e-mail : erwan.delrieu-trottin at etu.upmc.fr
> _______________________________________________
> R-sig-phylo mailing list
> R-sig-phylo at r-project.org
> https://stat.ethz.ch/mailman/listinfo/r-sig-phylo

--
Dr. Klaus Schliep
Postdoctoral Fellow
Université Paris 6 (Pierre et Marie Curie)
9, Quai Saint-Bernard, 75005 Paris

More information about the R-sig-phylo mailing list
{"url":"https://stat.ethz.ch/pipermail/r-sig-phylo/2010-July/000711.html","timestamp":"2014-04-19T05:26:20Z","content_type":null,"content_length":"6294","record_id":"<urn:uuid:59e8e467-ffdb-426a-804d-6c4270783c8a>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Stability in neutral nonlinear differential equations with functional delays using fixed-point theory. (English) Zbl 1083.34536

The paper deals with the stability of the zero solution of a scalar neutral differential equation whose coefficients $a$, $b$, $g$ and $q$ are continuous functions of their arguments. Noting that the construction of a Lyapunov functional solving this problem is an open problem (the difficulties that arise are illustrated by the case $q\equiv 0$), the author obtains sufficient conditions for the stability of the zero solution on the basis of the contraction mapping principle applied to the equivalent Volterra-type integral equation. Both bounded and unbounded delays are considered, and the obtained results are illustrated by examples.

34K20 Stability theory of functional-differential equations
34K40 Neutral functional-differential equations
{"url":"http://zbmath.org/?q=an:1083.34536","timestamp":"2014-04-16T22:32:32Z","content_type":null,"content_length":"22105","record_id":"<urn:uuid:a0eea40e-1ac5-4586-8492-98e66b72af76>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
April 16th 2007, 10:23 PM   #1

Hi, I need help on this weird-looking integral:

int_{x^3}^{x^2} dt/(1 + t^2)^3

How would you start this problem when the limits of integration are not values but rather functions?

April 17th 2007, 03:59 AM   #2

The same way you do any other integral. It's just that, instead of a constant, the Riemann integral will now be a function of x.
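Assuming the exercise (as is typical with variable limits) is to differentiate F(x) = ∫_{x^3}^{x^2} dt/(1+t^2)^3 with respect to x, the Leibniz rule gives F'(x) = f(x²)·2x − f(x³)·3x², where f(t) = 1/(1+t²)³. A quick numerical sketch confirming this (the function names below are just for illustration):

```python
# F(x) = integral from x^3 to x^2 of dt/(1+t^2)^3; its derivative by the
# Leibniz rule is f(upper)*upper' - f(lower)*lower'. Checked against a
# finite-difference derivative of a simple midpoint-rule quadrature.

def f(t):
    return 1.0 / (1.0 + t * t) ** 3

def F(x, n=20000):
    # midpoint-rule approximation of the integral from x^3 to x^2
    a, b = x ** 3, x ** 2
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def F_prime(x):
    # Leibniz rule: f(x^2) * d(x^2)/dx - f(x^3) * d(x^3)/dx
    return f(x ** 2) * 2 * x - f(x ** 3) * 3 * x ** 2

x0 = 0.5
numeric = (F(x0 + 1e-5) - F(x0 - 1e-5)) / 2e-5
assert abs(numeric - F_prime(x0)) < 1e-6
```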
{"url":"http://mathhelpforum.com/calculus/13816-integration.html","timestamp":"2014-04-17T08:27:47Z","content_type":null,"content_length":"36635","record_id":"<urn:uuid:52677e70-e49e-4185-9ae3-b782d29745f5>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Cutting Tool Engineering | March 2013 | Shop Operations | Generating spherical surfaces

A unique manual milling method is available for generating geometrically true spherical surfaces. This technique can be used to machine convex and concave spherical surfaces. Other than the milling machine, the only tools needed are a boring head and a rotary table.

If you have a CNC lathe or mill, this is really just an academic exercise. The technique is interesting in that it is self-correcting and self-proving, which is not true of CNC equipment. If you don't have any CNC machines, you can add a neat trick to your toolbox. I learned this technique years ago from my old toolmaker friend Charlie. When he first told me about it, I was skeptical until I tried it. If you have a computer drafting program, you can make short work of the math and setup angle. This method is far superior for forming tools and beats the pants off swinging arc fixtures because the spherical surface is a true geometric generation. The spherical form is limited only by the accuracy of the machine spindle and the rotary table—two intersecting circular paths that produce a true spherical surface.

All images courtesy of T. Lipton.

Imagine a cutting tool that only cuts a hollow circle, kind of like a hole saw. When the tool is set at an angle other than the axis of the rotary table and the part is rotated under the tool, a spherical surface is generated. The boring head is tipped at an angle that represents the chord of the desired spherical segment. A single-point cutting tool is applied and, depending on whether the form is concave or convex, the cutting edge is reversed. For convex surfaces, the cutting edge faces inward. For concave surfaces, the cutting edge faces outward, as it would in normal boring head work. As the tool is advanced into the workpiece, the rotary table is rotated through 360°. The rotary table is also fed into the tool along the X-axis.
When you first try this method, use plastic so you can quickly see exactly what is happening before you try it on important parts. There are three variables you must understand to get controllable results. The first involves basic calculations. The second is the setup and the third is the execution—actually doing it.

A single-point cutting tool sweeps through a circle that has no thickness on one side of the cutting edge. If you think about how a ring of any size smaller than the spherical surface can lie in full contact with the sphere, you can visualize how the cutting action takes place. The material that projects into the ring is cut as the part rotates under the cutting tool. This leaves a spherical surface the size of the ring. Any plane that cuts through a sphere produces a true circle, no matter what the angle.

In the hemisphere illustration, we can see the basic graphical setup for cutting a full hemisphere 2 " in diameter. The chord in this case is 1.414 ". This is the diameter the boring head would be set at (1.414 ") or a little larger to cut the 2 " diameter. The spindle would be tilted 45° relative to the rotary table axis to cut a full hemisphere. You can see from the drawing that no other angle would produce a full hemisphere. The axis of the spindle must be perpendicular to the segment chord. The spindle centerline passes through the midpoint of the chord. The chord is also the hypotenuse of the right triangle whose legs are the maximum rise of the arc and the distance from the centerline to the endpoint of the arc. For other radii and partial segments, a little math is required to get the chord and the angle.

We can use our drawing example to illustrate the math. There is no official name for the diameter the boring head is set to, so I call it the "swept diameter" (SD). For OD work, the SD should be set at the chord size or larger. For ID work, the SD should be set smaller or the same as the chord. Angles less than 45° produce less than a full hemisphere.
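The chord-and-angle math above reduces to two formulas: for a segment spanning φ degrees of arc from the pole of a sphere of radius R, the chord (the swept diameter to set on the boring head) is 2R·sin(φ/2) and the spindle tilt from the rotary-table axis is φ/2. A small sketch (the function name is mine, not from the article):

```python
import math

def spherical_setup(radius, segment_angle_deg):
    """Chord length (swept diameter for the boring head) and spindle tilt
    from the rotary-table axis, for a spherical segment spanning
    segment_angle_deg of arc measured from the pole of the sphere."""
    phi = math.radians(segment_angle_deg)
    chord = 2 * radius * math.sin(phi / 2)   # chord of the arc segment
    tilt_deg = segment_angle_deg / 2         # spindle tilt angle
    return chord, tilt_deg

# The article's example: 2"-dia. sphere (radius 1"), full hemisphere = 90° of arc
chord, tilt = spherical_setup(1.0, 90.0)
print(round(chord, 3), tilt)   # 1.414 45.0
```

A smaller segment angle gives a shorter chord and a shallower tilt, matching the statement that angles less than 45° produce less than a full hemisphere.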
Angles greater than 45° produce greater and greater portions of the sphere until you reach a maximum of 180° for a full sphere. Once you go past 45°, the boring head must be set accurately to the chord length before you reach the finish diameter. You can adjust this as you rough the part, taking measurements as you go.

In actual practice, you can't cut a full sphere in one setup. You still have to hold the part and rotate it somehow. To produce a full sphere, you must use two separate holding setups. CTE

About the Author: Tom Lipton is a career metalworker who has worked at various job shops that produce parts for the consumer product development, laboratory equipment, medical services and custom machinery design industries. He has received six U.S. patents and lives in Alamo, Calif. Lipton's column is adapted from information in his book "Metalworking Sink or Swim: Tips and Tricks for Machinists, Welders, and Fabricators," published by Industrial Press Inc., New York. The publisher can be reached by calling (888) 528-7852 or visiting www.industrialpress.com. By indicating the code CTE-2013 when ordering, CTE readers will receive a 20 percent discount off the book's list price of $44.95.
{"url":"http://www.ctemag.com/aa_pages/2013/130310-ShopOps.html","timestamp":"2014-04-17T13:28:25Z","content_type":null,"content_length":"8548","record_id":"<urn:uuid:06588de6-5049-4544-b9ec-ab439bce0d58>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
A New Process Model for Functions
- Proc. 4th International Workshop on Graph Grammars, 1991
"... This paper gives some examples of how computation in a number of languages may be described as graph rewriting, giving the Dactl notation for the examples shown. It goes on to present the Dactl model more formally before giving a formal definition of the syntax and semantics of the language. 2 Examp ..."
Cited by 34 (7 self)
This paper gives some examples of how computation in a number of languages may be described as graph rewriting, giving the Dactl notation for the examples shown. It goes on to present the Dactl model more formally before giving a formal definition of the syntax and semantics of the language. 2 Examples of Computation by Graph Rewriting

, 1995
"... In first approximation Core Facile is a simply typed λ-calculus enriched with parallel composition, dynamic channel generation, and input-output synchronous communication primitives. In this paper we explore the (dynamic) semantics of core Facile programs. This should be taken as a basis for the def ..."
Cited by 20 (2 self)
In first approximation Core Facile is a simply typed λ-calculus enriched with parallel composition, dynamic channel generation, and input-output synchronous communication primitives. In this paper we explore the (dynamic) semantics of core Facile programs. This should be taken as a basis for the definition of abstract machines, the transformation of programs, and the development of modal specification languages. We claim two main contributions. We introduce a new semantics based on the notion of barbed bisimulation. We argue that the derived equivalence provides a more satisfying treatment of restriction; in particular, by proving the adequacy of a natural translation of Facile into the π-calculus we suggest that our approach is in good harmony with previous research on the semantics of sub-calculi of Core Facile such as Chocs and the π-calculus.
We illustrate at an abstract level various aspects of Facile compilation. In particular we introduce an `asynchronous' version of the Facile language...

- Proc. CONCUR '95, volume 962 of Lecture Notes in Computer Science, 1995
"... This paper introduces an operational semantics for call-by-need reduction in terms of Milner's π-calculus. The functional programming interest lies in the use of the π-calculus as an abstract yet realistic target language. The practical value of the encoding is demonstrated with an outline for a paralle ..."
Cited by 6 (1 self)
This paper introduces an operational semantics for call-by-need reduction in terms of Milner's π-calculus. The functional programming interest lies in the use of the π-calculus as an abstract yet realistic target language. The practical value of the encoding is demonstrated with an outline for a parallel code generator. From a theoretical perspective, the π-calculus representation of computational strategies with shared reductions is novel and solves a problem posed by Milner [13]. The compactness of the process calculus presentation makes it interesting as an alternative definition of call-by-need. Correctness of the encoding is proved with respect to the call-by-need λ-calculus of Ariola et al. [3]. 1 Introduction Graph reduction of extended λ-calculi has become a mature field of applied research. The efficiency of the implementations is due in great measure to a technique known as `sharing', whereby argument values are computed (at most) once and then memoized for future reference. Both...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3270413","timestamp":"2014-04-25T02:51:29Z","content_type":null,"content_length":"18432","record_id":"<urn:uuid:08877adb-c1eb-4450-b55d-3115e592672a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00423-ip-10-147-4-33.ec2.internal.warc.gz"}
Some assorted questions (includes implicit differentiation, inverse trig funcs, etc)

July 10th 2012, 08:38 PM

I'm sure this is easy for most of you math wizards, but it's really difficult for me to identify which laws and stuff to use and when. For example: if I want to differentiate that, do I have to use the multiplication rule? What would that differentiate out to?

I also have a lot of trouble with radians and points that contain pi. I've never been able to wrap my head around it. If I have the point (-pi/4, 1) and need to plug that point into an equation after I've solved for dy/dx, how does it work? If I plug (-pi/4) into, say, cos(x), what does it come out to? And what's an easy way to understand this so I can apply it to all trig functions? I just don't get how pi works in this context.

2) How do I find the extreme values (abs and local) of a function? Especially one that uses pi in the given range. Is there a good online tutorial you could point me to? I don't really want the problem simply done for me; I want to try and understand it.

Thanks for any help. I really appreciate it as I am struggling here.

July 11th 2012, 03:53 AM

Re: Some assorted questions (includes implicit differentiation, inverse trig funcs, etc)

Assuming y is a function of x and you want to differentiate with respect to x, then the product rule yes, but also the chain rule, because arctan y is a composite function where y is the inner function of x.

Just in case a picture helps... [picture omitted; key in spoiler]

Pauls Online Notes : Calculus I

Don't integrate - balloontegrate!
Balloon Calculus; standard integrals, derivatives and methods
Balloon Calculus Drawing with LaTeX and Asymptote!
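On the radian question above: -pi/4 is just a number (about -0.785, the radian measure of -45°), so plugging it into a trig function is ordinary evaluation. A quick illustrative check:

```python
import math

# The x-coordinate of the point (-pi/4, 1) is simply a real number.
x = -math.pi / 4
print(math.cos(x))   # cos(-pi/4) = sqrt(2)/2 ≈ 0.7071
print(math.sin(x))   # sin(-pi/4) = -sqrt(2)/2 ≈ -0.7071
```

The same works for any trig function; if degrees are easier to think in, math.radians converts them before evaluating.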
{"url":"http://mathhelpforum.com/calculus/200846-some-assorted-questions-includes-implicit-differentiation-inverse-trig-funcs-etc-print.html","timestamp":"2014-04-19T22:42:26Z","content_type":null,"content_length":"8576","record_id":"<urn:uuid:b1b725fd-a611-4254-a867-11a2e6e3f4dc>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
CEOI 1996 example solution

I would like to know the solution. I wrote a heuristic algorithm, but it does not write out the optimal solutions. Please help. Here is the example:

Cutting rectangle. You are given a rectangle whose side lengths are integer numbers. You want to cut the rectangle into the smallest number of squares, whose side lengths are also integer numbers. Cutting can be performed with a cutting machine that can cut only from side to side across, parallel with one side of the rectangle. Obtained rectangles are cut separately.

Input Data
The input file contains two positive integers in the first line: the lengths of the sides of the rectangle. Each side of the rectangle is at least 1 and at most 100.

Output Data
The output file consists of one line on which your program should write the number of squares resulting from an optimal cutting.

Example
CUTS.IN
5 6
CUTS.OUT
5

Apart from the fact that the question is ambiguous, what's your problem?

> Here is the example:
Nevermind someone else's answer, where is your attempt?

If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.

My first solution is to divide with the little side of the rectangle, but that solution is not the optimal result. I think I have to make it with dynamic programming with an array, but I don't know how to start it. I would like to know the recursive algorithm which I have to use to fill the array with the solution. Then I search the optimal solution in the array...
Here is the heuristic algorithm (but this is not the optimal solution). I wrote it in Java; I use arrays to store the square sizes and the number of pieces.

Code:

import java.util.*;
import java.io.*;

public class feladat {
    private static int A;
    private static int B;
    private static int F;
    private static int C = 1;   // number of distinct square sizes so far
    private static int[] D;     // D[i] = how many squares of size E[i]
    private static int[] E;     // E[i] = side length of the i-th square size

    private static void ReadIn() throws IOException {
        BufferedReader in = new BufferedReader(new FileReader("in.txt"));
        StringTokenizer line = new StringTokenizer(in.readLine());
        A = Integer.parseInt(line.nextToken());
        B = Integer.parseInt(line.nextToken());
        in.close();
        D = new int[100];
        E = new int[100];
    }

    // Greedy, Euclidean-style step: cut off y/x squares of side x,
    // then recurse on the remaining (y mod x) by x strip.
    private static void Divide(int x, int y) {
        if (x > y) { F = x; x = y; y = F; }
        if (x == y) {
            D[C] = 1; E[C] = x;
        } else if (y % x != 0) {
            D[C] = y / x; E[C] = x; C++;
            Divide(y % x, x);
        } else {
            D[C] = y / x; E[C] = x;
        }
    }

    private static void POut() throws IOException {
        PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("out.txt")));
        int m = 0;
        for (int i = 1; i <= C; i++) {
            m += D[i];
        }
        out.println(m);
        for (int i = 0; i < C; i++) {
            out.println(E[C - i] + " " + D[C - i]);
        }
        out.close();
    }

    public static void main(String[] args) throws Exception {
        ReadIn();
        Divide(A, B);
        POut();
    }
}

So why are you posting Java code on a C++ board?

Moved.
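For the dynamic-programming approach the original poster asked about: because the machine only makes edge-to-edge (guillotine) cuts, every cut splits a rectangle into two independent subproblems, so a memoized recursion over (width, height) finds the optimum for this problem. A sketch in Python (not the poster's Java, just to show the recurrence):

```python
from functools import lru_cache

# f(w, h) = minimum number of squares when only guillotine cuts are
# allowed: either the rectangle is already a square, or we try every
# vertical and horizontal cut and take the best split.

@lru_cache(maxsize=None)
def min_squares(w, h):
    if w == h:
        return 1
    best = w * h                              # worst case: all 1x1 squares
    for i in range(1, w // 2 + 1):            # vertical cuts
        best = min(best, min_squares(i, h) + min_squares(w - i, h))
    for j in range(1, h // 2 + 1):            # horizontal cuts
        best = min(best, min_squares(w, j) + min_squares(w, h - j))
    return best

print(min_squares(5, 6))   # 5, matching the sample CUTS.OUT
```

For the sample, cutting the 5x6 rectangle into a 2x6 strip and a 3x6 strip yields three 2x2 squares plus two 3x3 squares, which the recursion finds.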
{"url":"http://cboard.cprogramming.com/tech-board/84809-ceoi-1996-example-solution.html","timestamp":"2014-04-17T05:02:07Z","content_type":null,"content_length":"56600","record_id":"<urn:uuid:6653da04-81be-4f16-87e8-13a106615d06>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
hard diophantine equation

up vote 10 down vote favorite

Hi everyone. Does the equation x^3+y^5=z^7 have a solution (x,y,z) with x,y,z positive integers and (x,y)=1? In his book H. Cohen (Number Theory, 2007) said "[...] seems presently out of reach". I couldn't find any suggestion beyond Cohen's book. Thanks in advance,

Montanari Fabio
department of math
university of bologna
italy
e-mail montana@dm.unibo.it

1 It would be worthwhile to add x^3+y^5=z^7 to the title of this question. – j.c. Feb 23 '10 at 21:49

add comment

4 Answers active oldest votes

up vote 9 down vote

There is no claim in my cv or elsewhere that me and Sander have solved the equation x^3+y^5+z^7=0. All my cv claims is that we're writing a paper on it! That's not the same thing.

All the best, Samir

1 Ha! Good point:) – David Zureick-Brown♦ Feb 22 '10 at 22:09

add comment

I'm surprised that Bjorn Poonen hasn't chimed in yet. However, you can read about the approaches to solving equations like this in Frits Beukers' lectures "The generalized Fermat Equation"
I have forgotten Darmon's argument that in the hyperbolic case the number of solutions are finite, but I do remember that he reduces to Faltings' theorem so surely the argument will work in the general number field case...(famous last words) – Kevin Buzzard Feb 21 '10 at 12:44 I haven't looked at the Darmon/Granville argument lately, but, from what I remember, they first show, in the hyperbolic case, that the primitive solutions must lie on a finite set of curves of genus 2, and then invoke Falting's theorem. However, I'm not sure if the first applies over general number fields. There might be some subtlety with units (i.e. if there is an infinite unit group). But, I would think that you might be able to figure this out from Beukers' paper above. – Victor Miller Feb 21 '10 at 14:29 One point about the equation over number fields: the most natural way of defining a solution to be primitive would be $\min(\text{ord}_{\pi}(a),\text{ord}_{\pi}(b),\text{ord}_{\pi}(c)) = 0$, for all primes $\pi$, but then you could always multiply such a solution by suitable units and get another, so you'd need a stronger definition of "primitive". – Victor Miller Feb 21 '10 at 14:48 2 I took the trouble to look at the Darmon/Granville paper. It says (on the version on Darmon's web page) near the bottom of p16 "It is easy to see that the proof extends to...arbitrary number fields". They take care to formulate the statement in such a way in the number field case that units don't mess them up. – Kevin Buzzard Feb 22 '10 at 11:01 show 1 more comment Sander Dahmen and Samir Siksek are [Edit] writing a paper [end Edit] about this (according to Samir's cv, under papers in preparation), but there is no draft on his web page. up vote 3 down vote Do they find a solution to the equation? – Petya Feb 20 '10 at 0:33 add comment I may be misunderstanding the question, but I do not believe that it has any integer solutions. At the very least, none are known to exist at the moment. 
Any solutions would be counterexamples to the Fermat-Catalan conjecture with {m,n,k} = {3,5,7} (since 1/3 + 1/5 + 1/7 = 71/105 < 1). The most I can tell you is that, for coprime {x,y,z}, there are finitely many up vote 2 solutions to your equation. I think your (x,y) = 1 means that they're coprime, anyway, so it follows that z must be coprime. Therefore, any solution at all would disprove the related Beal's down vote conjecture. The Fermat-Catalan conjecture says that there are only finitely many solutions to this Diophantine equation, not that there are none. – Michael Lugo Feb 23 '10 at 21:17 I said it would be a counterexample, not that it would disprove. However, this particular solution, since the exponents are all higher than 2, WOULD be a counterexample to Beal's conjecture. – Gabriel Benamy Feb 23 '10 at 21:24 @Gabriel: "I said it would be a counterexample, not that it would disprove.". I think that in common mathematical parlance, if you find a counterexample to something, you've disproved it. And the issue is not really whether there are no solutions, but whether mathematics in its current state has enough techniques to prove this. To give you some background---these sorts of equations (explicit m,n,k the sum of whose reciprocals is only just less than one) are slowly being picked off by experts in the area nowadays. No-one is close to a general argument though. – Kevin Buzzard Feb 24 '10 at 7:43 add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/15844/hard-diophantine-equation/15965","timestamp":"2014-04-16T20:02:36Z","content_type":null,"content_length":"74043","record_id":"<urn:uuid:d29e43a0-9697-4828-9ea6-02de1d433c80>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry for Elementary School/Some impossible constructions In the previous chapters, we discussed several construction procedures. In this chapter, we will number some problems for which there is no construction using only ruler and compass. The problems were introduced by the Greek and since then mathematicians tried to find constructions for them. Only in 1882, it was proven that there is no construction for the problems. Note that the problems have no construction when we restrict ourself to constructions using ruler and compass. The problems can be solved when allowing the use of other tools or operations, for example, if we use Origami. The mathematics involved in proving that the constructions are impossible are too advanced for this book. Therefore, we only name the problems and give reference to the proof of their impossibility at the further reading section. Impossible constructionsEdit Squaring the circleEdit The problem is to find a construction procedure that in a finite number of steps, to make a square with the same area as a given circle. Doubling the cubeEdit To "double the cube" means to be given a cube of some side length s and volume V, and to construct a new cube, larger than the first, with volume 2V and therefore side length ³√2s. Trisecting the angleEdit The problem is to find a construction procedure that in a finite number of steps, constructs an angle that is one-third of a given arbitrary angle. Further readingEdit Last modified on 15 April 2014, at 20:17
{"url":"https://en.m.wikibooks.org/wiki/Geometry_for_Elementary_School/Some_impossible_constructions","timestamp":"2014-04-20T01:32:45Z","content_type":null,"content_length":"18650","record_id":"<urn:uuid:36fb6bab-768e-46c8-8ae1-9df056458535>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
A better algorithm November 29th, 2013, 08:07 AM #1 Junior Member Join Date Nov 2013 A better algorithm Sorry if this question has been asked before but I need some help on this as I am stuck on this for more than one year. The problem is as follows - I have three arrays A,B,C of same size let say n and I need to find out all possible (i,j,k) such that A[i]+B[j]=C[k]. And we need to find this in O(n^2) complexity for which I found one solution like Step 1: Add all A[i]+B[j] as the key and (i,j) as values in a hash table.This gives n^2 no of keys and values. Step 2: Traverse for all C[k] and probe the hash table.If mach found (i,j,k) is one pair and so on. This looks like an O(n^2) solution.But can it be done better or at least O(n^2) in some different approach ? Re: A better algorithm This looks like an O(n^2) solution. Say you instead insert the C values into the hash table. Then you generate the A+B sums in a pair of nested loops and check which sums are in the table. That's also O(n^2) but with a much smaller hash table. And it has the additional advantage that you can now eat away on the O(n^2) complexity somewhat by executing the outermost loop in parallel. With n cores the complexity would reduce down to O(n). Maybe sorting of the arrays could lower complexity but I'm not certain. At least it's a lead. Last edited by razzle; November 30th, 2013 at 05:45 PM. Re: A better algorithm Thanks for your reply and a nice suggestion to insert C values in hash table as this will require smaller hash table. But by better I mean less than O(n^2) let say O(n*logn) as your solution is also O(n^2) Re: A better algorithm If you are allowed to sort A, B and C then yes, you can maybe find this in fewer steps by eliminating impossible values in A, B and C. Technically possible without sorting as well but the elimination phase will be more complex. Additionally, by having tables sorted, you can early out on each possible step. 
This'll be more complex, so it will only pay off for sufficiently large values of n. Re: A better algorithm as your solution is also O(n^2) I know that and I said so. Still my solution gives a much smaller hash table and is much easier to parallelize, and that's not too bad. I've given this another thought and considered sorting of the arrays as a way to reduce complexity but unfortunately I couldn't come up with anything. Sorting may improve speed but how much will be data dependent. It will dependend on the values stored in A, B and C so it's not a complexity reduction. In principle sorting can accomplish that all A+B values that lie outside the range of the C values can be discarded in advance. This takes O(n * log n) to set up. But if all A+B values fall within the C range nothing is gained. So the algorithm still is O(n*n) and to me it seems very much like that's the best that can be accomplished. I'll have to leave it at that. Last edited by razzle; December 5th, 2013 at 09:36 PM. Re: A better algorithm One more advantage of inserting C[k] in hash table is also there will be no collision as none of the array contains duplicate on the other hand a[i] + b[j] for different (i,j) pair might produce same value and result into collision while inserting in hash table.So nice improvement. Thanks razzle. Re: A better algorithm Last edited by razzle; December 9th, 2013 at 07:45 AM. Re: A better algorithm regarding the optimal algorithm complexity, clearly you can always find a worst case input where the number of (i,j,k) such as ck=ai+bj scales as n^2, so, their mere enumeration implies an algorithm no better then O(n^2); regarding the avarage case instead, you should specify better the probability distribution of the input in order to give an answer. 
anyway, assuming A and B sorted, non negative and without duplicates, an algorithm that doesn't require a hash table consists in traversing, for every k and starting from the biggest A[i]<=C[k], the strip of A[i]+B[j] closest to C[k]; this can be done in O(N) hence giving an overall complexity again of O(N^2). BTW, if you're just interested in knowing, for every k, how many the (i,j) such as ck=ai+bj are, and you're willing to trade space for speed, then this can be computed via a convolution that, in turn, can be computed in O(nlogn) with the help of the FFT. This information could also be used to speed up the full O(N^2) algorithm when N becomes very big ... Re: A better algorithm regarding the avarage case instead, you should specify better the probability distribution of the input in order to give an answer. That's not how you establish the average complexity of an algorithm. You do that by assuming average input. Re: A better algorithm That's not how you establish the average complexity of an algorithm. You do that by assuming average input. ... and how do you define the "avarage input" ? whether it's done explicitly or not, it always boils down to the choice of a family of prior probability distributions. Re: A better algorithm You can reduce O(n²) to O((n-m)²) (with m being [0..n]) with a setup stage. With a large value of n, even a small value of m can make a huge difference in runtime. the problem can't be further complexity reduced, but you can increase performance to less than the flat squareroot of n (or n-m) Technically, since you can't guarantee a reduction is even possible the technical difficulty of this problem remains the original O(n²). Just because a problem is O(anything) doesn't mean it's runtime is necessarily linear to that 'anything'. It gives you a rough guide/estimate to worst case execution time. Re: A better algorithm ... and how do you define the "avarage input" ? 
whether it's done explicitly or not, it always boils down to the choice of a family of prior probability distributions. Your approach will measure the average performance of the algorithm for a certain input. It will not determine the average-case complexity of the algorithm in general. Last edited by razzle; December 14th, 2013 at 03:06 AM. Re: A better algorithm You approach will measure the average performance of the algorithm for a certain input. It will not determine the average-case complexity of the algorithm in general. No no no no no.... O(n) complexity means that you can calculate/guess a WORST CASE performance for n-input. (this isn't the same as the complexity for a worst case dataset !). the O-complexity typically gives a "average case complexity" (=average datasets) but for many algorithms you get a best case/average case/worst case complexity notation. Take quicksort. THis has a O(n²) worst case complexity, and thus, assuming the worst possible datasets: if you can sort 10 items in 100 (10²) seconds, you can "guestimate" 100 items to sort in no more than 10000 (100²)seconds. it's entirely possible that some internal optimisation or shortcut makes it run considerably faster, but it shouldn't however take significantly more than 10000. The "average case" scenario for quicksort is O(n log(n)) again, this means that if we have an "average dataset" we can guestimate a worst case runtime for an n-input set. Optimsations that can "early out"/"shortcut out" of specific data conditions don't/can't influence the complexity formula because they're data dependant, but can make code run in considerably less time than the O-complexity would indicate. Re: A better algorithm No no no no no.... You seem to agree with me in principle so would you please be more specific about what you are opposed to. In short my point is that there are two kinds of complexity averages. 
One is in response to one specific dataset (say a certain probability distribution) and the other is in response to any possible dataset (any length, any distribution, any everything). They may coinside but one cannot assume that. In my view it's the second kind that's the proper average-case complexity measure. It's because it measures the inherent rather than the relative complexity of an algorithm. Re: A better algorithm One is in response to one specific dataset (say a certain probability distribution) and the other is in response to any possible dataset (any length, any distribution, any everything). an avarage "with respect to everything" is either non-sense or it can be reduced to the choice of a ( possibly degenerate and/or a family of ) probability space. Yes, one can define probability theory in terms of avarages only ( this can be even very useful, both technically and conceptually, C* algebras used in quantum mechanics being notable examples ) but, conceptually, probabilities and avarages are intrinsically bound concepts ( probabilities are avarages of "counting" observables, avarages are functionals on random variables ( aka, measureable functions between probability spaces ) ). Of course, in order to define an O() asymptotics one needs to tell apart an indipendent variable "n" somewhere with respect to which probabilities are parametrized, but there are many ways of doing such a thing, and certainly none of them qualifies as "the inherent avarage complexity" of an algorithm ( unless we're speaking of a randomized algorithm, but this is a different story (*) ) ... 
(*) BTW, speaking of quicksort and randomized algorithms, in some academic level algorithm books, authors even refuse to compute a loosely defined avarage complexity ( stressing its poor usefulness in real world scenarios ), they directly compute the complexity of its randomized version where the input is randomly permuted before the non-randomized algorithm is invoked, hence allowing to compute an avarage for each input size 'n'. But, this is not the "inherent" avarage complexity, it's just a way of presenting the result with respect to ( a still restricted ) family of input distributions in disguise. November 30th, 2013, 02:04 AM #2 Join Date Jul 2013 December 1st, 2013, 11:41 PM #3 Junior Member Join Date Nov 2013 December 4th, 2013, 07:13 AM #4 Elite Member Join Date Apr 2000 Belgium (Europe) December 5th, 2013, 09:32 PM #5 Join Date Jul 2013 December 9th, 2013, 06:34 AM #6 Junior Member Join Date Nov 2013 December 9th, 2013, 07:32 AM #7 Join Date Jul 2013 December 9th, 2013, 09:46 AM #8 Senior Member Join Date Oct 2008 December 9th, 2013, 04:42 PM #9 Join Date Jul 2013 December 10th, 2013, 02:03 AM #10 Senior Member Join Date Oct 2008 December 10th, 2013, 08:11 AM #11 Elite Member Join Date Apr 2000 Belgium (Europe) December 12th, 2013, 05:35 AM #12 Join Date Jul 2013 December 12th, 2013, 07:47 AM #13 Elite Member Join Date Apr 2000 Belgium (Europe) December 14th, 2013, 05:32 AM #14 Join Date Jul 2013 December 14th, 2013, 08:01 AM #15 Senior Member Join Date Oct 2008
{"url":"http://forums.codeguru.com/showthread.php?542553-Algorithm-for-Data-Structure-Supply-and-Demand&goto=nextoldest","timestamp":"2014-04-18T09:50:19Z","content_type":null,"content_length":"161001","record_id":"<urn:uuid:54421246-469f-4161-aa3d-4f56b755d8b8>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00628-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: miniaturization

Stephen G Simpson simpson at math.psu.edu
Thu Sep 16 20:08:48 EDT 1999

Dear Jan,

First I'll reply to some of your individual points, then I'll comment
on the general issue that you have raised.

Jan Mycielski 15 Sep 1999 16:38:37 writes:

 > the separation between those who know enough logic to understand
 > Con(T) and those who do not seems too subjective to motivate any
 > mathematical work.

If the distinction between Con(T) and other mathematical statements
cannot motivate mathematical work, then what is the motivation for
your work on S(T)? I know that your work is somehow motivated by
finitism, but aren't Con(T) and S(T) equally finitistic?

The first question above is rhetorical or ironical. The reality is
that there is a large ``understandability gap'', because core
mathematicians and other scientists do not understand Con(T) except in
the relatively rare cases where they have studied mathematical logic
up through Gödel's incompleteness theorem. And even when they
understand Con(T), there is still an ``appreciation gap'', because
they do not consider it to be ``natural'' from the viewpoint of core
mathematics.

So, we logicians have a problem. The problem is that, while we
logicians appreciate the crucial importance and general intellectual
interest of statements like Con(PA) and Con(ZFC), there is no obvious
way to convey this appreciation to our colleagues in core math and
other scientific disciplines.

I believe your work on S(T) and the work of Paris-Harrington and
Friedman on finite combinatorial independence results are both
motivated in part by the twin problems of the ``understandability
gap'' and the ``appreciation gap''. But they approach the problem in
very different ways and with very different outcomes. There is no a
priori reason to think that either approach should replace or
supersede the other.

I mention this now, because in your original ``miniaturization''
posting of September 14, you seemed to suggest that your work ought to
render that of Paris-Harrington and Friedman obsolete or irrelevant.
In order to evaluate your suggestion, I spelled out the
Paris-Harrington statement in complete detail, and then I asked you:

 > > Now, what is your statement S(PA) exactly? After you spell out
 > > S(PA) in complete detail here on the FOM list, we can judge
 > > whether it is as mathematically natural and appealing as P-H.

You replied:

 > JM: As told above I did it in JSL 51 (1986), pp. 59 - 60, and it
 > took only 17 lines (for any T, and not only for PA). But copying
 > those lines here without the availability of subscripts and Greek
 > letters would be too ugly. Please consult JSL 51.

OK, I will consult JSL 51 for the precise statement of S(PA). But
even before consulting JSL 51, your description of S(PA) seems to
indicate that S(PA) must be mathematically much more cumbersome and
less appealing than P-H. If S(PA) is 17 lines long and cannot be
written without the use of subscripts and Greek letters, then that
sounds ugly indeed! By contrast, P-H is only 5 lines long and needs
only ordinary Roman letters. :-)

Incidentally, my earlier statement of P-H contained a typographical
error. Here it is again, with the typo corrected.

  For all k, l, m there exists n so large that, if you color the
  k-element subsets of {1,...,n} with l colors, then there will be a
  subset X of cardinality at least m all of whose k-element subsets
  have the same color, and such that the cardinality of X is greater
  than the smallest element of X.

See? No Greek letters, no subscripts, 5 lines, mathematically
appealing.

Incidentally, Harvey has some statements that imply Con(PA) yet are
even more mathematically appealing than P-H from some points of view.
One such set of statements comes out of the Kruskal stuff. For the
most recent developments in this vein, see Harvey's FOM posting of
20 Oct 1998 10:13:42.
By the way, when you say that S(PA) is 17 lines long, does that
include the statement of the axioms of PA? If not, then we had better
add those axioms, in order to make the statement of S(PA)
mathematically self-contained, like the statement of P-H above. So
now we could be up to around 25 lines. Is this correct?

I was hoping I could get you to take your best shot at writing S(PA)
here on the FOM list, so that we can compare and contrast it to P-H.
But now I am beginning to see why you don't want to do that.

 > JM: If you do it once in my way you have it for any theory T and
 > moreover the statement is equivalent to Con(T).

Yes. S(T) is defined uniformly in terms of T, and it is provably
equivalent to Con(T) over some weak base theory. But Con(T) itself
also has these same nice features. How is S(T) better than Con(T)?
Is this question adequately answered in JSL 51 or Lavine's book? I
will have a look ....

Also, if S(T) can be explained only in terms of the axioms of T, then
that would seem to automatically move S(T) away from mathematical
naturalness. For instance, the axioms of ZFC are not directly
connected to the normal working context of core mathematicians.

Now let's get back to the larger issue that you raised: What is the
point of P-H and Friedman's statements? Or, as you put it:

 > Could somebody explain to me why H. Friedman is building various
 > statements of finite (or infinite) combinatorics equiconsistent
 > with various large cardinal axioms?

I will answer your question this way. Friedman's goal is to find
necessary uses of strong axioms for deriving finite/discrete
mathematical statements which are ``natural'' according to the current
standards of naturalness employed by present-day ``working
mathematicians''. And when he says ``working mathematicians'' he
definitely does not mean logicians. He means core mathematicians. Or
maybe mathematicians working in finite/discrete mathematics: graph
theorists, combinatorists, people like that.

You said:

 > I know that you do not mean that Harvey's work in this area is
 > meant only for those who do not know enough logic.

Yes, that is correct. One of Harvey's heuristic goals is to make his
statements as appealing as possible to mathematicians who know no
logic. Presenting Harvey's statements to mathematicians who know no
logic is a good test of the fundamental point made by his work in this
area.

Would presenting S(T) to people who know no logic be a good test of
the fundamental point made by your work on S(T)?

If you think about this last question, I think it will become clear to
you that the goals and outcomes of Friedman's work are quite different
from the goals and outcomes of your work on pages 59-62 of JSL 51.

Best regards,
-- Steve

More information about the FOM mailing list
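To make the combinatorial content of the Paris-Harrington statement quoted above concrete, here is a small brute-force checker for a single coloring (an editorial illustration, not part of Simpson's post; for realistically large parameters the search is hopeless, which is rather the point of the statement's strength):

```python
from itertools import combinations

def has_relatively_large_homogeneous_set(n, k, m, coloring):
    """Check one instance of the P-H property: does {1,...,n} contain a
    set X with |X| >= m, all k-element subsets of X the same color, and
    |X| > min(X) ("relatively large")?  `coloring` maps k-tuples of
    increasing naturals to colors.
    """
    for size in range(max(m, k), n + 1):
        for X in combinations(range(1, n + 1), size):
            if len(X) <= X[0]:
                continue  # not relatively large: need |X| > min(X)
            if len({coloring(s) for s in combinations(X, k)}) == 1:
                return True
    return False
```

P-H asserts that for every k, l, m some n makes this return True for every possible coloring with l colors; the checker only examines one coloring at a time.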
{"url":"http://www.cs.nyu.edu/pipermail/fom/1999-September/003383.html","timestamp":"2014-04-19T22:22:10Z","content_type":null,"content_length":"8921","record_id":"<urn:uuid:95fc5b8c-5e5f-4266-91a8-5fab41990066>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
First order integrating factors

September 24th 2012, 04:48 PM #1

$ty'+ 5y= t^2 - t + 6$, with initial condition $y(1)=5,\ t>0$

How do I find the integrating factor from this equation? I am not sure if the right side follows the standard form, but it doesn't look like it:

$y' + p(x)y = q(x)$

If it does follow the form, I think that the integrating factor is $e^{\int 5t}$.

Also, how do I change it so that it doesn't say "font color"?

Re: First order integrating factors

Divide both sides by $\displaystyle t$, then the integrating factor becomes $\displaystyle e^{\int \frac{5}{t}~dt}$.

Re: First order integrating factors

Dividing by t gives me: $ty'/t + 5y/t = t + 6$

The integrating factor then from $e^{\int 5/t\,dt}$ becomes $5t$.

Multiply both sides by $5t$: $(5t)t\,dy/t + 5y/t = (t+6)(5t)\,dt$

$\int 5t + 5y/t = \int 5t^2 + 30t\, dt$

Did I set up my integration correctly?

Re: First order integrating factors

No, it isn't. What is the integral of 5/t? What is the exponential of that?

multiply both sides by $5t$: $(5t)t\,dy/t + 5y/t = (t+6)(5t)\,dt$  $\int 5t + 5y/t = \int 5t^2 + 30t\,dt$

What happened to "dy"?? And you should understand that you cannot integrate a function of y and t with respect to y.

Did I set up my integration correctly?

No, you haven't. Are you clear on what the point of an integrating factor is? Your original equation was $ty' + 5y = t^2 - t + 6$.

Re: First order integrating factors

No, it isn't. What is the integral of 5/t? What is the exponential of that?

Integral of 5/t is ln 5t, and the exponent of ln 5t is 5t? Sorry, my mistake. Should be exp[5 * ln(t)], which = t?

What happened to "dy"?? And you should understand that you cannot integrate a function of y and t with respect to y.

Forgot to include the dy in here.

No, you haven't. Are you clear on what the point of an integrating factor is?

I don't know much about integrating factors, but I do know it is to help make an equation that would normally be unsolvable, solvable. I would like to learn more about them and how to solve equations like these.

I cannot integrate a function of y and t with respect to y. Do I separate all the variables, y on the left and t on the right? But then wouldn't the integrating factor still bring me a function of t on both sides???

Last edited by terrygrada; September 24th 2012 at 06:57 PM.

Re: First order integrating factors

The point of an integrating factor is to make the left side of the linear equation the product of a differentiation. Given a linear equation in standard form:

$\frac{dy}{dx}+P(x)y=Q(x)$

We compute the integrating factor:

$\mu(x)=e^{\int P(x)\,dx}$

Multiplying the ODE by this factor, we get:

$e^{\int P(x)\,dx}\frac{dy}{dx}+P(x)e^{\int P(x)\,dx}y=e^{\int P(x)\,dx}Q(x)$

$\frac{d}{dx}\left(e^{\int P(x)\,dx}y \right)=e^{\int P(x)\,dx}Q(x)$

$\int\,d\left(e^{\int P(x)\,dx}y \right)=\int e^{\int P(x)\,dx}Q(x)\,dx$

$e^{\int P(x)\,dx}y=\int e^{\int P(x)\,dx}Q(x)\,dx$

$y(x)=e^{-\int P(x)\,dx}\int e^{\int P(x)\,dx}Q(x)\,dx$

Re: First order integrating factors

Integral of 5/t is ln 5t, and the exponent of ln 5t is 5t? Sorry, my mistake. Should be exp[5 * ln(t)], which = t?

No, $\displaystyle \int{\frac{5}{t}\,dt} = 5\ln{t}$, so the integrating factor is $\displaystyle e^{\int{\frac{5}{t}\,dt}} = e^{5\ln{t}} = e^{\ln{\left(t^5\right)}} = t^5$.
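For completeness: with the integrating factor $t^5$ the equation becomes $(t^5 y)' = t^6 - t^5 + 6t^4$; integrating and applying $y(1) = 5$ gives $y(t) = \frac{t^2}{7} - \frac{t}{6} + \frac{6}{5} + \frac{803}{210}t^{-5}$. This last step is not worked out in the thread itself, but it can be checked exactly with rational arithmetic, e.g. in Python:

```python
from fractions import Fraction as Fr

C = Fr(803, 210)  # integration constant fixed by y(1) = 5

def y(t):
    """Proposed solution of t*y' + 5*y = t^2 - t + 6 with y(1) = 5."""
    return t**2 / 7 - t / 6 + Fr(6, 5) + C / t**5

def dy(t):
    """Derivative of y, differentiated by hand."""
    return 2 * t / 7 - Fr(1, 6) - 5 * C / t**6
```

With `Fraction` inputs, both `y(Fr(1)) == 5` and `t*dy(t) + 5*y(t) == t**2 - t + 6` hold exactly, not just to floating point tolerance.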
{"url":"http://mathhelpforum.com/differential-equations/204013-first-order-integrating-factors.html","timestamp":"2014-04-19T10:14:44Z","content_type":null,"content_length":"58551","record_id":"<urn:uuid:3c584568-f13e-42eb-a288-ee2ee4e34996>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
Gilbert Ames Bliss

Gilbert Ames Bliss was a mathematician and educator known for his work on the calculus of variations. He received his B.S. degree in 1897 from the University of Chicago and remained to study mathematical astronomy under F.R. Moulton. He received his M.S. degree in 1898 and two years later his doctorate. Dr. Bliss immediately went into teaching as an assistant professor of mathematics at the University of Minnesota from 1900 to 1902, followed by a two-year assistantship at the University of Chicago, a year at the University of Missouri, and three years (1905–1908) as preceptor at Princeton University—a period in which he also served as an editor of the Annals of Mathematics. In 1908 Bliss returned to the faculty at the University of Chicago as an associate professor; he was named professor five years later. He became department chairman in 1927 and served until his retirement.

Bliss applied his knowledge of calculus to the field of ballistics during the latter days of World War I, when he designed an improved set of firing tables for artillery. His book Mathematics for Exterior Ballistics (1944) was based on this work. His research in algebraic functions led to his paper "Algebraic Functions and Their Divisors," and Bliss expanded on this work in his book Algebraic Functions (1933). Bliss's extensive study of the calculation of extreme values of an integral or function culminated in 1946 in his major work, Lectures on the Calculus of Variations.

Bliss served as president of the American Mathematical Society from 1921 to 1922.

#00002 Thomas Bliss and Dorothy Wheatlie of England and Rehoboth, MA
#00021 Jonathan Bliss and Miriam Harmon of Rehoboth, MA
#00067 Samuel Bliss and Mary Kendrick of Rehoboth, MA
#00167 Capt. Nathaniel Bliss and Mehibitable Whittaker of Rehoboth, MA
#00474 Timothy Bliss and Anne Hale Kingsley of Royalston, MA
#01333 Aaron Bliss and Mary Woodbury of Royalston, MA
#02952 Stephen Bliss and Esther Wait of Royalston and Orange, MA
{"url":"http://www.usgennet.org/family/bliss/bios/il/gilbert.htm","timestamp":"2014-04-17T03:56:23Z","content_type":null,"content_length":"9892","record_id":"<urn:uuid:af9d1d73-b83f-4a2b-8fcf-ff64e68b8afb>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
find all solutions to 2p + 1 = n^2, for n natural, p prime

10-30-2008, 12:39 PM #11

Re: find all solutions to 2p + 1 = n^2, for n natural, p prime

prime number - divided evenly only by itself and 1
the only even prime number is 2
2p will be even
product of (n-1)*(n+1) is even

If you look at the column that I labeled 2p in my table above, then you will see that this product is not always even.

so... one could factor out a 2 from the original, correct?

This statement is not worded correctly, but you're on the right track.

when n is odd ... p = (1/2) * (n+1) * (n-1) !

I feel like I am getting really close with this... are the solutions for this 1 and 2?

"English is the most ambiguous language in the world." ~ Yours Truly, 1969

Re: find all solutions to 2p + 1 = n^2, for n natural, p prime

aah I am lost again, just when I thought I had it. can anyone point me in the right direction? I am so close I can taste it.... and smell it a bit too....

Re: find all solutions to 2p + 1 = n^2, for n natural, p prime

2p is even when n is odd and odd when n is even...right?

Re: find all solutions to 2p + 1 = n^2, for n natural, p prime

... can anyone point me in the right direction? ...

I'm not sure that I know the answer to this question. We know that 2p is the product of n - 1 and n + 1, for natural numbers n. These two factors will always be two consecutive odd integers OR two consecutive even integers, dependent upon a specific n.

What happens when you multiply two even numbers? What happens when you multiply two odd numbers?

"English is the most ambiguous language in the world." ~ Yours Truly, 1969

Re: find all solutions to 2p + 1 = n^2, for n natural, p prime

2p is even when n is odd and odd when n is even...right?

"English is the most ambiguous language in the world." ~ Yours Truly, 1969

Re: find all solutions to 2p + 1 = n^2, for n natural, p prime

YES! ... was this an enthusiastic "you're on the right track" yes or an exasperated why-are-you-stating-the-obvious yes?

even * even = even
odd * odd = odd

Re: find all solutions to 2p + 1 = n^2, for n natural, p prime

Look at the equation [tex]p \, = \, \frac{1}{2}\cdot (n+1)\cdot (n-1)[/tex]

p has been factorized - is it possible with a prime number?

"... mathematics is only the art of saying the same thing in different words" - B. Russell

Re: find all solutions to 2p + 1 = n^2, for n natural, p prime

... [is] this an enthusiastic "you're on the right track" yes or an exasperated why-are-you-stating-the-obvious yes?

The former. Since you now realize that the product of n - 1 and n + 1 is even for some values of n, can any of those values of n lead to a product which is twice some prime number?

"English is the most ambiguous language in the world." ~ Yours Truly, 1969

Re: find all solutions to 2p + 1 = n^2, for n natural, p prime

prime numbers can't be factorized, right?

can any values of n lead to a product which is twice some prime number?

since twice some prime number will be even, then only odd n values would work, right?

Re: find all solutions to 2p + 1 = n^2, for n natural, p prime

... since twice some prime number will be even ...

2p = EVEN
2p = (n - 1) * (n + 1)

Therefore, (n - 1) * (n + 1) must also be EVEN.

This is enough information to eliminate half of the natural numbers from being part of any solution.

Please tell me which natural numbers cannot possibly be part of any solution.

"English is the most ambiguous language in the world." ~ Yours Truly, 1969
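One way to finish the parity argument the thread is building (this conclusion is not stated in the thread itself): n must be odd, so n - 1 and n + 1 are consecutive even numbers and their product is divisible by 8; then 8 | 2p forces 4 | p, which is impossible for a prime. So the equation has no solutions at all. A brute-force check in Python agrees for small n (an added sketch; helper names are made up):

```python
from math import isqrt

def is_prime(k):
    """Trial-division primality test, sufficient for small k."""
    return k >= 2 and all(k % d for d in range(2, isqrt(k) + 1))

def solutions(n_max):
    """All (p, n) with p prime, n natural, 2p + 1 = n^2, for n <= n_max."""
    found = []
    for n in range(1, n_max + 1):
        twice_p = n * n - 1            # 2p = (n - 1)(n + 1)
        if twice_p % 2 == 0 and is_prime(twice_p // 2):
            found.append((twice_p // 2, n))
    return found
```

`solutions(2000)` comes back empty, consistent with the divisibility argument above.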
{"url":"http://www.freemathhelp.com/forum/threads/58370-find-all-solutions-to-2p-1-n-2-for-n-natural-p-prime/page2","timestamp":"2014-04-17T07:23:50Z","content_type":null,"content_length":"78709","record_id":"<urn:uuid:b2f77727-ab71-44a4-8867-3eb70910d5d8>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
[plt-scheme] OT: not the halting problem, but what is it?

From: hendrik at topoi.pooq.com (hendrik at topoi.pooq.com)
Date: Tue May 4 10:04:32 EDT 2010

On Tue, May 04, 2010 at 09:37:46AM -0400, Prabhakar Ragde wrote:
> What you are doing is diagonalization, which as you point out is at the
> heart of Cantor's power set proof, and at the heart of the proof of the
> halting problem. The definition in step 2 ensures that the constructed
> function differs from any enumerated function on the argument which is
> its encoding. You've shown that there is no enumeration of total
> functions int->bool (and thus no programming language can express all
> such functions, even if the computational model guaranteed termination).

Actually, he's proved that no programming language that guarantees
termination can express all such functions.

-- hendrik

Posted on the users mailing list.
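The diagonal construction under discussion can be sketched in a few lines (an added illustration, not part of the original message): given any enumeration f_0, f_1, ... of total int -> bool functions, the function g(n) = not f_n(n) is itself total, yet differs from every f_n at the argument n, so it cannot appear in the enumeration.

```python
# A finite stand-in for the start of an enumeration of total functions.
fs = [
    lambda n: True,          # f_0
    lambda n: n % 2 == 0,    # f_1
    lambda n: n > 10,        # f_2
]

def g(n):
    """The diagonal function: flips f_n's answer on its own index."""
    return not fs[n](n)

# g disagrees with each f_i at argument i, so g is none of f_0..f_2.
```

In a language that guarantees termination, g is definable whenever the enumeration is, which is exactly the contradiction Hendrik points out.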
{"url":"http://lists.racket-lang.org/users/archive/2010-May/039381.html","timestamp":"2014-04-16T16:04:51Z","content_type":null,"content_length":"6228","record_id":"<urn:uuid:2016c9ce-c385-4aac-9588-0fa0d8c65d2a>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
Floating point numbers

Real numbers are a very important part of real life and of programming too. Hardly any computer language lacks data types for them. Most of the time, they come in the form of (binary) floating point datatypes, since those are directly supported by most processors. But these computerized representations of real numbers are often badly understood. This can lead to bad assumptions, mistakes and errors, as well as reports like: "The compiler has a bug, this always shows 'not equal'"

```delphi
var
  F: Single;
begin
  F := 0.1;
  if F = 0.1 then
    ShowMessage('equal')
  else
    ShowMessage('not equal');
end;
```

Experienced floating point users will know that this can be expected, but many people using floating point numbers use them rather naively, and they don't really know how they "work", what their limitations are, and why certain errors are likely to happen or how they can be avoided. Anyone using them should know a little bit about them. This article explains them from my point of view, i.e. things I found out the hard way. It may be slightly inaccurate, and probably incomplete, but it should help in understanding floating point, its uses and its limitations. It does not use any complicated formulas or higher scientific explanations.

Floating point types in Delphi

Floating point is the internal format in which "real" numbers, like 0.0745 or 3.141592, are stored. Unlike fixed point representations, which are simply integers scaled by a fixed amount — an example is Delphi's Currency type — they can represent very large and very tiny values in the same format. While Delphi knows several types with differing precision, the principles behind them are (almost) the same. The types Single, Double and Extended are supported by the hardware (by the FPU — floating point unit) of most current computers and follow the IEEE 754 binary format specs.
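The same surprise is easy to reproduce outside Delphi; for instance in Python, round-tripping 0.1 through a 32-bit IEEE single with the struct module (an added illustration, since Python itself only has doubles):

```python
import struct

# Store 0.1 as a 32-bit IEEE single, then widen it back to a double.
f = struct.unpack('<f', struct.pack('<f', 0.1))[0]

print(f == 0.1)   # False: the Single nearest to 0.1 is not the Double nearest to 0.1
print(f)          # approximately 0.10000000149011612
```

Neither value is exactly 0.1; both are the closest representable binary fractions at their respective precisions, which is why comparing them for equality fails.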
The type Real, which is a relic of old Pascal, now maps to Double by default, but, if you set {$REALCOMPATIBILITY ON}, it maps to the Real48 type, which is not an IEEE type and used to be managed by the runtime system, that is, in software, and not by hardware. There is also a Comp type, but this is in fact not a floating point type; it is an Int64 which is supported and calculated by the FPU.

The Real48 type is pretty obsolete, and should only be used if it is absolutely necessary, e.g. to read in files that contain them. Even then it is probably best to convert them to, say, Double, store those in a new file and discard the old file. While Real types used to be managed in software, for computers that did not have an FPU (which was not uncommon in the earlier days of Turbo Pascal), this is not the case for current systems, which have an FPU. The runtime converts Real48 to Extended, uses that to do the required calculations and then converts the result back to Real48. This constant conversion makes the type pretty slow, so you should really, really avoid it.

Note that the above does not apply to Real, if it is mapped to Double, which is the default setting. It only applies to the 6-byte Real48 type.

Real numbers

The real-number system is a continuum containing real values from minus infinity (−∞) to plus infinity (+∞). But in a computer, where they are only represented in a very limited number of bytes (Extended, the largest floating point type in Delphi, has no more than 80 bits and the smallest, Single, only 32!), you can only store a limited number of discrete values, so it is not nearly a continuum. Most real numbers can only (roughly) be approximated by floating point types. Everyone using them should always be aware of this.

There are several ways in which real numbers can be represented. In written form, the usual way is to represent them as a string of digits, with the decimal point represented by a '.', e.g. 12345.678 or 0.0000345. Another way is to use scientific notation, which means that the number is scaled by powers of 10 to, usually, a number between 1 and 10, e.g. 12345.678 is represented as 1.2345678 × 10^4 or, in short form (the one Delphi uses), as 1.2345678e4.

The way such "real" numbers are represented internally differs a bit from the written notation. The fixed point type Currency is simply stored as a 64 bit integer, but by convention its decimal point is said to be 4 places from the right, i.e. you must divide the integer by 10000 to get the value it is supposed to represent. So the number 3.76 is internally stored as 37600. The type was meant to be used for currencies, but the fact that it only has 4 decimals means that calculations other than addition or subtraction can cause inaccuracies that are often not tolerable.

The floating point types used in Delphi have an internal representation that is much more like scientific notation. There is an unsigned integer (its size in bits depends on the type) that represents the digits of the number, the mantissa, and a number that represents the scale, in our case in powers of 2 instead of 10, the exponent. There is also a separate sign bit, which is 1 if the number is negative. So in floating point, a number can be represented as:

  value = (−1)^sign × (mantissa / 2^(len−1)) × 2^exp

where sign is the value of the sign bit, mantissa is the mantissa as an unsigned integer (more about this later), len is the length of the mantissa in bits, and exp is the exponent.

The mantissa (the IEEE calls it "significand", but this is a neologism which means something like "which is to be signified", and in my opinion, that doesn't make any sense) can be viewed in two ways. Let's disregard the exponent for the moment, and assume that its value is such that the number 1.75 is represented by the mantissa. Many texts will tell you that the implicit binary point is viewed to be directly right of the topmost bit of the mantissa, i.e. that the topmost bit represents 2^0, the one below that 2^−1, etc., so a mantissa of binary 1.1100 0000 0000 000 represents 1.0 + 0.5 + 0.25 = 1.75. Other, but not so many, texts simply treat the mantissa as an unsigned integer, scaled by 2^(len−1), where len is the size of the mantissa in bits. In other words, a mantissa of 1110 0000 0000 0000 binary, or 57344 in decimal, is scaled by 2^15 = 32768 to give you 57344 / 32768 = 1.75 too. As you see, it doesn't really matter how you approach it; the result is the same.

The exponent is the power of 2 by which the mantissa must be multiplied to get the number that is represented. Internally, the exponent is often "biased", i.e. it is not stored as a signed number; it is stored as unsigned, and the extremes often have special meanings for the number. This means that, to get the actual value of the exponent, you must subtract a constant value from the stored exponent. For instance, the bias for Single is 127. The value of the bias depends on the size of the exponent in bits and is chosen so that the smallest normalized value (more about that later) can be reciprocated without overflow.

There are also floating point systems that have a decimal based exponent, i.e. where the value of the exponent represents powers of 10. Examples are the Decimal type used in certain databases and the — slightly incompatible — Decimal type used in Microsoft .NET. The latter uses a 96 bit integer to represent the digits, 1 bit to represent the sign (+ or -) and 5 bits to represent a negative power of 10 (0 up to 28). The number 123.45678 is represented as 12345678 × 10^−5. I have written an almost exact native copy of the Decimal type to be used by Delphi. It is a little faster than the original .NET type, but not nearly as fast as the hardware supported types.

This article mainly discusses the floating point types used in Delphi, namely Single, Double and Extended, which are all floating binary point types.
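The layout just described (sign bit, biased exponent, mantissa with a hidden bit) can be verified by hand, e.g. in Python (an added sketch; the field widths and bias 127 are those of Single, the hidden bit is restored explicitly, so this only handles normalized values):

```python
import struct

def decode_single(x):
    """Split a 32-bit IEEE single into its fields and rebuild its value
    (normalized numbers only; tiny values and specials are not handled)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    sign = bits >> 31
    exp = ((bits >> 23) & 0xFF) - 127         # remove the bias
    mantissa = (bits & 0x7FFFFF) | (1 << 23)  # 23 stored bits + hidden bit
    value = (-1) ** sign * (mantissa / 2 ** 23) * 2 ** exp
    return sign, exp, mantissa, value

# 0.375 comes out as +1.5 * 2**-2:
print(decode_single(0.375))   # (0, -2, 12582912, 0.375)

# For doubles, float.hex shows the same normalized "1.fff... * 2^exp" form:
print((0.375).hex())          # 0x1.8000000000000p-2, i.e. 1.5 * 2**-2
```

Note that dividing by 2**23 here is the "mantissa / 2^(len−1)" scaling with len = 24, counting the hidden bit.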
Floating decimal point types like Decimal are not supported by the hardware or by Delphi. So if, in this article, I speak of "floating point", I mean the floating binary point types.

Sign bit

The sign bit is quite simple. If the bit is 1, the number is negative, otherwise it is positive. It is totally independent of the mantissa, so there is no need for a two's complement representation for negative numbers. Zero has a special representation, and you can actually even have −0 and +0 values.

Normalization and the hidden bit

The hidden bit is not present in all formats. What does it mean? Let's take the number 0.375. This can be calculated as 2^−2 + 2^−3 (0.25 + 0.125), or, in a mantissa, 0.011[bin] (disregarding the trailing zeroes), i.e. 0.375 × 2^0. But this is not how floating point numbers are usually stored. The exponent is adjusted such that the mantissa always has its top bit set, except for some special numbers, like 0 or the so called "tiny" values. So the mantissa becomes 1.1[bin] and the exponent is decremented by 2. This number still represents the value 0.375, but now as 1.5 × 2^−2. This process is called normalization. It ensures that 1.0 <= mantissa < 2.0.

But if the top bit is always 1, it doesn't have to be stored, and in Single and Double it isn't. That is why this is called the hidden bit. To calculate the value of such a floating point type, you must mentally put the implicit "1.[bin]" in front of the stored bits of the mantissa.

There is some confusion about how to denote the size (or length, as it is often called) of the mantissa of a type with a hidden bit. Some will use the actually stored length in bits, while others also count the hidden bit. For instance, a Single has 23 bits of storage reserved for the mantissa. Some will call the length of the mantissa 23, while others will count the hidden bit too and call it a length of 24.
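Normalization and the hidden bit are easy to verify by taking a stored value apart. The following sketch does this for 0.375; it is written in Python rather than in Delphi, simply so the bit fiddling can be run anywhere, but the layout it decodes is the same IEEE Single described above:

```python
import struct

# Store 0.375 as a 32-bit IEEE single and pull the fields back out.
bits = struct.unpack('<I', struct.pack('<f', 0.375))[0]

sign     = bits >> 31              # 1 bit
exponent = (bits >> 23) & 0xFF     # 8 bits, biased by 127
mantissa = bits & 0x7FFFFF         # 23 stored bits, hidden bit not included

# 0.375 is normalized to 1.1[bin] x 2^-2, so the stored (biased) exponent
# is 125 and the stored mantissa is 100 0000 ... (the leading 1 is hidden).
value = (1 + mantissa / 2**23) * 2.0**(exponent - 127)
```

Putting the hidden 1 back in front of the 23 stored bits and applying the unbiased exponent reproduces 0.375 exactly, because 0.375 is a sum of powers of 2.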
Denormalized values

Sometimes, after an operation, the exponent can not be decremented far enough to represent the result. In that case, the exponent is set to a special value, and the mantissa is not normalized anymore, i.e. the top bit is not 1 and the mantissa is interpreted as something like 0.000fff…fff[bin], i.e. it has one or more leading zeroes followed by as many significant bits as will fit. Such values are called denormalized or tiny values. Because of the leading zeroes, the precision is lower than for normalized values of the same type.

Other special values

Not every bit combination represents a number. Some represent +/− infinity, and some are invalid. The latter are called NaN (Not a Number). The rules for which bit combinations represent what are described in the Delphi help, and in the Delphi DocWiki: Single, Double and Extended. I will not repeat that information here. But the Math unit contains a few constants and functions that can help you check or assign some of these values:

  const
    NaN = 0.0 / 0.0;
    Infinity = 1.0 / 0.0;
    NegInfinity = -1.0 / 0.0;

  function IsNan(const AValue: Double): Boolean; overload;
  function IsNan(const AValue: Single): Boolean; overload;
  function IsNan(const AValue: Extended): Boolean; overload;

  function IsZero(const A: Extended; Epsilon: Extended = 0): Boolean; overload;
  function IsZero(const A: Double; Epsilon: Double = 0): Boolean; overload;
  function IsZero(const A: Single; Epsilon: Single = 0): Boolean; overload;

IEEE types

The IEEE types used in Delphi are:

  Type      Mantissa bits      Exponent bits  Sign bit  Smallest value    Biggest value
  Single    0-22               23-30          31        1.5 × 10^−45      3.4 × 10^+38
  Double    0-51               52-62          63        5.0 × 10^−324     1.7 × 10^+308
  Extended  0-63 (no hidden    64-78          79        3.4 × 10^−4951    1.1 × 10^+4932
            bit)

The following diagram shows a simple representation of these types.

Using floating point numbers

In the following, I am using the terms small and large.
I mean values that have a very low or a very high exponent, respectively, regardless of their sign. In other words, I am addressing their magnitude, not their signs.

As you can see in the diagram, the different types have quite a different precision. Internally, for calculations, Delphi always uses Extended. Literals, like 0.1, are also stored as Extended. That is why the little code snippet at the beginning of this article produced False: the value was converted from Extended to Single, losing a few bits of precision, and for the comparison, it was converted back to Extended. The loss of precision caused the difference, so the result of the comparison was False.

There are many such traps, caused by the limitations of how the infinite range of real numbers must be represented in a finite number of bits. Some of these traps are discussed in the following sections.

After calculations, e.g. multiplications or additions, the result can contain more significant bits than the type can hold, so the FPU must round the values to make them fit and be normalized again, which means that a number of bits gets lost. How this rounding is done is governed by IEEE rules. But this means that there will be additional tiny inaccuracies. An example:

  program Project1;

  {$APPTYPE CONSOLE}

  var
    S1, S2: Single;
  begin
    S1 := 0.1;
    Writeln(S1:20:18);
    S1 := S1 * S1;
    S2 := 0.01;
    Writeln(S1:20:18);
    Writeln(S2:20:18);
    Readln;
  end.

As you can see from the output, the closest possible representation for 0.1 in a Single is 0.10000000149011612. If this is squared and then rounded, you get 0.01000000070780516, but the closest representation for 0.01 is 0.00999999977648258. So, in other words, Single(0.1) * Single(0.1) <> Single(0.01). Doing multiple calculations like this will slowly add up the errors, and they do not necessarily even each other out. It is very important that you take such errors into consideration and do no more calculations than necessary.
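The same effect can be reproduced outside Delphi. The Python sketch below is only an illustration (it is not the article's code); it forces each intermediate result into single precision the way the assignments to S1 and S2 do above:

```python
import struct

def to_single(x):
    """Round a Python float (a double) to the nearest IEEE single."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

s1 = to_single(0.1)       # closest single to 0.1
sq = to_single(s1 * s1)   # square it, then round the result back to single
s2 = to_single(0.01)      # closest single to 0.01

# sq and s2 are adjacent singles: the rounded square of Single(0.1) is
# exactly one representable value above the closest single to 0.01.
different = (sq != s2)
```

Running this confirms the numbers quoted above: the squared value and the directly stored 0.01 differ by exactly one unit in the last place of the Single format.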
It is always a good idea to simplify your expressions and to use professional libraries that know how to avoid too many calculations for the purpose. As in so many programming problems, the choice of algorithm and of the types used is also very important.

Rounding modes and tie-breaking rules

Rounding is generally done to the nearest more significant digit available. But sometimes there is a tie, when the value to be rounded is exactly between the two nearest digits. In that case, a tie-breaking rule is required. One very common rule is called banker's rounding (although banks are not known to use it, or to have used it), which says that a tie is rounded to the nearest even more significant digit. This means that 24.05 is rounded to 24.0, but 24.15 to 24.2. Other commonly used tie-breaking rules are:

• Truncating (towards 0): 24.05 is rounded to 24.0, and −24.05 to −24.0. In fact, the less significant digits are simply dropped.
• Rounding up (towards +∞): 24.05 is rounded to 24.1, but −24.05 to −24.0. This mode is taught in many schools.
• Rounding down (towards −∞): 24.05 is rounded to 24.0, and −24.05 to −24.1.
• Rounding away from 0: 24.05 is rounded to 24.1, and −24.05 to −24.1. This mode is taught in many schools too, but is not an IEEE approved method.

Note that there are other rounding modes that do not round to the nearest more significant digit, but round to the more significant digit that is either above (closer to +∞), below (closer to −∞) or closer to 0.
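Python's decimal module implements all of these tie-breaking rules by name, which makes it a convenient playground for the examples above. This is just an illustration; Delphi itself selects its mode through the FPU control word described further below:

```python
from decimal import (Decimal, ROUND_HALF_EVEN, ROUND_DOWN,
                     ROUND_CEILING, ROUND_FLOOR, ROUND_UP)

def rnd(value, mode):
    # Round a decimal string to one digit after the point.
    return Decimal(value).quantize(Decimal('0.1'), rounding=mode)

bankers_a = rnd('24.05', ROUND_HALF_EVEN)   # tie -> nearest even digit: 24.0
bankers_b = rnd('24.15', ROUND_HALF_EVEN)   # tie -> nearest even digit: 24.2
trunc     = rnd('-24.05', ROUND_DOWN)       # towards 0:    -24.0
up        = rnd('24.05', ROUND_CEILING)     # towards +inf:  24.1
down      = rnd('-24.05', ROUND_FLOOR)      # towards -inf: -24.1
away      = rnd('-24.05', ROUND_UP)         # away from 0:  -24.1
```

Because the inputs are decimal strings rather than binary floats, the ties really are exact ties here, so each rule can be observed cleanly.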
RoundTo and SimpleRoundTo

Unit Math contains a few nice functions to round a floating point value (Extended) to a set number of digits:

  type
    TRoundToRange = -37..37;
    TRoundToEXRangeExtended = -20..20;

  function RoundTo(const AValue: Extended;
    const ADigit: TRoundToEXRangeExtended): Extended;

  { This variation of the RoundTo function follows the asymmetric arithmetic
    rounding algorithm (if Frac(X) < .5 then return X else return X + 1). This
    function defaults to rounding to the hundredth's place (cents). }
  function SimpleRoundTo(const AValue: Extended;
    const ADigit: TRoundToRange = -2): Extended;

RoundTo is probably a little more accurate and faster, but SimpleRoundTo allows a bigger range of digits and uses a slightly different rounding algorithm. For better decimal rounding than these rather simple approaches, take a look at John Herbster's DecimalRounding_JH1 unit on Embarcadero's CodeCentral. It uses a more sophisticated algorithm which produces better results. It implements all the rounding modes I discussed in the Rounding modes and tie-breaking rules section above.

The x87 Floating Point Unit

The x87 FPU knows 4 rounding modes (see the FPU control word section of this article). So how does the FPU round? Say an operation on a Single produced an intermediate result that has some extra low bits. The extended mantissa looks like this:

  1.0001 1100 0100 1100 1001 011 1

The last bit, set apart by a space, is the bit to be rounded. There are two possible values this can be rounded to, the value directly below and the value directly above:

  1.0001 1100 0100 1100 1001 011
  1.0001 1100 0100 1100 1001 100

Now what happens depends on the rounding mode. If the rounding mode is the default, round to nearest "even", it will get rounded to the value that has a 0 as least significant bit. You can probably guess which of the two values is chosen for the other rounding modes.
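You can watch round-to-nearest-even act on a real tie by feeding a value that lies exactly halfway between two singles through a double-to-single conversion. Again a Python sketch, purely illustrative, but the rule it demonstrates is the same one the FPU applies:

```python
import struct

def to_single(x):
    return struct.unpack('<f', struct.pack('<f', x))[0]

# 1 + 2^-24 lies exactly halfway between the singles 1.0 and 1 + 2^-23.
# The tie goes to the candidate whose last mantissa bit is 0 ("even"):
low = to_single(1 + 2**-24)        # rounds down to 1.0 (mantissa ...000)

# 1 + 3*2^-24 is halfway between 1 + 2^-23 (odd) and 1 + 2^-22 (even):
high = to_single(1 + 3 * 2**-24)   # rounds up to 1 + 2^-22
```

One tie is rounded down and the other up, yet both end on an even mantissa, which is exactly what keeps this mode free of a systematic bias.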
Measuring rounding errors

There are ways to measure accumulated rounding errors. The most common measures used are ULP and relative or approximation error. Discussing them is outside the scope of this article, so I have to refer you to Wikipedia and the articles mentioned in the References section of this article.

It is never a good idea to write code that requires a lot of conversions, for instance code that must convert between several floating point types, since each conversion, especially to a less precise type, can mean the loss of a few bits and therefore increases the inaccuracy. If space or speed are not as important as accuracy, use the Extended type throughout, because Delphi also uses it internally in most system functions. An example:

  program Project2;

  {$APPTYPE CONSOLE}

  var
    S: Single;
    E: Extended;
  begin
    E := 0.1;
    Writeln(E:20:18);
    S := E;
    E := S;
    Writeln(E:20:18);
    Readln;
  end.

The output shows that the value changes on the round trip through Single: the bits lost in the conversion to the less precise type cannot be recovered.

In source code, we use decimal numbers. But floating point types are stored as binary. For integers, this is not a big problem, but as soon as fractions are involved, there is one. Not every number that can be represented exactly in decimal can be represented exactly in binary, just like certain numbers, e.g. 1/3 or π, can not be represented exactly in decimal format. In binary, only numbers that are sums of powers of 2 can be represented exactly in a binary floating point type (e.g. 3.625 = 2 + 1 + 0.5 + 0.125). A number like 0.1 can not be composed of such powers. The compiler will try to get the best approximation that is possible, but there will always be a small difference.

Comparing values

The above shows that it is never a good idea to compare floating point values directly. Conversions and rounding cause tiny inaccuracies. These errors can add up, the more calculations you do. To accommodate these inaccuracies, it is a good idea to always use a small error value in comparisons.
In Delphi's Math unit, there are a number of overloaded functions that can help you do this:

  function CompareValue(const A, B: Extended; Epsilon: Extended = 0): TValueRelationship; overload;
  function CompareValue(const A, B: Double; Epsilon: Double = 0): TValueRelationship; overload;
  function CompareValue(const A, B: Single; Epsilon: Single = 0): TValueRelationship; overload;

  function SameValue(const A, B: Extended; Epsilon: Extended = 0): Boolean; overload;
  function SameValue(const A, B: Double; Epsilon: Double = 0): Boolean; overload;
  function SameValue(const A, B: Single; Epsilon: Single = 0): Boolean; overload;

An ε (epsilon) value is a small value you can use as an error range. These functions either take an ε you provide, or, if you pass nothing or 0, they will calculate an ε that takes the magnitude of the operands you are comparing into consideration. So it is usually best only to pass the operands, and not a specific ε, unless you have a really good reason to force one upon the function. An example follows:

  program Project3;

  {$APPTYPE CONSOLE}

  uses
    SysUtils, Math;

  var
    S1, S2: Single;
  begin
    S1 := 0.3;
    S2 := 0.1;
    S2 := S2 / 10.0;    // should be 0.01
    S2 := S2 * 10.0;    // should be 0.1 again
    S2 := S2 + S2 + S2; // should be 0.3

    if S1 = S2 then
      Writeln('True')
    else
      Writeln('False');

    if SameValue(S1, S2) then
      Writeln('True')
    else
      Writeln('False');

    Readln;
  end.

The output is False for the direct comparison, but True for SameValue.

Subtracting (almost) equal values

If almost equal values are subtracted (or two values with differing signs but otherwise almost equal values are added), the result is a value that is tiny, compared to the operands. This tiny value can well be in the range of the inaccuracies mentioned before, so it can't be trusted. It is another situation you should avoid.
An example follows:

  procedure Test;
  var
    E1, E2, E3: Extended;
  begin
    E1 := 16.000000000000000001;
    E2 := 16.000000000000000000;
    E3 := E1 - E2;
    Writeln(E1:22:18, E2:22:18, E3);
  end;

The output from Delphi 2010 is:

  16.000000000000000000  16.000000000000000000  1.73472347597681E-0018

One would expect the difference to be 1.0 × 10^−18, but the value you get is 1.735 × 10^−18. Also note that the output doesn't display the decimal 1 in E1, which shows you can't always trust the accuracy of your output either.

Greatly differing magnitude

If two values differ greatly in magnitude, the smaller of the two might be below the precision of the larger one. So adding the tiny value to such a huge value (or subtracting it) will have no effect. That means that you should take care in which order you do such additions or subtractions. Take the following simple example:

  program Project4;

  {$APPTYPE CONSOLE}

  var
    S1, S2, S3: Extended;
  begin
    S1 := 1000000000000000000000000.0;
    S2 := 0.1;
    S3 := S1;
    Writeln(S1 + S2 - S3:10:10);
    Writeln(S1 - S3 + S2:10:10);
    Readln;
  end.

The second result (0.1) is what you would expect, but the first one (0) comes from the fact that S2 got swallowed by the precision of the large value in S1.

Note that if you have many values to add, it makes sense to sort them in order of magnitude. A nice explanation is given in Steve Jessop's answer to a question about this on StackOverflow. Be sure to read the comments there. It comes down to the fact that if you add a tiny number to a big one, the tiny one may not change the big one, but if you add all the tiny ones first, they may accumulate to a value that is big enough to make a difference when added to the big one. The answer gives some examples. Also note the answer recommending the Kahan summation algorithm, by Daniel Pryden.
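The Kahan summation algorithm mentioned above is short enough to show in full. Here is a sketch in Python (the article's examples are Delphi, but the algorithm itself is language independent); it keeps the rounding error of every addition in a separate compensation variable instead of throwing it away:

```python
import math

def kahan_sum(values):
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c            # apply the previous correction
        t = total + y        # low-order bits of y may be lost here...
        c = (t - total) - y  # ...but they are recovered into c
        total = t
    return total

# 10000 small values next to one huge one: summed naively, every single
# 1.0 is swallowed by the precision of 1e16; Kahan keeps them all.
values = [1e16] + [1.0] * 10000
naive = sum(values)        # stays at 1e16
exact = math.fsum(values)  # the exactly rounded sum, for comparison
```

The compensated sum matches the exactly rounded result here, while the naive left-to-right sum loses all ten thousand additions.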
Functions requiring real values

Not only are fractions like 0.1 not representable in binary floating point, there are also values that are not representable in any integral number base, like the irrational numbers π or Euler's number e, but also values like √2. Functions based on numbers like these are bound to be inaccurate, especially in a limited format like floating point, and because they require multiple internal calculations, even if these are probably done with greater precision. That is why function calls like Sin(Pi) do not deliver exact results. For Sin(Pi), Delphi returns −5.42101086242752 × 10^−20, instead of the expected 0.

Avoiding the traps

There are a few tips to avoid the many traps.

• Never forget that Delphi's floating point types store values in binary, which can't always represent decimal values exactly.
• Choose the right precision for your application.
• Do not mix several types of floating point.
• Be aware of rounding errors and that they can add up.
• Optimize and simplify your algorithms to avoid too many calculations.
• Use professional libraries instead of cooking up your own.
• Do not add or subtract values of greatly differing magnitude.
• Do not compare values directly, but use library functions like SameValue.

So how does this look internally? In the following example I use a Single, since Singles have an overviewable mantissa and exponent. Let me show you how a number like 0.1 is stored in a Single. The number 0.1 is stored as $3DCCCCCD or (ordering the bits already) 0 – 0111 1011 – 100 1100 1100 1100 1100 1101, which means the sign bit is 0, the exponent is 123 − 127 = −4 and the mantissa is (after putting the hidden bit in front) 1100 1100 1100 1100 1100 1101 or $CCCCCD or 13421773. If we multiply 13421773 by 2^−4 (0.0625), we get 838860.8125. Now we only have to scale that by 2^23 = 8388608, and we get 0.10000000149011612, which is indeed pretty close to 0.1.
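That decoding can also be scripted. The following Python sketch (once more just an illustration of the arithmetic above) reproduces every number of the previous paragraph from the stored bit pattern $3DCCCCCD:

```python
import struct

bits = struct.unpack('<I', struct.pack('<f', 0.1))[0]   # 0x3DCCCCCD

sign     = bits >> 31                      # 0
exponent = ((bits >> 23) & 0xFF) - 127     # 123 - 127 = -4
mantissa = (bits & 0x7FFFFF) | (1 << 23)   # hidden bit restored: $CCCCCD

# 13421773 * 0.0625 = 838860.8125; scaled by 2^23 this is the stored value.
value = mantissa * 2.0**exponent / 2**23
```

The computed value is exactly the Single nearest to 0.1, which is the same number the Writeln calls earlier in the article printed as 0.100000001490116119.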
The FPU control word

The FPU control word is a word-size set of bits that control the behaviour of the FPU. The bits are set up as follows:

  Exception flag masks
    Bit 0    IM    1 = invalid operation masked
    Bit 1    DM    1 = denormalized operand masked
    Bit 2    ZM    1 = zero divide masked
    Bit 3    OM    1 = overflow masked
    Bit 4    UM    1 = underflow masked
    Bit 5    PM    1 = precision masked

  Precision control
    Bits 8, 9    PC    00 = single precision (24 bit)
                       10 = double precision (53 bit)
                       11 = extended precision (64 bit)
                       01 = reserved

  Rounding mode
    Bits 10, 11  RM    00 = round to nearest even (banker's rounding)
                       01 = round down, toward −infinity
                       10 = round up, toward +infinity
                       11 = round toward zero (trunc)

  Infinity control (used for compatibility with the 287 FPU)
    Bit 12   X     0 = projective
                   1 = affine

  Bits 6, 7, 13, 14 and 15 are reserved and not used. The value $133F turns
  off all exceptions.

To control the FPU control word (in Delphi it is called 8087CW), there are a few functions, mentioned in the help and the DocWiki entry for the FPU Control Word. An example of their use:

  var
    S1: Single;
    S2: Single;
    S3: Single;
  begin
    SetExceptionMask(GetExceptionMask + [exZeroDivide]); // default is: unmasked
    try
      S1 := 1.0;
      S2 := 0.0;
      S3 := S1 / S2;
    except
      on E: Exception do
        Writeln(E.ClassName, ': ', E.Message);
    end;
    Writeln(S3);
  end.

There is no exception, since including exZeroDivide masks the division-by-zero FPU exception; dividing by zero will then not cause such an exception anymore. The result is +∞ instead.

Investigating floating point types

If you want to investigate or (ab)use the internal formats of the floating point types a little more, you should look for the routines by John Herbster, former member of TeamB. Most of them can be found on Embarcadero's CodeCentral.

Basic conversions of floating point values

… to their composite parts

There are a few basic functions that can be useful to examine the composing parts of a floating point value:

  Int (unit System): Returns the integral part (i.e.
the part before the decimal point) of a floating point value as Extended.
  Frac (unit System): Returns the fractional part (i.e. the part after the decimal point) of a floating point value as Extended.
  Sign (unit Math): Returns the sign of a number value as TValueSign.
  Frexp (unit Math): Procedure that returns the mantissa and the exponent of a Double value as Extended and Integer, respectively.
  FloatToDecimal (unit SysUtils): Procedure that returns the composing parts of a floating point value in a TFloatRec, as data that can be used for formatting.

… to integers

To convert floating point values to integers, there are a few system functions which each convert their numbers a little differently.

  Trunc (unit System): Rounds a floating point value to the Int64 value nearest to zero (i.e. it truncates toward 0).
  Round (unit System): Rounds a floating point value to the nearest Int64 value, or, when it is exactly halfway, uses "banker's rounding".
  Floor (unit Math): Rounds a floating point value to the highest Int64 value that is less than or equal to it (i.e. it rounds toward −∞).
  Ceil (unit Math): Rounds a floating point value to the lowest Int64 value that is greater than or equal to it (i.e. it rounds toward +∞).

These functions generally issue an EInvalidOp exception if the result would be outside the Int64 range.

… to text

To display a floating point number, the runtime must convert it from binary back to decimal. Here, too, inaccuracies can creep in. It is also important what kind of format you choose. The specific output may depend on the format settings for the current locale, too. The runtime library, especially the SysUtils unit, provides you with some convenient functions to format such numbers, like Format, FormatFloat, FloatToStrF, FloatToText and FloatToTextFmt. Take a look at FloatToDecimal as well.

Floating point types are useful, but one must be aware of their limitations. I hope this article helped you understand them a little better.
But there are certainly things I forgot to mention, or which are incorrect. I am grateful for any constructive remark, criticism, objection, etc. You can contact me by e-mail to tell me what you think of this.

Rudy Velthuis

References and further reading
Differential Amplifier Circuit Tutorial using BJT and Opamp

In this post, the differential amplifier using BJTs and the differential amplifier using op-amps are explained in detail. Please go through both of them to get a better understanding. The circuit diagrams and detailed equations are provided along with the article. Please go through them.

Differential Amplifier using Transistors

A differential amplifier is designed to give the difference between two input signals. The circuit is shown below.

As shown in the circuit diagram above, there are two inputs, I/P1 and I/P2, and two outputs, V1OUT and V2OUT. I/P1 is applied to the base of the transistor T1 and I/P2 is applied to the base of the transistor T2. The emitters of both T1 and T2 are connected to a common emitter resistor, so that the two output terminals V1OUT and V2OUT get affected by the two input signals I/P1 and I/P2. VCC and VEE are the two supply voltages for the circuit. The circuit will also work fine using just a single voltage supply. You may have also noted that there is no ground terminal indicated in the circuit; it is understood that the opposite terminals of both the positive and negative voltage supplies are connected to ground.

Working of a Differential Amplifier

When a differential amplifier is driven at one of the inputs, the output appears at both collector outputs. This is explained with a diagram below.

When the input signal I/P1 is applied to the transistor T1, there will be a high voltage drop across the collector resistance RCOL1, and thus the collector of T1 will be less positive. When I/P1 is negative, T1 is turned OFF, the voltage drop across RCOL1 becomes very low, and thus the collector of T1 will be more positive. Thus we can conclude that an inverted output appears at T1's collector for a signal applied at I/P1.
When T1 is turned ON by the positive value of I/P1, the current through the emitter resistance REM increases, as the emitter current is almost equal to the collector current (IE ≈ IC). Thus the voltage drop across REM increases and makes the emitters of both transistors go in the positive direction. Making T2's emitter positive is the same as making the base of T2 negative. In such a condition the transistor T2 will conduct less current, which in turn will cause less voltage drop in RCOL2, and thus the collector of T2 will go in the positive direction for a positive input signal. Thus we can conclude that the non-inverting output appears at the collector of transistor T2 for an input at the base of T1. The amplification can be driven differentially by taking the output between the collectors of T1 and T2.

As shown in the figure above, if the transistors T1 and T2 are assumed to be identical in all characteristics, and if the base voltages are equal (VBASE1 = VBASE2), then the emitter currents can also be said to be equal:

  IEM1 = IEM2

  Total emitter current, IE = IEM1 + IEM2
  VEM = VBASE − VBE
  IEM = (VBASE − VBE) / REM

where VBE is the base-emitter voltage drop. The emitter current IEM remains virtually constant regardless of the hfe value of the transistors.

Since ICOL1 ≈ IEM1 and ICOL2 ≈ IEM2, ICOL1 ≈ ICOL2.

Also, VCOL1 = VCOL2 = VCC − ICOL·RCOL, assuming the collector resistances RCOL1 = RCOL2 = RCOL.

Differential Amplifier using Op-amps

A differential amplifier is a closed loop amplifier circuit which amplifies the difference between two signals. Such a circuit is very useful in instrumentation systems. Differential amplifiers have a high common mode rejection ratio (CMRR) and high input impedance. Differential amplifiers can be made using one opamp or two opamps. Both of these configurations are explained here.

The circuit diagram of a differential amplifier using one opamp is shown below. R1 and R2 are the input resistors, Rf is the feedback resistor and RL is the load resistor.

Derivation for voltage gain
The equation for the voltage gain of the differential amplifier using one opamp can be derived as follows. The circuit is just a combination of an inverting and a non-inverting amplifier. Finding the output voltages of these two configurations separately and then summing them will give the overall output voltage.

If Vb is made zero, the circuit becomes an inverting amplifier, and the output voltage Voa due to Va alone is

  Voa = −(Rf / R1) · Va

When Va is made zero, the circuit becomes a non-inverting amplifier. Let V1 be the voltage at the non-inverting input pin; it is a fixed fraction of Vb, set by the resistive divider at the non-inverting input. The output voltage Vob due to Vb alone is then

  Vob = (1 + Rf / R1) · V1

The overall output voltage is

  Vo = Voa + Vob

which, assuming matched resistor ratios on both inputs, reduces to Vo = (Rf / R1)(Vb − Va).

Differential amplifier using two opamps

The circuit diagram of a differential amplifier using two opamps is shown below. The main advantage of the differential amplifier with two opamps is its increased overall gain. R1 is the input resistor for IC1 and R3 is the input resistor for IC2. Rf is the feedback resistor. Va and Vb are the two input voltages; they are applied to the non-inverting inputs of IC2 and IC1 respectively. RL is the load resistor. V+ and V− are the positive and negative supply voltages.

Derivation of voltage gain

The first opamp (IC1) produces an output voltage V1 from Vb. V1 and Va are then the inputs for the second stage (IC2), whose output is the sum of the output voltage Voa due to Va alone and the output voltage Vob due to Vb alone:

  Vo = Voa + Vob

Let R1 = R2 and Rf = R3; the expression then simplifies, and the overall voltage gain Av follows directly.

Practical differential amplifier

A practical differential amplifier using the uA741 opamp is shown below. With the components used, the amplifier has a gain of around 5. Remember the equation Av = −Rf/R1. Here Rf = 10K and R1 = 2.2K, so −Rf/R1 = −10/2.2 = −4.54 ≈ −5. The negative sign represents phase inversion. Use a +/−12V DC dual supply for powering the circuit.
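The gain arithmetic quoted for the practical circuit can be checked numerically. A tiny Python sketch, with the component values taken from the article (the rounding to −5 is the article's own approximation):

```python
# Ideal closed-loop gain of the practical differential amplifier: Av = -Rf/R1
Rf = 10_000.0  # 10K feedback resistor
R1 = 2_200.0   # 2.2K input resistor

Av = -Rf / R1  # about -4.55; roughly -5, the sign marking phase inversion
```

In a real build the resistor tolerances (typically 5 % for carbon film parts) will shift this a little either way.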
The uA741 must be mounted on a holder.

3 Responses to "Differential Amplifier Circuit Tutorial using BJT and Opamp"

• CORRECTION OF OWN COMMENT: In the derivation for the two opamp version, where you state "Let R1 = R2 and Rf =R1" I think it should be "Let R1 = R2 and Rf =R3". Thanks for the excellent text otherwise!

• HI! I'm real hapy to your atticle, hopping I can use 741 in my project; currently I'm completting my circuit but the problem I face is how connect 4 sensor (weight sensor) before feeding them to circuit you described above (differential amplifier) and the output of 741 (opamp) will be inputed to my PIC16F84A I took those sensors from a digital weight scale. because the output signal from the scale is very low that's why I wanted to employ opamp to rise signal. Having said those i hope you will help me to complte this task. Please send information via my email (nziku99@yahoo.com) the scale used 3v as power supply and my project eses 5v PLEASE HELP!!!!!!

• This shows real expertise. Thanks for the awensr.
[Numpy-discussion] float128 in fact float80

Charles R Harris charlesr.harris@gmail....
Sun Oct 16 18:29:24 CDT 2011

On Sun, Oct 16, 2011 at 4:16 PM, Nathaniel Smith <njs@pobox.com> wrote:

> On Sun, Oct 16, 2011 at 3:04 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
> > If we agree that float128 is a bad name for something that isn't IEEE
> > binary128, and there is already a longdouble type (thanks for pointing
> > that out), then what about:
> >
> > Deprecating float128 / float96 as names
> > Preferring longdouble for cross-platform == fairly big float of some sort
>
> +1
>
> I understand the argument that you don't want to call it "float80"
> because not all machines support a float80 type. But I don't
> understand why we would solve that problem by making up two *more*
> names (float96, float128) that describe types that *no* machines
> actually support... this is incredibly confusing.

Well, float128 and float96 aren't interchangeable across architectures
because of the different alignments, C long double isn't portable either,
and float80 doesn't seem to be available anywhere. What concerns me is the
difference between extended and quad precision, both of which can occupy
128 bits. I've complained about that for several years now, but as to
extended precision, just don't use it. It will never be portable.
Guess the Plot 11

It has been a while since I posted the last riddle of this series. It was fun though, so upon seeing the graph below I immediately decided I would use it here, to let you guess what it is about. Please use the comments thread to provide your input: what does the graph represent? What are the different coloured lines? Why the funny behaviour? What is on the x axis? And on the y axis?

Of course it is virtually impossible to answer all the above without being given some hint. I can tell you it has to do with LHC searches, and that is all the help I am going to give you!

x-axis: mass of a Higgs in GeV
y-axis: a certain branching ratio in % (obviiously with the drop at 2 mtop)
Colors: variation of a parameter such as tan_beta
Cheers, Sven
Sven (not verified) | 01/21/13 | 15:44 PM

I'd switch "Higgs" for some SUSY particle but have no idea which one. I'm a bit suspicious of the branching ratio interpretation. Would percent be used?
JollyJoker (not verified) | 01/22/13 | 11:41 AM

Okay - this was not fair altogether: Sven is the one who got the closest to it, but Sven is probably Sven Heinemeyer, who turns out to be one of the authors of FeynHiggs, the program which computes cross sections and branching fractions of SUSY Higgs particles in the MSSM... And the plot is, in fact, the sum of several different outputs of FeynHiggs: it represents the cross section times branching fraction of the production of an A (the CP-odd neutral Higgs boson of two-doublet models) and its decay into Zh, where Z is the regular standard model Z boson, and h is the lighter of the two CP-even Higgs bosons. The various curves represent the effective cross section for different values of tan(beta), one of the crucial parameters of two-doublet MSSM models. From top to bottom, the curves correspond to values of tan(beta) from 1 to 10. In any case, I see that the riddle has not been too popular - only two answers!
Maybe the figure was really too difficult to guess ? I doubt that is the case; rather, un-inspiring. To me it is the opposite though: the various features of the curve are absolutely non-trivial, and each has some physics motivation. For instance the "peaks" at 350 GeV, where the A production is enhanced, but then the branching fraction into things other than top-antitop pairs is dampened. Or the "turn-on" at masses above the sum of Z and h. Of course the h mass is not 125 GeV here: it is whatever it is, once you specify the A mass (on the x axis) and tan(beta). The graph has been produced by a Ph.D. student from my analysis group, Alberto Zucchetta (I refrained from calling him "my" student since he is working at several different things at the moment, and none of them directly with me! - but I hope we will put together a search for these A particles in Zh final states: it should be fun! Tommaso Dorigo | 01/22/13 | 14:47 PM
{"url":"http://www.science20.com/quantum_diaries_survivor/blog/guess_plot_11-101336","timestamp":"2014-04-18T19:23:42Z","content_type":null,"content_length":"46336","record_id":"<urn:uuid:ddae269f-eea8-4811-8dee-a0206a2e9c0f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
equation vs. expression
Expression - Phrase, Equation - Sentence

If you've searched the topic "equation vs. expression" on the net & found this page, you will probably read the following four items as mostly the same.
1. Ten is five less than a number.
2. A number is less than five.
3. a number less than five
4. five less than a number
Please now reconsider the above items and determine which are sentences (subject, verb, complete idea) and which are phrases (no subject, no verb). For assistance from an English point of view, see http://www.richmond.edu/~writing/wweb/fragment.html
The mathematical term expression is equivalent to an English phrase. The most common mathematical statements, or sentences, are called equations and inequalities. The reader is asked to review each of the links in this paragraph before reading further and to pay particular attention to the mathematical terms relation and relation symbol, the "verbs of mathematical statements," as they relate to each of the other words.

Expression vs. Equation
The algebra student, or algebraically able individual, is expected to know the difference between an expression and a statement because each serves a different purpose and each is handled in a certain way.

Equation (a sentence)                   | Expression (a phrase)
Ex. 1. Ten is five less than a number.  | Ex. 3. a number less than five
       10 = x - 5                       |        x
Ex. 2. A number is less than five.      | Ex. 4. five less than a number
       x < 5                            |        x - 5

Equation, Not Expression - Computation Errors
Expression, Not Equation - Computation Errors
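The sentence/phrase distinction maps neatly onto code. A small illustration (my own example, not from the original page): an expression evaluates to a value, while an equation or inequality is a statement that is either true or false.

```python
x = 15  # "a number"

# Expression (a phrase): it has a value, but makes no claim.
expression = x - 5            # Ex. 4: "five less than a number"

# Equation and inequality (sentences): each is a claim, true or false.
equation = (x - 5 == 10)      # Ex. 1: "Ten is five less than a number."
inequality = (x < 5)          # Ex. 2: "A number is less than five."

print(expression, equation, inequality)  # 10 True False
```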
{"url":"http://www.mathnstuff.com/math/algebra/aequex.htm","timestamp":"2014-04-20T03:24:25Z","content_type":null,"content_length":"7386","record_id":"<urn:uuid:7fb63dd1-c0d6-4e1e-af94-0534c8514c77>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Regents Exam

When do I take the Algebra 2/Trigonometry Regents Exam?
The first administration of the Algebra 2/Trigonometry Regents exam was in June 2010, after which it will be administered in January and June of every school year. Most students will take this exam after completing a one-year high school level Algebra 2/Trigonometry course. Click here to see the latest NYS Regents exam schedule.

How is the Algebra 2/Trigonometry Regents Exam set up?
The Algebra 2/Trigonometry Regents exam is divided into four parts with a total of 39 questions. All of the questions in each of the four parts must be answered. You will be allowed a maximum of 3 hours in which to complete the test.

Part | Number of Questions | Point Value | Total Points
I    | 27 multiple choice  | 2-credit    | 27 x 2 = 54
II   | 8 open ended        | 2-credit    | 8 x 2 = 16
III  | 3 open ended        | 4-credit    | 3 x 4 = 12
IV   | 1 open ended        | 6-credit    | 1 x 6 = 6
Test = 39 Questions          Test = 88 Points

● Part I consists of 27 standard multiple-choice questions with four possible answers labeled (1), (2), (3), and (4).
● Parts II, III, and IV contain eight, three, and one question(s), respectively. The answers and the accompanying work for the questions in these three parts must be written directly in the question booklet. You must show or explain how you arrived at each answer by indicating the necessary steps, including appropriate formula substitutions, diagrams, graphs, and charts. If you use a guess-and-check strategy to arrive at a numerical answer for a problem, you must indicate your method and show the work for at least three guesses.
● All questions in each of the four parts of the test must be answered.

Where do I show my answers and work?
● Since scrap paper is not provided or permitted for any part of the exam, you must use the blank spaces in the question booklet as scrap paper.
● After you figure out the answer to each multiple choice question in Part I, you must write the numeral that precedes the correct choice in the space provided on the separate tear-off answer sheet for Part I found at the back of the question booklet.
● The answers and the work for the questions in Parts II, III and IV must be written directly in the question booklet in the space provided underneath the questions. All work should be written in pen, except the graphs, which should be drawn in pencil.
● If you need graph paper, it will be provided in the question booklet.

What Type of Calculator Do I Need?
Graphing calculators are required for the Algebra 2/Trigonometry Regents examination. During the administration of the Regents exam, schools are required to make a graphing calculator available for the exclusive use of each student. You will need to use your calculator to work with trigonometric functions of angles, evaluate roots and logarithms, and perform routine calculations. Knowing how to use a graphing calculator gives you an advantage when deciding how to solve a problem. Rather than solving a problem algebraically with pen and paper, it may be easier to solve the same problem using a graph or table created by a graphing calculator. A graphical or numerical solution found using a calculator can also be used to help confirm an answer obtained using standard algebraic methods. You can find additional information about how to use a graphing calculator in the "Graphing Calculator Skills" section in this book.

How is your Algebra 2/Trigonometry Regents score determined?
Your answers to the 27 multiple-choice questions in Part I are scored as either correct or incorrect. Each correct answer receives 2 points. The eight questions in Part II are worth 2 points each, the three questions in Part III are worth 4 points each, and the question in Part IV is worth 6 points.
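The raw-score arithmetic described above can be sketched in a few lines (my own illustration; it assumes full credit on every question):

```python
# Point values per part, as described above.
PARTS = {
    "I":   {"questions": 27, "points_each": 2},  # multiple choice
    "II":  {"questions": 8,  "points_each": 2},  # open ended
    "III": {"questions": 3,  "points_each": 4},  # open ended
    "IV":  {"questions": 1,  "points_each": 6},  # open ended
}

def max_raw_score(parts):
    """Maximum raw score if every question earns full credit."""
    return sum(p["questions"] * p["points_each"] for p in parts.values())

total_questions = sum(p["questions"] for p in PARTS.values())
print(total_questions, max_raw_score(PARTS))  # 39 questions, 88 points
```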
Solutions to the questions in Parts II, III, and IV that are not completely correct receive partial credit according to a special scoring guide provided by the New York State Education Department. The maximum total raw score for the Algebra 2/Trigonometry Regents exam is 88 points. A conversion table provided by the New York State Education Department is used to convert your raw score to a final test score that falls within the usual 0 to 100 scale.

What is collected at the end of the Algebra 2/Trigonometry Regents exam?
● Any tool provided to you by your school such as a graphing calculator.
● The question booklet with your name and your school's name near the top of the first page.
● The signed Part I answer sheet confirming you did not receive unlawful assistance. If you fail to sign this declaration, your answer paper will not be accepted.

Are Any Formulas Provided?
The Algebra 2/Trigonometry Regents Examination test booklet will include a reference sheet containing the formulas. This formula sheet, however, does not necessarily include all of the formulas that you are expected to know.

What topics are covered on the Algebra 2/Trigonometry Regents?
The Core Curriculum is the official publication by the New York State Education Department that describes the topics and skills that are required by the Algebra 2/Trigonometry Regents examination. This examination can test you on a wide range of topics, which include:
● Operations with real and complex numbers; algebraic operations with fractions and radicals; factoring.
● Solving quadratic equations, including those with irrational and complex roots; solving systems of equations.
● Linear, quadratic, logarithmic, exponential, and trigonometric functions and their graphs.
● Transformations and functions.
● Statistics, including normal curve; fitting a line or curve to data using least squares regression; scatter plots; correlation coefficient.
● Probability, including counting methods and probability in two-outcome experiments.
● Trigonometric equations and laws.
● Series and sequences.

How do I review for the Algebra 2/Trigonometry Regents exam?
Barron's Regents.com has everything you need to prepare for the Algebra 2/Trigonometry Regents exam. You can take complete practice tests, or select questions by date or by topic. With Regents.com you'll get immediate feedback, including answers to all questions with full explanations. Instant results pinpoint your strengths and weaknesses and let you know where you need to practice most. All information is saved on a personal database for future use and can be accessed from any computer with an Internet connection. Subscribe now to start preparing for the New York State Algebra 2/Trigonometry Regents exam within minutes! One low fee will give you access to our entire database at Regents.com, plus an auto-grading system and score assessment tools. Where else would you go to prep for your Algebra 2/Trigonometry Regents exam except Barron's Regents.com, the most trusted name in Regents test prep and review for more than 70 years? You're just a few clicks away from the easiest to use, most effective way to improve your test scores on the Algebra 2/Trigonometry Regents - subscribe now!
Want to find out more? Try our New York State Regents Review demo!
{"url":"http://barronsregents.com/trigonometry-regents.html","timestamp":"2014-04-21T15:00:40Z","content_type":null,"content_length":"21940","record_id":"<urn:uuid:f022b883-2124-4f8e-9712-7ac5031875a2>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
Merry Christmas Math Problems

1. If candy canes cost .89 a dozen, how much would it cost to buy candy canes for a school with 400 students?
2. Macy's has hired 400 store Santas. If each Santa sees 125 children a day for 30 days, how many children are seen by Macy's Santas?
3. One group of carolers goes to every 6th house in a neighborhood, another goes to every 8th house. At which house will they first meet?
4. Mr. Green is putting lights around 8 windows; each window is 3 and 1/2 feet wide and 5 feet long. How many feet of lights does he need?
5. 10 elves each made 10 xylophones. Each xylophone had 10 keys. They did this for 10 days. How many keys were made by elves? Show in exponential form and solve.
6. Each batch of 48 cookies that Amy makes takes 20 minutes in the oven. If the oven is on for 3 hours and 55 minutes (15 minutes for preheating), how many cookies did Amy make?
7. Beth is making gingerbread men. She uses 2 raisins for eyes and 3 raisins for buttons for each gingerbread man. She buys 4 boxes of raisins, each with 120 raisins in it. How many dozens of gingerbread men can she make?
8. Brian wants a 7 and 1/2 foot Christmas tree. Which of these trees is a better buy? Explain. A.) All trees $42 B.) $6 a foot C.) First 4 feet $22; each additional foot $6
9. K-Mart is open Monday through Saturday from 8:00 AM to midnight and Sunday from 10:00 AM to 8:00 PM. How many hours is it open in one week?
10. Each large roll of ribbon has 20 yards. Each package takes 3 feet of ribbon. How many packages can be wrapped?

Merry Christmas Math Solutions
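Several of the purely arithmetical problems can be checked with a short script. The answers below are my own working, not the official solutions page:

```python
import math

# Problem 2: 400 Santas x 125 children/day x 30 days.
children_seen = 400 * 125 * 30

# Problem 3: carolers hitting every 6th and every 8th house first
# coincide at the least common multiple of 6 and 8.
first_shared_house = math.lcm(6, 8)   # math.lcm needs Python 3.9+

# Problem 5: 10 elves x 10 xylophones x 10 keys x 10 days = 10**4.
keys_made = 10 ** 4

# Problem 10: one roll is 20 yards = 60 feet; 3 feet per package.
packages = (20 * 3) // 3

print(children_seen, first_shared_house, keys_made, packages)
```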
{"url":"http://www.fi.edu/school/math/merry.html","timestamp":"2014-04-21T07:26:09Z","content_type":null,"content_length":"5942","record_id":"<urn:uuid:b8a96468-588f-4dba-bbee-ffeaebf096ae>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Riverdale Pk, MD Geometry Tutor Find a Riverdale Pk, MD Geometry Tutor ...I have also taught Physics and Electrical Engineering courses for both undergraduate and graduate students. These courses involved solving differential equations related to applications in physics and electrical engineering. As an undergraduate student in Electrical Engineering and Physics and as a graduate student, I took courses in mathematical methods for physics and 16 Subjects: including geometry, calculus, physics, statistics ...I've worked with students to set goals, timelines, or even simple routines to aid them in achieving their study and homework goals. I also incorporate these skills into my own life - that's what has enabled me to complete research for my master's degree in engineering, and continues to aide me i... 17 Subjects: including geometry, reading, writing, biology ...I enjoy teaching students at every skill level. I believe in teaching beyond the short cuts and introducing students to the satisfaction of finding solutions using problem-solving skills. I teach basic through advanced mathematics and sciences. 14 Subjects: including geometry, chemistry, precalculus, algebra 1 ...I tutored part-time throughout college, and have been tutoring full-time since 2012. Since then, I've branched out into similar subject areas such as ACT, GRE and SSAT. I love that my job allows me to help kids succeed. 41 Subjects: including geometry, reading, English, writing ...I currently teach math to students with severe disabilities. Some of my training was through NASA. Through tutoring, students will not only learn and understand the material, but they will become confident and independent learners. 
9 Subjects: including geometry, algebra 1, algebra 2, GED Related Riverdale Pk, MD Tutors Riverdale Pk, MD Accounting Tutors Riverdale Pk, MD ACT Tutors Riverdale Pk, MD Algebra Tutors Riverdale Pk, MD Algebra 2 Tutors Riverdale Pk, MD Calculus Tutors Riverdale Pk, MD Geometry Tutors Riverdale Pk, MD Math Tutors Riverdale Pk, MD Prealgebra Tutors Riverdale Pk, MD Precalculus Tutors Riverdale Pk, MD SAT Tutors Riverdale Pk, MD SAT Math Tutors Riverdale Pk, MD Science Tutors Riverdale Pk, MD Statistics Tutors Riverdale Pk, MD Trigonometry Tutors Nearby Cities With geometry Tutor Bladensburg, MD geometry Tutors Brentwood, MD geometry Tutors Cheverly, MD geometry Tutors College Park geometry Tutors Edmonston, MD geometry Tutors Greenbelt geometry Tutors Hyattsville geometry Tutors Landover Hills, MD geometry Tutors Lanham Seabrook, MD geometry Tutors Mount Rainier geometry Tutors New Carrollton, MD geometry Tutors North Brentwood, MD geometry Tutors Riverdale Park, MD geometry Tutors Riverdale, MD geometry Tutors University Park, MD geometry Tutors
{"url":"http://www.purplemath.com/Riverdale_Pk_MD_Geometry_tutors.php","timestamp":"2014-04-20T14:02:17Z","content_type":null,"content_length":"24337","record_id":"<urn:uuid:cf86ad9e-1b1a-4c9c-877d-8c6537ca0167>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
please check if the differential equation is correct

I am practising for my test. The question is to solve a differential equation dy/dx + y/x + 1 = 5x

You mean y/(x+1).

y(0) = 1. The answer that I have come up with is (xy + y) = 5x^3/3 + 5x^2/2 + c. By substituting the values x = 0 and y = 1 into the general equation I get y(x+1) = 5x^3/3 + 5x^2/2 + 1 as the particular solution. Can you tell me how the particular solution will look and why this particular solution exists?

What exactly do you mean by "look like"? If you just mean "solve for y", divide both sides by x + 1. The differential equation can be written
[tex]\frac{dy}{dx}= 5x- \frac{y}{x+1}[/tex]
The function on the right side is differentiable for all y and all x except -1, so by the "fundamental existence and uniqueness theorem" for initial value problems, a unique solution to this problem exists for all x larger than -1.
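The particular solution y(x+1) = 5x^3/3 + 5x^2/2 + 1 can also be sanity-checked numerically; a quick sketch using a central-difference derivative rather than a computer algebra system:

```python
# Check that y(x) = (5x^3/3 + 5x^2/2 + 1)/(x + 1) satisfies
# dy/dx + y/(x + 1) = 5x with y(0) = 1, at a few sample points.
def y(x):
    return (5 * x**3 / 3 + 5 * x**2 / 2 + 1) / (x + 1)

def dydx(x, h=1e-6):
    return (y(x + h) - y(x - h)) / (2 * h)  # central difference

assert abs(y(0) - 1) < 1e-12              # initial condition
for x in (0.5, 1.0, 2.0, 3.0):
    residual = dydx(x) + y(x) / (x + 1) - 5 * x
    assert abs(residual) < 1e-5           # ODE holds numerically
print("solution checks out")
```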
{"url":"http://www.physicsforums.com/showthread.php?p=3747295","timestamp":"2014-04-19T22:59:17Z","content_type":null,"content_length":"28564","record_id":"<urn:uuid:48b2f9aa-c146-4336-87d5-cc643b0df853>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
Self-Optimizing and Pareto-Optimal Policies in General Environments Based on Bayes-Mixtures

The problem of making sequential decisions in unknown probabilistic environments is studied. In cycle t, action y_t results in perception x_t and reward r_t, where all quantities in general may depend on the complete history. The perception x_t and reward r_t are sampled from the (reactive) environmental probability distribution μ. This very general setting includes, but is not limited to, (partially observable, k-th order) Markov decision processes. Sequential decision theory tells us how to act in order to maximize the total expected reward, called value, if μ is known. Reinforcement learning is usually used if μ is unknown. In the Bayesian approach one defines a mixture distribution ξ as a weighted sum of distributions ν ∈ M, where M is any class of distributions including the true environment μ. We show that the Bayes-optimal policy p^ξ based on the mixture ξ is self-optimizing in the sense that the average value converges asymptotically for all μ ∈ M to the optimal value achieved by the (infeasible) Bayes-optimal policy p^μ which knows μ in advance. We show that the necessary condition that M admits self-optimizing policies at all is also sufficient. No other structural assumptions are made on M. As an example application, we discuss ergodic Markov decision processes, which allow for self-optimizing policies.
Furthermore, we show that p^ξ is Pareto-optimal in the sense that there is no other policy yielding higher or equal value in all environments ν ∈ M and a strictly higher value in at least one.

In: 15th Annual Conference on Computational Learning Theory, COLT 2002, Sydney, Australia, July 8–10, 2002, Proceedings, pp. 364–379. Springer-Verlag Berlin Heidelberg.
Editor affiliations: Research School of Information Sciences and Engineering, Australian National University; Computer Science Department, University of Illinois at Chicago.
Author affiliation: IDSIA, Galleria 2, CH-6928 Manno-Lugano, Switzerland.
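A toy illustration of the mixture idea (my own sketch, vastly simpler than the paper's reactive-environment setting): for a Bayes mixture over two i.i.d. Bernoulli "environments", the posterior weight concentrates on the true one, so predictions based on the mixture ξ approach those based on the true μ.

```python
import random

def posterior(history, envs, priors):
    """Posterior P(env | history) for i.i.d. Bernoulli environments."""
    likes = []
    for p, w in zip(envs, priors):
        like = w
        for bit in history:
            like *= p if bit == 1 else (1 - p)
        likes.append(like)
    z = sum(likes)
    return [l / z for l in likes]

random.seed(1)
true_p = 0.8
history = [1 if random.random() < true_p else 0 for _ in range(200)]

weights = posterior(history, envs=[0.8, 0.3], priors=[0.5, 0.5])
print(weights)  # the weight on the true environment dominates
```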
{"url":"http://link.springer.com/chapter/10.1007%2F3-540-45435-7_25","timestamp":"2014-04-20T23:30:29Z","content_type":null,"content_length":"50591","record_id":"<urn:uuid:52555479-47f0-4cbd-b688-0b61a86c9fd3>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
Melissa Calculus Tutor Find a Melissa Calculus Tutor ...I have prepared hundreds of presentations for executive briefings and company-wide training. My coworkers call me the MS Office master. I have helped all four of my kids with prealgebra. 48 Subjects: including calculus, chemistry, physics, ASVAB ...I played the violin and was an active member of my school orchestra since 6th grade. I was second violin, first chair all of my freshman year in high school. And continued to make all-city and all-region orchestras. 34 Subjects: including calculus, reading, geometry, biology ...I have taught college level courses in general chemistry I & II and organic chemistry for over 4 years. Currently, I work at Collin College as an associate faculty member in chemistry. I am scheduled to teach general chemistry I for summer I & III(CHEM 1411) this summer. 10 Subjects: including calculus, chemistry, geometry, algebra 1 ...Above all, I enjoy tutoring and always try to pass on that same passion (though admittedly not always successfully) to all my students.In addition to taking College Algebra, I have also studied and tutored higher level related mathematics such as Linear Algebra and Modern (Abstract) Algebra. It ... 41 Subjects: including calculus, chemistry, statistics, geometry ...Math is not hard - for someone who loves it. I also inject humor into my lessons, which children love. Sometimes when students get bogged down or saturated, I will ask an off the wall question to redirect them, like "What is air?", and then refocus them back on the math once their thoughts have had a chance to reset. 
57 Subjects: including calculus, Spanish, English, reading Nearby Cities With calculus Tutor Anna, TX calculus Tutors Blue Ridge, TX calculus Tutors Celina, TX calculus Tutors Copeville calculus Tutors Fairview, TX calculus Tutors Farmersville, TX calculus Tutors Gunter calculus Tutors Heath, TX calculus Tutors Little Elm calculus Tutors Mckinney calculus Tutors Princeton, TX calculus Tutors Van Alstyne calculus Tutors Westminster, TX calculus Tutors Weston Lakes, TX calculus Tutors Weston, TX calculus Tutors
{"url":"http://www.purplemath.com/melissa_calculus_tutors.php","timestamp":"2014-04-18T19:17:45Z","content_type":null,"content_length":"23605","record_id":"<urn:uuid:81dd8144-9f72-4b39-8d53-517c1e482f07>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
Which electrical engineering subdiscipline uses the most math?

Whoa.. I didn't know that EE can get into that kind of math. I was going to go the E&M route and hopefully one day conquer a book like Jackson, but maybe I'll peek into controls a bit.

Neither did I until I started talking to my advisor! If you google search "geometric control theory", "nonlinear control", etc. you'll see what I mean.

By the way, does anyone know of branches in EE that get into the theoretical stuff from linear algebra? Like vector spaces, inner products, orthogonality, etc. I like the linear algebra stuff too.

I bet there are other examples of linear algebra in EE, but I do know "modern" control theory makes extensive use of linear algebra. From what I've learned in my first controls class, classical control theory mainly uses Laplace transforms, etc. However, modern control theory is formulated in the time domain. The "state" of a dynamical system is represented as a vector in a vector space. Instead of the transfer function model of classical control theory, dynamical systems are represented in "state space" using matrix equations. Systems are analyzed using linear algebra techniques - for instance, the location of the eigenvalues of a certain system matrix in the complex plane determines the stability of the system. Another example of linear algebra computations used is using change-of-basis transformations to represent a system with a different set of dynamical variables as the basis vectors. Some intro university classes focus on classical control theory, but mine has been using both approaches.
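The eigenvalue test mentioned above can be sketched for a 2x2 state-space system without any linear-algebra library. The example matrix (a damped oscillator) is my own, not from the thread:

```python
import cmath

def eig2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from the characteristic
    polynomial  lambda^2 - (a+d) lambda + (ad - bc) = 0."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def is_stable(a, b, c, d):
    # x' = A x is asymptotically stable iff every eigenvalue of A
    # has negative real part.
    return all(lam.real < 0 for lam in eig2x2(a, b, c, d))

# Damped oscillator: x1' = x2, x2' = -2 x1 - 0.5 x2  -> stable
print(is_stable(0, 1, -2, -0.5))   # True
# Flip the damping sign and the system becomes unstable
print(is_stable(0, 1, -2, 0.5))    # False
```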
{"url":"http://www.physicsforums.com/showthread.php?p=4160092","timestamp":"2014-04-16T19:10:45Z","content_type":null,"content_length":"77811","record_id":"<urn:uuid:da311720-f52a-42f1-bfd0-b6f338d2d32b>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum Computing Since Democritus Lecture 11: Decoherence and Hidden Variables

After a week of brainbreaking labor, here it is at last: My Grand Statement on the Interpretation of Quantum Mechanics. Granted, I don't completely solve the mysteries of quantum mechanics in this lecture. I didn't see any need to — since to judge from the quant-ph arXiv, those mysteries are solved at least twenty times a week. Instead I merely elucidate the mysteries, by examining two very different kinds of stories that people tell themselves to feel better about quantum mechanics: decoherence and hidden variables.

"But along the way," you're wondering, "will Scott also touch on the arrow of time, the Second Law of Thermodynamics, Bell's Inequality, the Kochen-Specker Theorem, the preferred-basis problem, discrete vs. continuous Hilbert spaces, and even the Max-Flow/Min-Cut Theorem?" Man oh man, is someone in for a treat.

I assume that, like Lecture 9, this will be one of the most loved and hated lectures of the course. So bring it on, commenters. You think I can't handle you?

Update (4/5): Peter Shor just posted a delightful comment that I thought I'd share here, in the hope of provoking more discussion.

Interpretations of quantum mechanics, unlike Gods, are not jealous, and thus it is safe to believe in more than one at the same time. So if the many-worlds interpretation makes it easier to think about the research you're doing in April, and the Copenhagen interpretation makes it easier to think about the research you're doing in June, the Copenhagen interpretation is not going to smite you for praying to the many-worlds interpretation. At least I hope it won't, because otherwise I'm in big trouble.

Ryan Budney Says:
Comment #1 April 3rd, 2007 at 8:02 am
A guy named Michael Atiyah gave a rather nice talk on a rather vague idea for doing away with quantum mechanics and/or reinterpreting it, at the Alain Connes birthday party conference.
He intended the talk to be wildly speculative and as food for thought. I'm not going to be able to tell you everything, but one aspect of it caught me. The infinite-dimensionality of quantum mechanics — the state space. He wanted some kind of intuitive reason for why it should exist, in the spirit of the intuitive arguments Einstein gave for the Lorentz transforms. Why should quantization of mass or energy, space or time, etc, force such a construct on us? Vaguely speaking, if time or space is quantized, there would be gaps between objects and delays between cause and effect. He then brought in an analogy with control systems from electrical engineering, and foisted the idea that perhaps there is a 1st order delay differential equation (which he calls a retarded differential equation) that could perhaps put it all together. He gave an example like this: consider a DE f'(t) = kf(t-a) where a > 0 is some constant. Then to determine the behavior of f(t) for t > a you need as an initial condition all the values of f in the interval [0,a]. Now you're talking about an infinite-dimensional space. So a rather vague idea of quantizing time gives you a way to justify an infinite-dimensional state-space while retaining the
He went on to give a rather nice talk and flesh out the idea some more, giving some credit to a guy named Raju. Here is a reference to one of Raju's recent papers on the topic: The electrodynamic 2-body problem and the origin of quantum mechanics, Foundations of Physics 34, (June 2004), 937–962. Back to your regularly scheduled program…

Johan Richter Says:
Comment #2 April 3rd, 2007 at 11:08 am
Hey, nice lecture! I have a question that I suppose you have already answered so feel free to tell me I am an idiot if you don't want to answer it again. You talk about non-relativistic quantum mechanics. Has a QFT computer been defined, and if it has, what sort of computational power does it have compared to an ordinary quantum computer?
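Ryan's retarded equation f'(t) = k f(t - a) can be stepped forward numerically, which makes the comment's point concrete: the state you must carry is an entire function segment of length a, not a single number. A quick Euler sketch with made-up parameters (k = -1, a = 1, f identically 1 on the initial segment):

```python
from collections import deque

def simulate(k=-1.0, a=1.0, dt=1e-3, t_end=5.0, f0=1.0):
    """Euler-integrate f'(t) = k f(t - a); the initial condition is
    the whole segment f = f0 on [0, a] (stored as n_delay samples)."""
    n_delay = int(round(a / dt))
    history = deque([f0] * n_delay)       # the "infinite-dimensional" state
    f, traj = f0, [f0]
    for _ in range(int(round(t_end / dt))):
        f += dt * k * history.popleft()   # uses the delayed value f(t - a)
        history.append(f)
        traj.append(f)
    return traj

traj = simulate()
# Exact solution on the next interval: f(t) = 1 - (t - a), so f(2a) = 0.
print(round(traj[1000], 6))  # ~0.0
```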
Jim Harrington Says:
Comment #3 April 3rd, 2007 at 11:22 am
Kitaev, Freedman, and Wang showed that a quantum computer can efficiently simulate topological quantum field theories, which implies that a QFT computer would be no more powerful than an ordinary quantum computer.

Scott Says:
Comment #4 April 3rd, 2007 at 11:23 am
Johan: Most people expect that QFT computers will be no more powerful than ordinary quantum computers, for the same reasons why classical computers don't suddenly become more powerful when you throw in relativity. But the truth is that there aren't many rigorous results here — partly, as I understand it, because of the difficulty of even defining QFT computers. (We're dealing, after all, with a theory that's valid only in a certain energy regime, and that's known to break down if you go beyond it.) The one definitive result I can point you to is that of Freedman, Kitaev, and Wang, who showed that topological quantum field theories (a special class of (2+1)-dimensional QFT's, where all the degrees of freedom are topological) yield no more power than ordinary quantum computers. See also Aharonov, Jones, and Landau, who reinterpreted Freedman et al.'s result in much more CS-friendly terms.

Scott Says:
Comment #5 April 3rd, 2007 at 11:26 am
Sorry, Jim — just missed you! As I wrote above, Freedman et al. applies only to the special case of TQFT's.

Pascal Koiran Says:
Comment #6 April 3rd, 2007 at 11:58 am
One more question on this issue of finite versus infinite dimensional Hilbert spaces: is it true that the uncertainty principle can only hold true in infinite-dimensional spaces? It's one striking feature of quantum mechanics, but strangely enough never seems to play a role in quantum computation (and information?).

Scott Says:
Comment #7 April 3rd, 2007 at 12:25 pm
No, Pascal, there are finite-dimensional analogues of the uncertainty principle. A simple example is this: you can't measure a qubit in both the {|0⟩,|1⟩} basis and the {|+⟩,|-⟩} basis.
Indeed, the product of your uncertainties in the two bases must be at least a constant.

John Sidles Says: Comment #8 April 3rd, 2007 at 12:30 pm
Pascal Koiran asks: is it true that the uncertainty principle can only hold true in infinite-dimensional spaces? No. The coherent states that are the physical realization of minimum uncertainty states also exist in all finite-dimensional Hilbert spaces, and all the same uncertainty relations apply with minimal modification. A good text is Perelomov's Generalized Coherent States and Their Applications (see BibTeX below). Caveat: the same is true of coherent states as Serge Lang famously said of elliptic curves: "It is possible to write endlessly on elliptic curves (this is not a threat)."

@book{perelomov1986,
  author = {A. Perelomov},
  title = {Generalized Coherent States and Their Applications},
  publisher = {Springer-Verlag},
  year = 1986,
}

Nagesh Adluru Says: Comment #9 April 3rd, 2007 at 12:47 pm
Your lectures are simply amazing. I read them as an art viewer views paintings, without bothering too much about the rigor. It's like bungee jumping without bothering too much about the dynamics of how it works. You show the limits of human understanding, yet show the pure scientific spirit of exploration for truth, and that too in the most promising directions. Thanks a lot for choosing to be in TCS! :)

Stephen Says: Comment #10 April 3rd, 2007 at 3:34 pm
In regard to your thought experiment about experiencing different colors of dots, you say: "Then what's the probability that "you" (i.e. the quantum computer) would see the dot change color?" But it seems to me that the main point is: for the person to have a memory about his previous experiences, his brain must have more than just the one qubit you mention in your example (3/5 |R> + 4/5 |B> etc.) He would have to have some ancilla qubits which record his previous state. If he has only a one-bit brain, then he can't know whether the dot changed color even classically.
You could imagine that the quantum upload had some routine which every millisecond or whatever CNOTted the content of the "what color dot am I looking at" qubit into some fresh |0> ancilla qubit (say |0> means blue, |1> means red). Then after t timesteps he could check to see whether he has seen the dot change color by performing the following unitary operation: if all t memory qubits are in the |0000…> state or the |1111…> state, do nothing to an ancilla qubit initialized to zero, and otherwise apply a NOT to it. This ancilla qubit is then the "have I seen the dot change color" qubit. Now the conditional probability question has an answer, equal to the absolute square of the amplitude for the "have I seen the dot change color" qubit to be |1> (which I think is the same as the classical answer.) One might argue that this is essentially equivalent to John Preskill's response, since the person's memory is decohering his "which color dot am I looking at" qubit, but I am reluctant to call this decoherence because the uploaded person can still coherently manipulate his memory qubits.

Scott Says: Comment #11 April 3rd, 2007 at 3:56 pm
Stephen, thanks for your interesting comment! Yes, it's clear that you won't remember the color change. If someone asks you afterward whether you saw the dot change color or not, the only honest answer will be that you have no idea. Even so, it's slightly unsettling that, conditioned on what you're seeing at time t_1, quantum mechanics can't even give you a probabilistic prediction for what "you'll" see at time t_2! I have a lot of sympathy for your response to this problem, which I do see as basically equivalent to Preskill's (since even if the "decoherence" is only temporary, it still completely changes the nature of the experiment, making it possible to talk about the past by reference to memories in the present).

Sean Carroll Says: Comment #12 April 3rd, 2007 at 5:17 pm
You have time running down in your diagrams!
That makes no sense at all. Also, I understand that you were explaining the arrow of time in the context of decoherence, not proposing a theory for why there is an arrow of time in the first place. But from the perspective of this latter issue, the question is why we're not in thermal equilibrium from the start; why did we begin in such a special state?

Scott Says: Comment #13 April 3rd, 2007 at 5:29 pm
Hi Sean,
(1) In computer science, time flows down! (Just like trees grow down.)
(2) I completely agree with you that the "deep" question about the arrow of time is why we started in such a low-entropy / unentangled state. There's a reason I sidestepped that can-o-worms! (Still, I should at least mention it somewhere…)

Moshe Says: Comment #14 April 3rd, 2007 at 6:38 pm
But is it a separate question about the initial conditions? Is there an additional aspect of the boundary conditions to be explained, or is decoherence explained in terms of the thermodynamic arrow of time?

Carl Says: Comment #15 April 3rd, 2007 at 6:58 pm
Well, we can suppose that the universe has two terminal conditions: the Big Bang on one side, and the Big Rip on the other. If we define time as moving "forward" in the direction that systemic entropy increases, then the Big Bang represents the point of minimum entropy and the Big Rip (= newfangled heat death) the point of maximum entropy. In between these two points, time doesn't need to have a particular "direction" for individual events, but the accumulation of entropy allows us to nominally define a direction for time's arrow. The blog On Philosophy has a decent explanation of this view.

Pascal Koiran Says: Comment #16 April 4th, 2007 at 12:46 am
I asked the question about the uncertainty principle & infinite-dimensional spaces because I vaguely remember reading somewhere the following (wrong?)
statement: there's no uncertainty principle in finite-dimensional spaces because the relation AB - BA = Identity cannot hold for (finite dimensional) matrices. The linear algebra claim is certainly true (the left-hand side has trace 0 but not the right-hand side). However I do not really understand the connection to the uncertainty principle…

Pascal Koiran Says: Comment #17 April 4th, 2007 at 12:52 am
Another question: have you noticed that you can attach to the root of your multiverse tree another multiverse tree, with leaves pointing upwards instead of downwards? (That would for sure make Sean Carroll happy!) This suggests that before the big-bang, there existed perhaps another multiverse with a time arrow opposite to the time arrow of our multiverse… I think that Sean Carroll had a blog post on a similar idea, and even an actual physics paper! Perhaps he would care to comment on that?

David Speyer Says: Comment #18 April 4th, 2007 at 4:31 am
Pascal, I'm a mathematician, not a physicist, so take this with a grain of salt but… The general mathematical form of the uncertainty principle is the Robertson-Schroedinger relation: if A and B are two Hermitian operators on a Hilbert space and phi is any nonzero vector in the Hilbert space, then

Delta(A,phi) Delta(B,phi) >= (1/2) |<phi| [A,B] |phi>| / <phi|phi>

where Delta(A,phi), the standard deviation of A in state phi, is given by

Delta(A,phi) = ( <phi| A'^2 |phi> / <phi|phi> )^{1/2}

where A' is A normalized to have expected value zero, i.e., A' = A - (<phi| A |phi> / <phi|phi>) Id. In particular, the minimal possible product of the Deltas is half the norm of the smallest eigenvalue of [A,B]. The nice thing about taking [A,B] = Id, in this context, is that we know what the eigenvalues of the identity are, but we get a nontrivial bound whenever [A,B] is nondegenerate.

David Speyer Says: Comment #19 April 4th, 2007 at 4:35 am
> Close tag. Oh, gosh, HTML (or at least Safari) hates my inner products. Well, if that doesn't close my tag then I'll have to wait for someone cannier in the ways of blogs to clean up my mess. Sorry!
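For what it's worth, the Robertson-Schroedinger relation in comment #18 is easy to sanity-check numerically for random finite-dimensional operators. A throwaway NumPy sketch (my own, following the unnormalized-state convention above; the dimension and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(d):
    """Random d x d Hermitian matrix."""
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

def delta(A, phi):
    """Standard deviation of A in the (unnormalized) state phi."""
    norm2 = np.vdot(phi, phi).real
    mean = np.vdot(phi, A @ phi).real / norm2
    Ap = A - mean * np.eye(len(phi))          # A shifted to expected value zero
    return np.sqrt(np.vdot(phi, Ap @ Ap @ phi).real / norm2)

d = 5
A, B = rand_herm(d), rand_herm(d)
phi = rng.normal(size=d) + 1j * rng.normal(size=d)   # no need to normalize

lhs = delta(A, phi) * delta(B, phi)
rhs = 0.5 * abs(np.vdot(phi, (A @ B - B @ A) @ phi)) / np.vdot(phi, phi).real
print(lhs >= rhs)   # True: the inequality holds for every nonzero phi
```

Random trials never produce a violation; equality requires specially chosen states (e.g. the coherent states John Sidles mentions in comment #8).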
I just checked, and all of my formulas are in Wikipedia's entry for the Uncertainty Principle, except that they normalize their state to have norm 1 from the beginning. So I'll refer you there rather than trying to retype them in a way that will typeset correctly.

John Sidles Says: Comment #20 April 4th, 2007 at 4:39 am
Pascal Koiran: There's no uncertainty principle in finite-dimensional spaces because the relation AB - BA = Identity cannot hold for (finite dimensional) matrices.
That's formally correct, but there is a very simple loophole that quantum system engineers exploit to establish a connection between finite-dimensional coherent states and infinite-dimensional harmonic oscillator states. Consider spin operators {Sx, Sy, Sz} in the usual representation of 2j+1 (finite) dimensions, satisfying [Sx, Sy] = i Sz. Now restrict attention to states |ψ〉 that are near the "north pole" of the Hilbert space, i.e., such that Sz ≈ j. Define rescaled operators p = Sx/sqrt(j), q = Sy/sqrt(j). Then [p,q] ≈ i when acting on those "north pole" states. For quantum simulation purposes, this means that harmonic oscillators can be treated as large-j spins, provided that a control loop (or equivalently, a thermal bath) is present to restrict quantum trajectories to the neighborhood of the north pole. Then j need only be set to the (nondimensional) energy scale of the excitations of the system being simulated. It is very convenient to introduce an energy cut-off in this way, because all the "nice" algebraic properties of the spin operators are preserved. And, it is pleasant to write simulation codes in which "everything is a finite-dimensional spin" — this keeps life simple.
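John's loophole is a two-minute check in code: build the finite-dimensional spin operators, rescale, and watch [p,q] act like i near the north pole. A NumPy sketch (my own; the spin value j and the near-pole test state are arbitrary choices):

```python
import numpy as np

def spin_ops(j):
    """Spin-j operators Sx, Sy, Sz in the basis |j,j>, |j,j-1>, ..., |j,-j>."""
    m = np.arange(j, -j - 1, -1)
    Sz = np.diag(m).astype(complex)
    # raising operator: S+ |j,m> = sqrt(j(j+1) - m(m+1)) |j,m+1>
    coeff = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))
    Sp = np.diag(coeff, k=1).astype(complex)
    Sx = (Sp + Sp.conj().T) / 2
    Sy = (Sp - Sp.conj().T) / (2 * 1j)
    return Sx, Sy, Sz

j = 50
Sx, Sy, Sz = spin_ops(j)
p, q = Sx / np.sqrt(j), Sy / np.sqrt(j)      # rescaled operators

# A state concentrated near the north pole (Sz ~ j):
psi = np.zeros(2 * j + 1, dtype=complex)
psi[0], psi[1] = 1.0, 0.3
psi /= np.linalg.norm(psi)

comm = p @ q - q @ p                         # equals i Sz / j exactly
print(np.vdot(psi, comm @ psi))              # close to 1j, error of order 1/j
```

For this fixed near-pole state the deviation of ⟨[p,q]⟩ from i shrinks like 1/j, which is the sense in which the large-j spin imitates a canonical pair.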
On a more fundamental level, it would presumably be possible to use the above trick to cut-off high energy states in path integrals by treating x and p as operators rather than oscillator coordinates — it is not clear (to me) whether this is the approach that people are exploiting in what is called noncommutative geometry. There is a lot of literature on path integrals defined over Lie groups — most of this literature is not engineer-friendly. But I get the impression that fundamental research in this area is definitely not out of reach of graduate students. Perhaps someone can comment? Ghaith Says: Comment #21 April 4th, 2007 at 4:51 am Hi Scott, I had two questions, 1. How would a hidden variable theory explain the correlation between the two qubits after measuring the EPR pair? 2. Why would the tree “run out of room to expand” if we start in an infinite dimensional Hilbert space? John Sidles Says: Comment #22 April 4th, 2007 at 5:19 am I will remark that David Speyer’s post and my post are saying pretty much the same thing about uncertainty relations, each in our own idiom. Also Scott was right to foresee that “this will be one of the most loved and hated lectures of the course”, given that many people have formed strong opinions regarding the interface between classical and quantum reality. Barry Mazur coined an aphorism that explains why: “Utter confidence is the gift of ignorance.” Scott Says: Comment #23 April 4th, 2007 at 6:47 am (1) As the lecture explains, any hidden-variable theory has to invoke “instantaneous communication” between two entangled qubits in order to explain their correlations — that’s precisely the content of Bell’s theorem. This does not formally violate special relativity, since even though one’s description of the hidden variables involves nonlocality (usually in a preferred reference frame), one can’t exploit that to send signals faster than light. Of course, many people use this as an argument against hidden-variable theories anyway. 
(2) Even in an infinite-dimensional Hilbert space, any "branch" has to be thought of as having a finite width ε, just because of quantum fluctuations within that branch. And assuming (as I am) that we're talking about a bounded region of spacetime, that means we can only have O(1/ε) branches. As a toy example, think of the possible positions of a particle along a 1-cm interval. Even though the particle's position is a real number, if two positions differ by less than (say) 10^-33 cm then they can never be distinguished, and should therefore be thought of as belonging to the same branch. But that implies that there can be at most 10^33 branches.

HN Says: Comment #24 April 4th, 2007 at 11:01 am
Does working on QC make people seem crazier, or do only seemingly crazier people work on QC? Anyhow, thanks for a great series of lectures! I have another lingering question on which I'd love to hear comments from you and your readers: how do you guys manage your schedules? Is there a well-known algorithm to sort out the mess of reading, refereeing, teaching, grant writing, admin work, networking, blogging, commenting on blogs (!), and a myriad of other stuff you do? If not a specific algorithm, is there a complexity class of such problems? The brute-force algorithm "take things as they come" has been OK for me; I'm just wondering about a more efficient one I am not aware of.

Scott Says: Comment #25 April 4th, 2007 at 11:21 am
HN: Are you implying that I come across as less than sane? ("You may be right / I may be crazy / but it just might be a LOOOO-natic you're lookin' for…")
Regarding your other question: if you ever figure out how to manage an academic schedule, please let me know! I've been wondering for years.

HN Says: Comment #26 April 4th, 2007 at 12:01 pm
Are you implying that I come across as less than sane?
No, you're too sane for your own good.
Scott Says: Comment #27 April 4th, 2007 at 12:09 pm

roland Says: Comment #28 April 4th, 2007 at 3:08 pm
Scott, do you actually believe in the many worlds interpretation?

Scott Says: Comment #29 April 4th, 2007 at 3:28 pm
I believe in every interpretation of quantum mechanics to the extent it points out the problem, and disbelieve in every interpretation to the extent it claims to have solved it.

aravind Says: Comment #30 April 4th, 2007 at 4:20 pm
Re: HN and Scott's comments on time-management in academia: Knuth suggests batch-processing — keep blocks of time for single tasks, don't do too much task-swapping. I, personally, have lots of room for improvement in time-management, but have found this technique very helpful whenever I have followed it.
aravind

wolfgang Says: Comment #31 April 4th, 2007 at 5:11 pm
I have two questions about your remark "if two positions differ by less than (say) 10^-33 cm then they can never be distinguished, and should therefore be thought of as belonging to the same branch."
1) Are you all of a sudden a physicist?
2) Since you refer to the Planck length, which has a meaning only in quantum gravity, do you suggest that the interpretation of quantum theory depends on (has to wait for) quantum gravity?

Scott Says: Comment #32 April 4th, 2007 at 5:34 pm
Wolfgang: If you have to be a physicist to have any definite belief about the Planck scale, then sure — I'm a physicist. As for whether quantum gravity (and specifically, the holographic entropy bound) is relevant to the interpretation of quantum mechanics: sure it is, if people insist on talking about infinite-dimensional Hilbert spaces in the first place!
In other words, if a believer in infinitely many branches objected to my quantum-gravity argument, on the grounds that I was bringing in something extraneous to quantum mechanics itself, my response would boil down to: "You started it!"

Greg Kuperberg Says: Comment #33 April 4th, 2007 at 10:29 pm
Pascal: Here is the generalized uncertainty principle as stated in my (alas, unfinished) notes: Let X and Y be real-valued quantum random variables and let Z = i[X,Y]; Z is another real-valued quantum random variable. Then Var[X]Var[Y] ≥ Ex[Z]^2/4 with respect to any given state of the system |psi> or rho. In the event that the commutator Z is constant, then the right side is easy to compute.

Greg Kuperberg Says: Comment #34 April 4th, 2007 at 10:32 pm
I believe in every interpretation of quantum mechanics to the extent it points out the problem, and disbelieve in every interpretation to the extent it claims to have solved it.
Maybe I am the thick one on this topic, but: Do we know that there is any real "problem" other than that we humans have trouble believing the truth, i.e., non-commutative probability?

Blake Stacey Says: Comment #35 April 5th, 2007 at 4:29 am
"It has not yet become obvious to me that there's no real problem. I cannot define the real problem, therefore I suspect there's no real problem, but I'm not sure there's no real problem."

Scott Says: Comment #36 April 5th, 2007 at 5:47 am
Greg, I see the measurement problem as a "hard problem" like consciousness or the existence of the universe, not an "easy problem" like … uh, y'know … quantum gravity or P vs. NP. In other words, I have no idea what an answer would look like, and I'm unwilling to say whether or not one exists. What I know is that, absent some insight all of us are missing, people will continue to pose the problem for as long as there's quantum mechanics, just as other people will continue to say that it's meaningless. This view is not incompatible with yours.
John Sidles Says: Comment #37 April 5th, 2007 at 6:47 am
So far, this is a very enjoyable discussion that has the added virtue of being very friendly and thought-provoking for students. So thank you Scott, yet again. Just to mention, even if quantum mechanics had never been discovered, these same issues (determinism, computational complexity, origin and fate of the universe, etc.) would still be discussed, and surprisingly many of the same mathematical and philosophical arguments would apply. A student-friendly introduction to this literature is Lenore Blum's highly readable Computing over the Reals: where Turing meets Newton (which includes many further references). One point of view is that quantum information theory is simply the most natural "complexification" of the real mathematics that Blum's article discusses. As usual in mathematics, complexification introduces new invariances and new insights that make links to other branches of mathematics (and physics) easier to see. On these grounds, we can expect a glorious mathematical future for quantum information theory. An alternative point of view, which is particularly congenial to engineers, regards the fundamental equations of quantum mechanics as settled … maybe not perfectly, but well enough for practical work, in particular as the equations describe nonrelativistic dynamics and measurement. In other words, just accept Nielsen and Chuang Chapters 2 and 8 as gospel! From this engineering point of view, quantum equations of motion are rather like fluid equations of motion, and quantum information theory is rather like computational fluid dynamics (CFD). If we do a bit of historical digging, we find that the biggest breakthroughs in CFD over the past twenty years have been partly associated with improved understanding of the CFD equations, but to an even greater extent have been driven by better numerical techniques for solving them.
Overset grid techniques in particular have assumed a central role—these techniques allow engineers to wrap numerical grids around the "lumpy" aerodynamic objects that arise in practice. The article Thirty years of development and application of CFD at Boeing reviews recent CFD history in a very student-friendly form. It is interesting that this review article, written in 2002 by Boeing's own CFD experts, failed to foresee the further acceleration of CFD techniques to become the enabling technology behind the half-trillion dollar enterprise that is the Boeing 787 Dreamliner; this provides a not-too-common example of a technology that works far better in the real world than even its creators envisioned. In our own experience, the analog of overset grids in CFD appears to be Kähler manifolds in quantum simulation … a kind of all-purpose mathematical object on which it is particularly convenient to realize quantum equations of motion. It will be interesting to see how far these techniques can be pushed. To bring this post to a point, we discern in the above examples that quantum information theory helps humanity to create ideas that work, to create technologies that work, and most important (IMHO), to create communities that work—meaning peaceful communities that create resources and jobs. It is desirable for students, especially, to appreciate that quantum mechanics is a big elephant that can be embraced from many different directions and in service of many different objectives: all of which are wonderful.
Note: I set out to post the least cynical, most cheerfully optimistic essay that I could. Others are encouraged to try too, and links to interesting literature are especially nice.

Peter Shor Says: Comment #38 April 5th, 2007 at 8:19 am
Scott says: I believe in every interpretation of quantum mechanics to the extent it points out the problem, and disbelieve in every interpretation to the extent it claims to have solved it.
Interpretations of quantum mechanics, unlike Gods, are not jealous, and thus it is safe to believe in more than one at the same time. So if the many-worlds interpretation makes it easier to think about the research you're doing in April, and the Copenhagen interpretation makes it easier to think about the research you're doing in June, the Copenhagen interpretation is not going to smite you for praying to the many-worlds interpretation. At least I hope it won't, because otherwise I'm in big trouble.

serafino Says: Comment #39 April 5th, 2007 at 9:43 am
But if quantum mechanics is an 'operating system', or a 'syntax', does it make any sense to interpret quantum mechanics?

Blake Stacey Says: Comment #40 April 5th, 2007 at 10:03 am
I just stole Peter Shor's comment for a blog post much less scientific than this one, linked in the URL field.

Scott Says: Comment #41 April 5th, 2007 at 10:23 am
I like that, serafino: "The Interpretation of Windows XP." At the risk of sounding like some Continental philosopher smoking his pipe and uttering vacuous profundities: the basic goal with interpretations is to start from the "syntax" of quantum mechanics, and connect it to the "semantics" of what we actually experience.

Greg Kuperberg Says: Comment #42 April 5th, 2007 at 11:26 am
I see the measurement problem as a "hard problem" like consciousness or the existence of the universe, not an "easy problem" like …
Why single out measurement from the rest of quantum probability, given that it is an unavoidable corollary of the theory? You know full well that if a quantum Alice entangles with i.i.d. quantum Bobs in repeated trials, then her state will concentrate at the perception that the Copenhagen interpretation is true.
I have no idea what an answer would look like
Like Feynman, I have no idea what the question should look like.

Jonathan Vos Post Says: Comment #43 April 5th, 2007 at 11:54 am
Cool comment from Shor.
Now, if you can get Deutsch and Feynman to comment, as would be possible in some worlds… I’m deliberately NOT putting on my Science Fiction Author hat, which would tempt me to babble about the Multiverse. Would you be willing to comment on the below? It appeared in the past 2 days: arXiv:quant-ph/0407008 (cross-list from quant-ph) [ps, pdf, other] : Title: Classically-Controlled Quantum Computation Authors: Simon Perdrix, Philippe Jorrand Comments: 20 pages Quantum computations usually take place under the control of the classical world. We introduce a Classically-controlled Quantum Turing Machine (CQTM) which is a Turing Machine (TM) with a quantum tape for acting on quantum data, and a classical transition function for a formalized classical control. In CQTM, unitary transformations and measurements are allowed. We show that any classical TM is simulated by a CQTM without loss of efficiency. The gap between classical and quantum computations, already pointed out in the framework of measurement-based quantum computation is confirmed. To appreciate the similarity of programming classical TM and CQTM, examples are given. James Graber Says: Comment #44 April 5th, 2007 at 12:00 pm Another doofus question: Does measurement based quantum computing (MBQC) have anything to do with the problem of measurement? (I always thought it did.) In fact I thought that the existence of MBQC pretty much refuted the big idea behind measurement-free or collapse-free interpretations. Am I wrong? Dave Bacon Says: Comment #45 April 5th, 2007 at 12:05 pm In computer science, time flows down! (Just like trees grow down.) And in all sorts of crazy directions when you call a procedure or function. And then there are digital circuit diagrams where time flows in many different directions. Scott Says: Comment #46 April 5th, 2007 at 1:56 pm Would you be willing to comment on the below? 
Jonathan: The paper in question shows that you can have a universal quantum Turing machine with a classical tape head, or in other words that it’s only the tape symbols that need to be in superposition. This looks to me like a correct result, also easy and unsurprising. Scott Says: Comment #47 April 5th, 2007 at 2:11 pm James: Despite the claims we sometimes hear to the contrary, no interpretation of quantum mechanics can ever be “refuted” by measurement-based quantum computing or any other quantum-mechanical phenomenon. This is because, by definition, all interpretations lead to exactly the same predictions for all such phenomena. (If they don’t, then we should think of them not as interpretations but as rival physical theories.) Having said that, MBQC really is a beautiful discovery, and it’s reasonable to hope that understanding it better might clarify some of the issues in quantum foundations. Niel Says: Comment #48 April 5th, 2007 at 6:35 pm James: in addition to what Scott says above, the “magic” of MBQC has more to do with the interplay between measurement and entanglement. To get non-trivial computation out of measurements, you also need the quantum correlations + ‘classical’ feed-forward; and it is not unusual to see MBQC compared to teleportation. There’s nothing in MBQC that isn’t already contained in much more popular (or more frequently popularized) instances of the oddness of quantum info. Consider what a classical counterpart to MBQC would look like (with some fudging in order to get something which is only slightly trivial, instead of completely trivial). Replace the entanglement graph by bits which are either correlated between neighbors in a graph, or instructions to toggle your bit depending on an interaction with someone else. 
Just in establishing the correlations corresponding to the entanglement graph, you are performing a computation (with your measurements just corresponding to looking at the actual bit-values, rather than blindly copying or toggling the single bit you have). James Graber Says: Comment #49 April 5th, 2007 at 7:05 pm Thanks for answering my naïve questions. I absolutely agree with what you said re interpretations vs. rival theories. What this implies to me is that every interpretation must have something essentially equivalent to collapse, (perhaps disguised). For many-worlds this has always seemed obvious to me, they just call it splitting, but it waddles like a collapse and it quacks like a collapse, so it’s a duck. I guess for Bohm, the collapse-equivalent is choosing or making apparent the previous choice of one of many, perhaps infinitely many, nearby trajectories. (If there really is an equivalent to collapse in an interpretation, but one says not by choosing other words, that seems like a fraudulent sophistry to me.) In a parallel inference, if one interpretation, e.g. hidden variables, requires nonlocality, then they all do, no matter how much they try to disguise it. (Of course I recognize the distinction between Einstein-nonlocality and signal-nonlocality, although this is not easy to grasp, and even harder to believe in, sort of like the twin paradox.) It has always seemed to me that the nonlocality denying interpretations pretty much rely on “don’t ask, don’t tell” or “you can’t ask that question” which just means ducking the issue. (I must confess that I have never been able to comprehend any of the “you can have locality, but you must give up realism” positions. To me they all seem to require either a terrible misuse of language, or they go back to “don’t go there.”) I am eagerly hoping for something like the dialogue between “Axioms” and “You” in “Is P versus NP Formally Independent”, only between a skeptic and a believer in (advocate of?) 
this locality-yes, reality-no position. Maybe it would help me get my head around it. Incidentally, the same argument above would imply that if one interpretation requires “rolling the dice”, they all do, including Bohm. I’ll buy that. The so called determinism is just another form of rolling one big die at the beginning of the universe, instead of lots of little dice all the time. I will admit the trajectories are a neat implementation of this idea, however. Based on the above, I conclude that any reasonable interpretation of QM must include all three of randomness, Einstein–nonlocality and collapse. Strangely, this seems to be noncontroversial for randomness, but highly controversial for collapse and Einstein-nonlocality. On the other hand, just because I call something an interpretation rather than a rival theory doesn’t mean this is true. There is already quite a literature of authors accusing Bohm of being a rival theory, rather than an interpretation. I have always thought the mathematical evidence for Bohm being identical to SQM, and hence an interpretation, was pretty airtight, modulo one division by zero issue. But I was surprised by your proof that Bohm does not work in finite dimensional Hilbert spaces. (I had certainly never heard that one before.) If that is true of Bohm and not true of other interpretations, I think that would be a strong argument for the rival-theory position. I am tempted to ask what happens to Bohm if you try to coarse-grain it or project it down or in some other way reduce it to a finite dimension, but that sounds like too ill-formed a question even for me. Instead I will ask how Bohm deals with your counterexample if it is embedded in an infinite dimensional Hilbert space. My guess is that it merely boils down to choosing one from a set of nearby trajectories. Thanks for trying to help. I will study your reply some more. 
I don’t know where the magic comes from, but to me at least the weirdness of entanglement is covered by the Einstein-nonlocal aspect. Sorry for such a long post. I know this is not my blog. Greg Kuperberg Says: Comment #50 April 5th, 2007 at 8:43 pm At the risk of sounding like some Continental philosopher smoking his pipe and uttering vacuous profundities: the basic goal with interpretations is start from the “syntax” of quantum mechanics, and connect it to the “semantics” of what we actually experience. Yes, Scott, you’re at risk. I really don’t understand what the problem is. I understand that people who are less used to quantum probability would perceive a fundamental problem here. But once you get used to the later chapters of Nielsen and Chuang, for example, then what is the real mystery? You can learn that if you dephase a qubit, it becomes a c-bit. You can learn that if you start with qubit A, entangle it with qubit B, then dephase B, then B has become classical and, voila, has measured A. So there measurement appears, modelled as a quantum operation. Measurement is not some tacked-on extra thing; it appears inside the game with unitary operators if you combine them properly. Of course, a realistic observer is more complicated than a dephased qubit and a realistic measurement is more complicated than creating entanglement between qubits. But why should realistic measurement be fundamentally different from this fairly simple special case, which is after all important in quantum algorithms? I have the feeling that “the measurement problem” serves one of two purposes. It is either interesting for people who haven’t learned the above; or it is a “hook” to get people to study better questions in quantum probability. Otherwise, again, I just don’t know what people are trying to accomplish. I don’t want to accuse anyone of being a dolt, least of all any of the serious experts, but these discussions bother me more every time I see them. 
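Greg's dephasing story above ("entangle A with B, then dephase B, and B has measured A") is a five-line density-matrix computation. A NumPy sketch (my own; the amplitudes are an arbitrary example):

```python
import numpy as np

a, b = 0.6, 0.8                          # qubit A starts in a|0> + b|1>

# Step 1: entangle A with a fresh |0> qubit B via CNOT, giving a|00> + b|11>
psi = np.array([a, 0, 0, b], dtype=complex)
rho = np.outer(psi, psi.conj())          # joint density matrix of AB

# Step 2: dephase B (kill B's off-diagonal terms in the computational basis)
P0 = np.kron(np.eye(2), np.diag([1.0, 0.0])).astype(complex)
P1 = np.kron(np.eye(2), np.diag([0.0, 1.0])).astype(complex)
rho = P0 @ rho @ P0 + P1 @ rho @ P1

# The joint state is now the classical correlated mixture
# |a|^2 |00><00| + |b|^2 |11><11|: B holds a c-bit record of A's value.
print(np.round(rho.real, 2))

# Reduced state of A: diagonal, i.e. A has been "measured" by B
rhoA = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(np.round(rhoA.real, 2))            # diag(0.36, 0.64)
```

Note that A's reduced state is already this diagonal mixture right after step 1: entangling with B by itself decoheres A, and dephasing B just turns B's record into a classical bit.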
Certainly in my own notes (if I ever find time to revise them), I hope to debunk the measurement problem rather than philosophize over it. Scott Says: Comment #51 April 5th, 2007 at 9:50 pm James, it’s a pleasure to see you struggle intelligently with some of the issues all of us abyss-dwellers eventually face. To answer your questions: If you embed my counterexample into an infinite-dimensional Hilbert space, what you’ll get is a wavefunction with a discontinuity, which a Bohmian would reject as “pathological.” You can define hidden-variable theories in finite-dimensional Hilbert spaces that are close to Bohmian mechanics; the only problem is that they won’t be perfectly “deterministic” (in the restricted sense that Bohmian mechanics is). So perhaps one should say: while the idea of hidden-variable theories doesn’t lead to any physical predictions, Bohm’s specific hidden-variable theory does implicitly make a prediction — namely, that the right Hilbert space is that of the positions of point particles in R^3. And this is a prediction that, even in Bohm’s time, there were excellent reasons for thinking was wrong. Scott Says: Comment #52 April 5th, 2007 at 10:03 pm Greg, I’m tempted to tell you what others have told me about music: if a particular analysis of the measurement problem doesn’t “do it” for you (i.e. doesn’t give you any new insights about quantum mechanics), then you shouldn’t bother studying it. As for whether these analyses “do it” for anyone, the evidence we have is that many-worlds led Deutsch to quantum computing, Bohmian mechanics led Bell to Bell’s inequality, Copenhagen and many-worlds have helped Shor think about his research (as the man himself just told us), etc. etc. Greg Kuperberg Says: Comment #53 April 5th, 2007 at 10:24 pm Copenhagen and many-worlds have helped Shor think about his research Sure, I appreciate both of these for their intuitive or pedagogical value. I use them too. But I don’t think of them as answers to a problem. 
Granted, Copenhagen is an answer to a question: it’s the correct description of what an agent in a quantum world perceives. But that’s an empirical question, not a philosophical one. Okay, I concede that pedagogy and intuition count as a third purpose to the measurement “problem”. Scott Says: Comment #54 April 6th, 2007 at 12:11 am Here’s a question, Greg: is there anything that counts for you as a philosophical problem? serafino Says: Comment #55 April 6th, 2007 at 4:17 am About those philosophical problems, I remember that Dirac gave a speech in Rome (April 14, 1972), talking about the development of QM. He pointed out the crucial role of the present quantum formalism, that he thought wasn’t the ultimate and definitive formalism. Since I was there, with my tape recorder, I can quote his words precisely. “I must say that I also do not like indeterminism. I have to accept it because it is certainly the best that we can do with our present knowledge. One can always hope that there will be future developments which will lead to a drastically different theory from the present quantum mechanics and for which there may be a partial return of determinism. However, so long as one keeps to the present formalism, one has to have this indeterminism.” John Sidles Says: Comment #56 April 6th, 2007 at 5:30 am Serafino says: “I must say that I also do not like indeterminism. I have to accept it because it is certainly the best that we can do with our present knowledge.” Just to link the above statement to information theory, it is a characteristic prediction of quantum mechanics, confirmed by real-world experience, that experiments can have many possible outcomes. And this obvious and seemingly boring statement has profound mathematical consequence. 
Example: if we scatter 10^16 photons off a test mass (e.g., a LIGO or a nanoscale cantilever), and measure them with homodyne interferometry, then each photon yields a binary-valued data point, and therefore, each experiment yields a data record that is a binary number of 10^16 bits (note: the preceding examples are not abstract … they are the way that measurements are done in the real world). Now comes the mathematical point: almost all members of the ensemble of 2^(10^16) possible data records are algorithmically incompressible. And this statement is nothing more than the Kolmogorov-Chaitin definition of randomness. So quantum mechanics is necessarily random, and so is any other theory that predicts sufficiently large ensembles of possible data records. The preceding is what we teach engineering students about the origin of quantum randomness. The main virtue of this approach is that it encourages students to move on to practical applications, rather than stopping to “solve the mystery of quantum randomness.” Would we really want to remove the randomness from quantum mechanics, if the price were that every data record would necessarily be algorithmically compressible? That would be much too high a price! Of course, the mystery of quantum mechanics can be rescued by introducing hidden variables as “God’s crib sheet.” But for engineering purposes there is not much point in doing this, so long as we are not allowed to look at the crib sheet. Of course, three quantum mysteries (or more) remain. Why is the quantum state space of the universe so large? Why is so little of this state space accessible to us, who are embedded within it? Why do the quantum equations of motion so carefully guard the classical-quantum boundary from direct observation? David Speyer Says: Comment #57 April 6th, 2007 at 5:49 am OK, a question now. Can anyone point me to a good description of one of these “you can have locality, but you must give up realism” interpretations?
It seems to me that the real meaning of Bell’s theorem is that I am forced to give up on locality no matter what, so realism is kind of a red herring. (In terms of wave function like descriptions, Bell’s theorem says that I am really required to think of the wave function for two particles as a function on space^2, not two functions on space; in terms of Hilbert space descriptions it says that I can’t mimic an entangled state by a pure state in some larger Hilbert space.) I’d be curious to see how these non-realist local theories get around this. Thanks in advance! Scott Says: Comment #58 April 6th, 2007 at 6:13 am David, it depends what you mean by “local.” You can think of the Copenhagen interpretation as “local but not realistic” in the following sense: nothing Alice can do to her half of an EPR pair can possibly affect Bob’s density matrix, and in Copenhagen the density matrix is all there is. Of course, if you want a density matrix describing Alice’s and Bob’s systems jointly, then it has to be entangled. But maybe that’s not so bad, since even in the classical world, we know that a joint probability distribution over two systems in general has to be correlated. Greg Kuperberg Says: Comment #59 April 6th, 2007 at 7:29 am Here’s a question, Greg: is there anything that counts for you as a philosophical problem? No! I believe in philosophical implications, but not philosophical problems. I don’t want to get too impolitic about philosophers at the moment, but I can say this about the boundary between science and philosophy. Historically, philosophy has been a repository for confusions in science for which no one had an answer, or for which there can be no answer. Until Kepler and Newton and those guys, the motion of the planets was a good philosophy problem. Now it mostly isn’t, it’s physics. It’s no longer fun to debate why Mercury chases Venus; instead, you just learn Newton’s answers. 
I view quantum information theory, the whole arc of it from von Neumann to Holevo and Shor, as the same cure for the philosophy of quantum mechanics. It clears the air. I was thrilled at QIP 2004, because the whole conference made quantum philosophy trite. Even so, philosophical implications are a good thing. Actually I want to emphasize a specific point here. I am convinced that much of the driving force of quantum philosophy is the old-fashioned language of separate unitary operators and measurements. If you only know the Copenhagen business as Max Born knew it, then it really is confusing and an invitation to philosophize. But once you get used to mixed states and quantum operations, and generally classical probability as a special case of quantum probability, then the measurement “problem” really seems like a pretense. Greg Kuperberg Says: Comment #60 April 6th, 2007 at 7:38 am Of course, if you want a density matrix describing Alice’s and Bob’s systems jointly, then it has to be entangled. But maybe that’s not so bad, since even in the classical world, we know that a joint probability distribution over two systems in general has to be correlated. I completely support this explanation, except for the word “maybe”. Entanglement is no more than a flavor of correlation. Also one should note the bias in the fossilized term “density matrix”. For a lot of physicists, indeed for virtually all of them until the past few decades, density matrices are “just a formalism”. A much better name is mixed state, whereas a vector state in a Hilbert space is a pure state. Mixed states let you completely recover locality. When I learned about mixed states, they lifted a fog. Anyway, the short answer to your question, David, is that you don’t give up on locality at all, you just redefine it. David Speyer Says: Comment #61 April 6th, 2007 at 7:49 am Thanks, that helps a lot! mitchell porter Says: Comment #62 April 7th, 2007 at 1:24 am This is a topic I can get angry about.
Where might we be by now, if the dominant attitude in physics had always been: of course quantum mechanics is incomplete; the work of fundamental physics will remain unfinished so long as all we have are quantum theories. I suppose that, if nothing else had been invented, the prevailing theories might be Bohmian field theory and general relativity, or even a Bohmian string theory; and the big conceptual problem in physics would be to understand the relationship between Bohmian nonlocality and relativistic locality. A world in which Bohmian mechanics was the dominant paradigm would, I think, be intellectually much healthier. It may sound strange to say that even today, there is a prevailing complacency towards the meaning of quantum theory; but just see how many people there still are who feel their intellectual duty is to adapt themselves to quantum reality, become comfortable in a quantum universe, etc. So far as I can tell, this is mostly a matter of ceasing to ask questions such as, why does an observable take the particular value it does; did it have a value before the measurement; is the ‘quantum state’ the actual state of the object, or just an aid to calculation; and so forth. These are completely natural questions to ask, and they would be a lot harder to ignore if Bohmian mechanics, with its classical determinism and objectivity, were the orthodoxy, and the Copenhagen interpretation was the minority viewpoint. Scott Says: Comment #63 April 7th, 2007 at 2:36 am Mitchell, out of genuine curiosity, let me ask two questions to try and bridge the gap between your way of thinking and mine. (1) Does it matter to you whether people adopt Bohmian mechanics or any of a dozen other nonlocal hidden-variable theories that I could write down, with different guiding equations? As I mentioned earlier, the big problem I have with Bohmian mechanics is that it only works in infinite-dimensional Hilbert spaces. 
(2) Do you think a physicist would be wrong to say of a philosophical paradigm, “I’ll embrace this if, and only if, it leads to new insights into concrete problems that I’m trying to solve”? mitchell porter Says: Comment #64 April 7th, 2007 at 5:32 am (1) Scott, if you mean Bohm-like theories based on [S:observables:S] beables other than position, at least they all have equations of motion that don’t need “measurement” as a primitive concept. So in that sense, yes, any of them would be a refreshing return to objective physics. It might even be an enlightening switch to tackle the mind-body problem from the perspective of, say, momentum-space Bohmian mechanics. That said, I’m not so sure that they are all of a piece. Howard Wiseman has a curious unpublished theorem which says that a particular property, which I’ll call “WVC”, is true only for the position basis (I’ll see if he wants to discuss it here). Also, if you took the momentum observables of your quantum theory to be the classical beables of your Bohmian theory, I am somewhat skeptical that they would still warrant the name of “momenta”. It would be time for a rethink of nomenclature from first principles, something that’s not necessary for position-basis Bohmian Objective-collapse theories like Ghirardi-Rimini-Weber also pass my basic objectivity test, as do sets of decoherent histories, although the formalism is getting a little mysterious there – e.g. you can have a coarse-graining in which observable X is only specified as taking some value in an interval (a,b); to interpret that as a beable, I think you’d have to regard X as interval-valued. One of the incoherencies which has been tolerated in the quantum age is the idea of objectively indeterminate properties: the particle has a position, just no particular position. It’s easy to joke about, but I do think the effect has been to retard progress. If you don’t even notice the incoherence, you’re not likely to do anything about it. 
Infinite-dimensional Hilbert spaces… That shouldn’t be a problem for you, all you have to do is dust off Blum, Shub, and Smale, right? OK, there’s lots of talk about finite-dimensional Hilbert spaces in quantum gravity, and I’m not sure how the holographic principle looks from a Bohmian perspective. As I too see the charm that a discrete fundamental physics would have, I suppose I would look for discrete approximations to continuum Bohmian gravity [algebraic-geometric hocus-pocus redacted here]… I’m getting a little too jolly here, let’s move on to (2), which I will answer with a question of my own: Do you think my objection to the notion of objectively indeterminate properties is just a “philosophical paradigm”? I think it’s more like a prerequisite of rational thought. Greg Kuperberg Says: Comment #65 April 7th, 2007 at 9:04 am Where might we be by now, if the dominant attitude in physics had always been: of course quantum mechanics is incomplete We would only have gotten less done. When I see important constructions like the Lindblad equation — which is a mixture of the Schrödinger equation and classical Brownian motion — I see the philosophical dissatisfaction with quantum probability fade. It has certainly faded among operator algebraists, for whom quantum probability is no more than non-commutative probability. Even Nielsen and Chuang does not convey the true message that classical and quantum probability satisfy virtually identical axioms; all you have to do is strike commutativity. If the community had stuck forever to the sentiment that there has to be something wrong with quantum probability, it would not have found these great ideas that undermine that sentiment. Or I should say, when the community stuck to that sentiment, it did not find most of these ideas. I concede that John Bell did not like quantum probability, and he found the Bell inequalities. But he was the last exception, and that was 40 years ago.
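To see the Lindblad mixture in its most stripped-down form, take pure dephasing of one qubit, dρ/dt = -i[H, ρ] + γ(ZρZ - ρ), and integrate it with a crude Euler step (a toy NumPy sketch; the particular H and γ below are illustrative choices, nothing more): the off-diagonal coherences decay like exp(-2γt), while the diagonal, classical probabilities are untouched. Schrödinger flow and Brownian-style damping sit in one equation.

```python
import numpy as np

# Pure-dephasing Lindblad equation for one qubit (hbar = 1, toy units):
#   drho/dt = -i [H, rho] + gamma * (Z rho Z - rho)
Z = np.diag([1.0, -1.0])
H = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])   # a simple diagonal Hamiltonian
gamma, dt, steps = 0.5, 1e-3, 10000             # integrate out to t = 10

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus).astype(complex)      # start in the superposition |+>

for _ in range(steps):
    drho = -1j * (H @ rho - rho @ H) + gamma * (Z @ rho @ Z - rho)
    rho = rho + dt * drho                       # crude Euler step

# Diagonal (classical) probabilities are untouched; the off-diagonal
# coherence has decayed by roughly exp(-2 * gamma * t) = exp(-10).
print(np.round(np.real(np.diag(rho)), 3))       # -> [0.5 0.5]
print(abs(rho[0, 1]))
```

The Hamiltonian term only rotates the phase of the coherence; it is the dissipator that carries the classical Brownian flavor.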
As for “Bohmian mechanics”, it’s not a separate theory at all, it’s just a way of explaining quantum mechanics. I don’t want to say that it is never useful, but it is a very conservative, dogmatic explanation. If you want to put it above all other explanations, it reminds me of Minkowski’s description of certain skeptics of relativity: It’s like hearing a symphony with cotton in your ears. Moshe Says: Comment #66 April 7th, 2007 at 9:18 am Scott, funny how your position regarding finite dimensional Hilbert spaces is completely orthogonal to mine, I guess it is what you get used to that matters. Almost all physical systems are described by infinite dimensional Hilbert spaces (by QFT rather than QM). Finite dimensional Hilbert spaces are an abstraction that works only in certain limits (e.g non-relativistic limit). I was always wondering if there is any substantial difference in interpretation issues, especially one invoking relativity, if the right framework is used. As for the holographic bounds, finite entropy does not imply by itself a finite Hilbert space, I don’t see any strong argument for the latter. However, the scaling with the area rather than the volume requires some extreme level of bulk non-locality, I could imagine that being important. Scott Says: Comment #67 April 7th, 2007 at 10:17 am Moshe, why does finite entropy not imply a finite Hilbert space dimension? (This probably goes back to our different definitions of the word “entropy.”) I thought (from reading Bousso’s papers and talking to Susskind) that even in string theory, the log of the Hilbert space dimension goes like the surface area over 4. I saw that as a great virtue of the theory. If it needed an infinite-dimensional Hilbert space, then I’d feel like string theory couldn’t possibly be correct. Moshe Says: Comment #68 April 7th, 2007 at 10:31 am Yeah, it does go back to that notion of thermodynamic vs. Von Neumann entropy. 
An ideal gas in some finite volume has an infinite-dimensional Hilbert space and a finite thermodynamic entropy (at any finite temperature). There is some hope that finite *maximum* entropy associated with a region of space implies finite Hilbert space associated with that region of space. Problem is I cannot figure out what it means to associate Hilbert space OR entropy with a finite region of space. Once gravity is involved the notion of a finite part of space is not even well-defined. I am not sure which argument you refer to; in AdS/CFT for example the entropy of any bulk configuration (say a black hole) is explained in terms of conventional quantum field theory (living on finite space). The number of degrees of freedom is finite, but the Hilbert space is infinite-dimensional. I’m having fun avoiding argument for a change, but let me just say that in my mind philosophical preconceptions are something to strongly avoid when searching for an unknown theory. I am wondering though why you have such a strong intuition that a finite Hilbert space is necessary, when there are all those well-understood examples of non-perturbative quantum gravity where things simply don’t work that way. James Graber Says: Comment #69 April 7th, 2007 at 11:00 am More from the peanut gallery: I want to ask about decoherence as a rival theory, rather than merely an interpretation. As I understand lecture 11, you treat decoherence as just another interpretation, not an alternate theory. Is that truly your view? Is that the general view? Would Zurek agree to that? On the other hand, I had hoped that decoherence could be viewed as an actual rival theory that went beyond QM. (Not a rival theory of type A, that disagrees with SQM at some point, but a rival theory of type B, that agrees with SQM at all points where both make predictions, and then goes on to make additional testable predictions.)
Anyhow, if decoherence really is only an interpretation, they sure seem to be going through a lot of mathematics to make themselves feel better. Of course, this idea that QM is not complete has a long history (smiley). I always thought that QED and QFT, not to mention string theory, put paid to the idea that there was nothing useful to be added to SQM, but most people don’t seem to interpret the question that way. Anyway, I would like to ask if you also hold that SQM is complete in some important sense. If so, I will need to ask for an explanation of what this means. Tying this back to decoherence as more than just an interpretation: As I understand it, decoherence (D) basically does the same thing as collapse (C). However, the hope would be that you could do a better job of predicting when and where D or C will occur by observing the environment. Perhaps you could even control or engineer the environment to delay or accelerate the D/C in order to benefit your quantum computer? (Maybe this is what D-Wave needs to do!) Or does decoherence merely consist of solving the three-body problem (system, apparatus, environment) in SQM? David Speyer, I really like your geometric interpretations of Bell’s theorem. They give me new clues to chew on. “Mixed states let you totally recover locality.” This is totally new to me; could you please explain further, or point me to a preexisting explanation? Greg Kuperberg Says: Comment #70 April 7th, 2007 at 11:11 am Scott: Technically speaking, infinite dimensions need not imply infinite entropy. To understand what is going on, it helps to brush up on the spectral theorem for infinite-dimensional Hilbert spaces. (Let’s say countable dimension for now. Yet bigger Hilbert spaces are pathological, at least for the fundamental laws of physics.)
The spectral theorem says that the spectrum of a self-adjoint operator, such as either a measurement or a density operator, has two parts: a point spectrum with honest eigenvectors, and a continuous spectrum with only approximate eigenvectors. A typical example of the latter is measuring position for a particle trapped in an interval. An eigenstate would be a delta function, which is not normalizable. (That is, <psi|psi> cannot be made finite.) Nonetheless, a density operator is always pure-point; in fact, the set of eigenvalues is always an absolutely convergent series. (Whose sum is 1, of course.) Depending on how you choose this series, the total entropy may be finite or infinite. It is also possible to choose a Hamiltonian so that any bounded-temperature state has finite entropy. However, string theorists could argue that this is splitting hairs, because if all finite-temperature states have finite entropy, then you could say that the Hilbert space of the universe is approximately finite-dimensional. They are certainly prepared to grasp the distinction between finite entropy and finitely many states, but my guess is that they don’t consider it important. In fact, my guess is that they view finite entropy as the more intrinsic notion. (At least, I would if I were one of them!) Moshe Says: Comment #71 April 7th, 2007 at 11:24 am Greg, in my experience there is no universal agreement on that among string theorists (at least those who are at all interested in the question); we are all trying to figure out the rules of the game. So far, one of the strongest points for me is that questions regarding gravitational entropy are always mapped via various dualities to similar questions in conventional physics (ideal gas and slight generalizations thereof). Systems with finite-dimensional Hilbert spaces are few and far between, and as far as I can remember now are not utilized to describe gravitational entropy.
I see no reason that utilizing only such systems (say spin systems) is an absolute necessity. It is entirely possible I am missing something obvious… Also, I am not sure the distinction between finite entropy and finite Hilbert space has to do with the continuous spectrum. A simple harmonic oscillator has finite entropy (for fixed temperature, or for a fixed energy) but an infinite-dimensional Hilbert space, and the spectrum is discrete. Greg Kuperberg Says: Comment #72 April 7th, 2007 at 11:58 am Also, I am not sure the distinction between finite entropy and finite Hilbert space has to do with the continuous spectrum. My only point is that a normalized density matrix cannot have a continuous spectrum; it is always diagonalizable, in fact trace class. Also, I cannot resist a bit of snark for Scott. If you appreciate the distinctions among continuous-spectrum operators, point-spectrum operators, and trace-class operators, as you have to do to understand finite vs infinite entropy, then you are doing operator algebras. (Gasp!) “Mixed states let you totally recover locality.” This is totally new to me; could you please explain further, or point me to a preexisting explanation? Well, there is more than one thing to say, but the first step is really simple. In classical probability, there are states (distributions), then there are joint states, then there are marginal states. Taking a marginal is a one-sided inverse to lifting from a state to a joint state. In order to have any theory of locality, you have to have both joint states and marginals. In the quantum case, there are pure states (vectors) and mixed states (density matrices). If you only learn about pure states, then you cannot have marginals, because in general the marginal of a pure state is a mixed state. Within the world of mixed states, everything is fine. The marginal of a mixed state is another mixed state. It is easy to see that mixed states satisfy the first property of locality.
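A two-qubit NumPy sketch of that property (toy code, using the maximally entangled state as an example): the marginal of an entangled pure state is the mixed state I/2, and nothing Alice does unitarily on her side changes Bob's marginal.

```python
import numpy as np

def marginal_B(rho_ab):
    """Bob's marginal state: trace out Alice's qubit (the "partial trace")."""
    return np.trace(rho_ab.reshape(2, 2, 2, 2), axis1=0, axis2=2)

# A maximally entangled pure joint state: (|00> + |11>)/sqrt(2).
phi = np.zeros(4)
phi[0] = phi[3] = 1.0 / np.sqrt(2)
rho_ab = np.outer(phi, phi)

# The marginal of this *pure* state is the *mixed* state I/2.
print(np.round(marginal_B(rho_ab), 3))

# Now Alice acts locally with an arbitrary unitary U_A (here a random one).
rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U_A, _ = np.linalg.qr(M)                     # random unitary via QR
U = np.kron(U_A, np.eye(2))                  # acts on Alice's side only
rho_after = U @ rho_ab @ U.conj().T

# Bob's marginal is exactly unchanged: no signalling from Alice's side.
print(np.allclose(marginal_B(rho_after), marginal_B(rho_ab)))  # -> True
```
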
Namely, if Alice and Bob share a joint state, then nothing that Alice can do at a distance can change Bob’s marginal state. Nielsen and Chuang explain these matters in ample detail. They do not, however, pound the table the way that I like to. For example, they conventionally refer to the marginal of a mixed state as a partial trace. That makes it sound like it’s just a formalism, and not a supporting leg of interpretation. Scott Says: Comment #73 April 7th, 2007 at 12:30 pm As I understand lecture 11, you treat decoherence as just another interpretation, not an alternate theory. Is that truly your view? Decoherence is not an interpretation — it’s a phenomenon predicted by quantum mechanics, which no one really disputes. It’s often presented as a recent discovery, but it actually goes back to Schrödinger, von Neumann, etc. It’s just working out the details that’s the hard part. Decoherence makes no predictions — none whatsoever — beyond those of standard quantum mechanics. Indeed, that’s precisely the point of it. The “philosophical” question that people debate is whether decoherence is enough, by itself, to banish the interpretive problems, or whether you also need something else. Is that the general view? Yes, what I said above is the general view. Would Zurek agree to that? Yes, I think so. James Graber Says: Comment #74 April 7th, 2007 at 12:42 pm Scott and Greg, Thanks very much. I’ll start on N&C. It looks like it will take a while, but it should be fun. Greg Kuperberg Says: Comment #75 April 7th, 2007 at 12:47 pm The “philosophical” question that people debate is whether decoherence is enough, by itself, to banish the interpretive problems, or whether you also need something else. There is a legitimate physics problem here, namely to describe the actual process by which quantum rules degenerate to a classical limit.
It is fair to say that it is far from completely understood; and since it isn’t, there is in principle room for a radical new extension of quantum probability. However, just because such an extension is conceivable, that doesn’t make it likely. In an idealized situation, it is commonplace to make a classical limit from a quantum system, with decoherence alone. A dephased qubit is a classical bit, case closed. So there is no positive evidence that decoherence is inadequate. This discussion is analogous to searches for a fifth fundamental force, besides gravity, weak, strong, and electromagnetism. Since no one understands how the world is composed of the first four, it’s reasonable to look for a fifth one. Just as long as you don’t wishfully suppose that it exists, because there is no positive evidence that it does. Scott Says: Comment #76 April 7th, 2007 at 12:49 pm Moshe, since the distinctions between entropy, maximum entropy, and log(dim(H)) can get extremely confusing, let me tell you the quantity that interests me in purely operational terms. I’m interested in the maximum number of bits that can in principle be stored in a given region, such that any one of those bits, of our choice, can later be reliably retrieved. Call that quantity N. So, if N can be finite even with an infinite-dimensional Hilbert space (which is something I’d have to understand better), then maybe I am OK with infinite-dimensional Hilbert spaces after all. On the other hand, if N can be infinite then I’m not OK, since then I’d worry about an infinite amount of computation being performed with a finite amount of resources. And I do have a preconception that that’s impossible, in the same way I have preconceptions that causality isn’t violated and the Second Law is true. That is, these are all things I’m willing to give up if forced to, but my price is exorbitantly high. 
Greg Kuperberg Says: Comment #77 April 7th, 2007 at 1:03 pm So, if N can be finite even with an infinite-dimensional Hilbert space (which is something I’d have to understand better), then maybe I am OK with infinite-dimensional Hilbert spaces after all. The point is that if the Hilbert space is finite-dimensional, then the information capacity of the system is limited algebraically. But you could also limit the capacity thermally or dynamically. To take Moshe’s example, the nth state of a simple harmonic oscillator has energy n+½. So to double the entropy of a state of the oscillator, you have to square its expected energy or its temperature. Anyone can imagine, and physicists can sometimes derive, reasons that heating a system to a googol degrees is unphysical. Part of the subtext of this is Strominger’s famous derivation of the Bekenstein-Hawking entropy of a black hole (well, certain idealized black holes) versus the unpersuasive calculations in loop quantum gravity. As I understand it, Strominger did a dynamical calculation of entropy, whereas the LQG paper that I saw imposed a capacity limit algebraically. I think that it misses the point to replace dynamical solutions with algebraic fiat, and it’s something that I have seen elsewhere in anti-string-theory work. John Sidles Says: Comment #78 April 7th, 2007 at 5:26 pm Scott says: Decoherence is not an interpretation … it’s just working out the details that’s the hard part … … which is why few textbooks on quantum mechanics start out with a discussion of measurement as a decoherent process — Chapter 2 of Nielsen and Chuang is a prominent exception. Howard Carmichael has a new textbook coming out that will likely do justice to this topic. It’s not so easy to answer the question “what is it that avalanche photodiodes measure, exactly?” Quantum measurement is IMHO such an inexhaustibly rich subject that students who begin by studying measurement are at risk of never studying quantum dynamics at all. Doh!
We mustn’t risk that! mitchell porter Says: Comment #79 April 7th, 2007 at 7:19 pm Greg, as an explanation, “noncommutative probability” is up there with Molière’s “dormitive virtue”. It is one of the things that needs explaining! If a theory features fundamental probabilities a la Kolmogorov, I can interpret them as counterfactual relative frequencies. Such an interpretation is not possible for complex-valued probability amplitudes or Wigner’s quasiprobabilities, and I doubt that noncommutative probability spaces provide an explanation either. Classical probability theory has at least one natural ontological interpretation; its nonclassical formal generalizations do not. With Bohmian mechanics, it should be possible to explain where noncommutative phenomenological probabilities come from, precisely because it has a mechanism. (And the same goes for why quantum computing is more powerful than classical computing, by the way.) Greg Kuperberg Says: Comment #80 April 7th, 2007 at 7:40 pm If a theory features fundamental probabilities a la Kolmogorov, I can interpret them as counterfactual relative frequencies. Such an interpretation is not possible for complex-valued probability amplitudes or Wigner’s quasiprobabilities But it is possible with density matrices, which is the way that things are done in standard noncommutative probability. That is, a density matrix is the correct summary of all of the probabilities. It doesn’t “explain” in the sense of revealing underlying determinism, since that isn’t really possible. Basically words like “explain” and “ontological” are loaded. You are using them as a request for determinism. But there is a theorem that there isn’t determinism in any natural form. Bohm’s view was, even if we can’t have natural determinism, let’s describe it as an artificial kind of determinism. So okay, you can do that, but it isn’t a different theory, just a different explanation, and it’s only occasionally useful.
Scott Says: Comment #81 April 7th, 2007 at 8:00 pm Mitchell: You might not like saying that a particle is objectively in superposition, but I don’t see how this view is logically incoherent, or how rejecting it is a “prerequisite for rational thought.” In debates about quantum foundations, saying “anyone who disagrees with me is irrational” is sort of the nuclear option… mitchell porter Says: Comment #82 April 7th, 2007 at 8:06 pm I’m not requesting determinism, I’m requesting something much more basic, namely that theories should be clear about what it is that they allege to exist. You’ve just said, in effect, that the universal density matrix is the bottom line for you, and that it can be read as a big tabulation of classical probabilities, over pure states I guess. That would mean that the actual state of things is made up of those pure states. But which pure states? Unless you have an answer to that, you only have a phenomenological theory. Scott Says: Comment #83 April 7th, 2007 at 8:13 pm Unless you have an answer to that, you only have a phenomenological theory. Ah, now we come to the heart of the matter. Modern physics was born when Galileo suggested that, instead of seeking the “true nature of motion,” we should just try to describe it phenomenologically. And it’s been a pretty successful strategy for the last 400 years, wouldn’t you say? mitchell porter Says: Comment #84 April 7th, 2007 at 8:13 pm Scott, I don’t mind if someone says that state vectors are the actual states. I just demand clarity regarding what I am being asked to entertain as a description of reality – classical beables, state vectors, both, neither. Greg Kuperberg Says: Comment #85 April 7th, 2007 at 8:32 pm No, mitchell, I’m saying that density matrices are the actual states. mitchell porter Says: Comment #86 April 7th, 2007 at 9:01 pm Greg: you just made me happy! Thank you for saying what your candidate for actuality is. Can we pursue this a little further, in several directions? 1. 
Cosmological dynamics. You mentioned Lindblad equations. Do you have any opinion as to whether the evolution of the universal density matrix is unitary? (Or it might even be stationary, if you’re an H_universe = 0 guy.)
2. If you say that ρX is the actual state of entity X, can I take that statement at face value? It won’t turn out to actually be a probability distribution or a dispositional description or some other less-than-ultimate characterization of X?
3. States of subsystems. At what point do I get back the phenomenal world, with its particular experimental outcomes? Is this a multiverse theory?

Moshe Says: Comment #87 April 7th, 2007 at 10:37 pm
Scott, indeed I would not generally relate the dimension of the Hilbert space and the number N you mention. Those two quantities have a direct relation only if your system is physically made of bits, localized two-state systems. For other systems, for example an ideal gas, I take it that N is the number of yes/no questions that completely specify the microstate, which I would probably just call the entropy. I agree that N has to be finite. Incidentally, just to repeat the above, the main conceptual difficulty in my mind is defining what you mean by associating entropy, or computation, or anything else, with a finite part of space. Defining such regions only makes sense using a fixed metric, and therefore is ill-defined in quantum gravity. The known cases where entropy is counted and comes out right always count the total entropy associated with the whole spacetime.

John Sidles Says: Comment #88 April 8th, 2007 at 5:03 am
Mitchell Porter: I just demand clarity regarding what I am being asked to entertain as a description of reality – classical beables, state vectors, both, neither.
Mitchell, suppose a mathematics student asked the seemingly reasonable question: “I just demand clarity regarding what I am being asked to entertain as a description of mathematical reality – Peano integers, Cantor reals, ZFC sets, Godel propositions, … all, none.” What is the best way to respond to this student’s demand? Isn’t there plenty of evidence that physical reality is at least as conceptually flexible as mathematical reality? Even though we might not wish it to be so? Wittgenstein was among the very few modern philosophers to achieve a transition, in his personal thinking, from the “early Wittgenstein”‘s insistence upon logical clarity to the “later Wittgenstein”‘s embrace of conceptual flexibility. This achievement is admired particularly because it is uncommon … most people retain for life the ontology that they embrace when young. So it helps to pick a good one! And there is IMHO no one right answer. Robust ecosystems require many species. mitchell porter Says: Comment #89 April 8th, 2007 at 6:22 am John, the question about mathematics is just as legitimate. Do Peano integers exist? Do Cantor reals exist? The answer is yes or no. And if they do exist, then they have some particular relationship to the rest of reality. Conceptual flexibility is a human attribute, not an attribute of physical or mathematical reality per se, necessitated by the radically limited nature of what we actually know . Since we truly know so little, it is a potentially useful thing to be able to entertain diverse possibilities. But don’t miss the tree for the forest. The point of the diversity, as far as I am concerned, is to give us a chance at eventually knowing the truth. That cause is not helped by being vague about whether there actually are answers. To be is to be something, even if we don’t or can’t know what. John Sidles Says: Comment #90 April 8th, 2007 at 6:53 am Mitchell Porter says: Do Peano integers exist? Do Cantor reals exist? The answer is yes or no. 
Respectfully, many people would disagree. This same point was poignantly expressed in a recent issue of the Journal of the History of Philosophy: “Philosophical views, one used to believe at least, were held for reasons and because of the results of arguments, but these arguments and reasons do not play a central role in the historical narrative.”
author = {B. Look},
title = {“{R}adical {E}nlightenment: {P}hilosophy and the {M}aking of {M}odernity, 1650-1750” by {J}onathan {I}. {I}srael},
journal = {Journal of the History of Philosophy},
year = 2002,
volume = 40,
issue = {3},
pages = {399–400},
jasnote = {As someone trained in a philosophy department to work on the history of philosophy, I felt uneasy in one respect with this book. While Israel emphasizes the battles between philosophies and ideas, he does not concern himself so much with the process of doing philosophy. That is, if there is a failing in this book, it is that we are presented with descriptions of philosophical views without always being given an adequate account of why such views were held by individual thinkers or how the theses of the radical Enlightenment, say, are related to each other. (For example, how did radicals see the relation between naturalism and republicanism?) Philosophical views, one used to believe at least, were held for reasons and because of the results of arguments, but these arguments and reasons do not play a central role in the historical narrative. To be fair, had Israel attempted the kind of detailed philosophical account of the arguments of particular works, his book would have been a genuinely mammoth and nearly unreadable tome.
Yet it remains perhaps ironic that, in leaving open the possibility that Enlightenment views were advanced for self-interested motives, for the acquisition of political power, and not out of a commitment to “Truth,” a historian who seems so sympathetic to the ideals of the Enlightenment has produced a work that, at first glance, could be used by the new opponents of the Enlightenment and its legacy: post-modernists. Be that as it may, there is no denying that this book is a very important addition to the field and will doubtless alter the way we view the intellectual history of Europe.},} Greg Kuperberg Says: Comment #91 April 8th, 2007 at 7:27 am Do you have any opinion as to whether the evolution of the universal density matrix is unitary? There are dilation theorems to the effect that if the evolution of the universe is non-unitary (a Lindblad equation, say), then that model lifts to unitary evolution in a bigger universe. This is a generalization of state purification: Any mixed state is a marginal of a pure state on a bigger system. So you might as well say that the state of the entire universe is pure and evolves unitarily. If you say that ρX is the actual state of entity X, can I take that statement at face value? Well, that is what I do! It won’t turn out to actually be a probability distribution or a dispositional description or some other less-than-ultimate characterization of X? A density matrix is the non-commutative generalization of a probability distribution. But see, I am a Bayesian: I believe that the actual state of a classical object is a probability distribution. Even before I did quantum information theory, I believed that probability distributions are ultimate. Since you didn’t demand determinism, I’m allowed to say this. At what point do I get back the phenomenal world, with its particular experimental outcomes? Immediately. 
If ρ is the state of a system and x is a quantum Boolean (that is, a self-adjoint projection), then Tr(ρx) is the probability that x is true. Actually, I am attracted to the operator algebraist’s notation, in which ρ is a dual vector on the algebra of quantum random variables. So they would write ρ(x). Is this a multiverse theory? It is not a multiverse description. All of these descriptions — Bohm, multiverse, mixed-Copenhagen — describe the same scientific theory. Greg Kuperberg Says: Comment #92 April 8th, 2007 at 7:47 am mitchell: Okay, one more comment, so that I won’t sound evasive. But, in order to issue this clarification, I am going to pretend to be a philosopher for the moment. Again, re my answer to Scott, I don’t really believe in philosophical problems, but I am fine with philosophical implications. Just like a probability distribution, a density matrix is an epistemological object. (I.e., it is a description of what an observer might know.) In order to be a Bayesian, and especially in order to be a quantum Bayesian like me, you have to accept ontological relativism. (I.e., what actually “is” is observer-dependent.) The ontological absolute in my working understanding of quantum probability is that different epistemological stories, by observers who are in a position to confer, are always consistent. In particular, people generally are in a position to confer, so we can assemble an ontology which is absolute — for us people. Now to step back to science and away from philosophical hot air, what I am saying is that all of human society is essentially one classical physical system: all human perceptions are commuting random variables. So there exists a common density matrix or even a probability distribution to describe what all people see. The truth is relative in principle, but not in practice for different people. 
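Kuperberg’s rule in comment #91, that Tr(ρx) gives the probability that the quantum Boolean x is true, is easy to check numerically. A small sketch follows; the particular state and projector are made up for illustration and are not from the thread.

```python
import numpy as np

# A qubit density matrix: a 75/25 mixture of |0><0| and |+><+| (illustrative)
ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = 0.75 * np.outer(ket0, ket0.conj()) + 0.25 * np.outer(plus, plus.conj())

# A "quantum Boolean": the self-adjoint projection onto |0>
x = np.outer(ket0, ket0.conj())

# Sanity checks: rho is a valid state, x is a self-adjoint projection
assert np.allclose(np.trace(rho), 1) and np.allclose(rho, rho.conj().T)
assert np.allclose(x @ x, x) and np.allclose(x, x.conj().T)

# Tr(rho x): probability that the proposition "the qubit is |0>" is true
p = np.trace(rho @ x).real
print(p)  # 0.75*1 + 0.25*0.5 = 0.875
```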
mitchell porter Says: Comment #93 April 8th, 2007 at 8:43 am Greg: I am a Bayesian: I believe that the actual state of a classical object is a probability distribution. Is this just a way of saying that you think probabilistically about possibilities, or are you really asserting (for example) that when you think a coin has a 50-50 chance of showing heads or tails, you think that its actual state is neither heads up nor tails up, but the probability function itself? In other words: the state of the object is some element (you don’t know which) of the set of possibilities over which the probability distribution is defined. If the distribution is the state of anything, it is the state of your belief about the object, not the state of the object itself. If I do assume that this is what you meant, then I am back to asking what set of possibilities your density matrix is a distribution over, because the actuality will be some element of that set. It looks like the answer ought to be: the possibilities represented by those projection operators. But then we are also back to the preferred basis problem. I would say that your theoretical work is not done until you specify some particular set of mutually orthogonal projectors, as the final statement of what might be real according to your theory. I don’t care if it’s position at one moment and momentum the next, or even if you propose an instantaneous probability distribution over sets of projectors, regarding which observables get to be actual. But noncommutative probability simply leaves the theory ungrounded and therefore unfinished. serafino Says: Comment #94 April 8th, 2007 at 8:53 am Could gambling solve the persistent problem of so many interpretations? The question of whether the waves are something “real” or a function to describe and predict phenomena in a convenient way is a matter of taste. 
I personally like to regard a probability wave, even in 3N-dimensional space, as a real thing, certainly as more than a tool for mathematical calculations … Quite generally, how could we rely on probability predictions if by this notion we do not refer to something real and objective? -M. Born, Dover publ., 1964, Natural Philosophy of Cause and Chance, p. 107 mitchell porter Says: Comment #95 April 8th, 2007 at 9:01 am Greg, I posted that comment before I saw your addendum, but I don’t think it changes anything, except to confirm that you were talking about states of belief. But eventually you have to talk about the possibilities and alleged actualities to which these states of belief refer, and that is what I am talking about. John, it will take more than a reminder of human intellectual frailty to get me to give up the law of the excluded middle. John Sidles Says: Comment #96 April 8th, 2007 at 9:24 am Mitchell Porter says: “It will take more than a reminder of human intellectual frailty to get me to give up the law of the excluded middle.” As Douglas Adams or maybe Terry Pratchett might say: “It’s not really a law, it’s really more of a suggestion or guideline.” I’d be grateful for a citation by either! Greg Kuperberg Says: Comment #97 April 8th, 2007 at 9:29 am Is this just a way of saying that you think probabilistically about possibilities, or are you really asserting (for example) that when you think a coin has a 50-50 chance of showing heads or tails, you think that its actual state is neither heads up nor tails up, but the probability function itself? It is certainly the former, but it’s not just that. The latter is roughly correct, except that I have reservations about the word “actual”, because it connotes an ontological absolute that cannot be entirely true. It is a description of a state of knowledge, which for some purposes is as actual as it gets. 
If I do assume that this is what you meant, then I am back to asking what set of possibilities your density matrix is a distribution over, Your question presupposes that a density matrix is a kind of probability distribution, i.e., a special case of a probability distribution. But it isn’t, it’s a generalization of a probability distribution. If I were to tell you that a citrus fruit is like an orange, but more general, then you simply wouldn’t understand what I am saying if you clung to the idea that an orange is as general as it gets, and that a citrus fruit must therefore be a special case. You could keep asking, “How can an orange be yellow, unless it is not yet ripe?” Your question, “How can a density matrix not be a distribution over THE set of states, unless it is not complete?” is very similar. Remember, this thread is you asking me how I view quantum probability. When I say “non-commutative probability”, I really mean “not necessarily commutative probability”. Commutative probability is a special case in which a density matrix just becomes a probability distribution over a set of possibilities. But in the non-commutative case, a density matrix is a more general object that isn’t, or more precisely isn’t uniquely, a distribution over a set of possibilities. As you seem to have studied the matter, yes, the set of orthogonal projectors isn’t unique. Nonetheless, the density matrix is the epistemological reality. As a rule, mathematical models are open to generalization. If you can’t accept the idea of generalizing probability distributions to something else, then you can’t understand what I’m saying. Greg Kuperberg Says: Comment #98 April 8th, 2007 at 9:39 am But eventually you have to talk about the possibilities and alleged actualities to which these states of belief refer, and that is what I am talking about. Well you say belief, while I say knowledge. The distinction is important, because knowledge is that part of belief which is reliable. 
Otherwise your assertion is at least on the same terms as my remark, but I don’t agree with it. The knowledge of two different thinkers does not have to be consistent unless they can confer. When it isn’t consistent, there does not exist a common actuality. But by the rules of quantum probability, when two different thinkers can confer, then their knowledge is always consistent. This is a counterintuitive conclusion, because in human society, people can always confer. There is never a need for one person to assume that another is in a quantum superposition. But in a society of qubits, that is exactly how things would stand. If there were sentient quantum computers — conceivably they will exist one day — I’m sure that they wouldn’t philosophize about the incompleteness of quantum probability. Instead, they would reject absolute ontology, which is what it sounds like you are digging for.

John Sidles Says: Comment #99 April 8th, 2007 at 10:59 am
Greg and Mark, your dialog seems to be converging onto a one-page 1963 article by Edmund Gottier titled Is Justified True Belief Knowledge? Gottier’s article actually settled a philosophical question … an event that is almost unique in the annals of philosophy. The answer, by the way, is “no”. But as Hemingway’s Jake Barnes says, “Isn’t it pretty to think so?”

Carl Says: Comment #100 April 8th, 2007 at 1:15 pm
I’m familiar with the Gottier examples, and to me it seems like all they point out is that if we stick with the familiar formulation that “knowledge is a true, justified belief” then we have to be careful what sorts of things we accept as “justified.” His examples are all ones in which the purported knowledge is true and has a justification, but the justification has no connection to the truth of the proposition.
I think that there’s no reason we can’t keep the traditional definition of knowledge as long as we are sure to specify that “justified” means justified in a relevant sense and by a process whose general application will also reliably produce truth. Carl Says: Comment #101 April 8th, 2007 at 1:18 pm (Oops, I copied you on the spelling, but thought, “Isn’t it pronounced ‘Gettier’?” Should have gone with the gut: Gettier, not Gottier.) John Sidles Says: Comment #102 April 8th, 2007 at 1:45 pm Hey, I’m the one who got the spelling of “Gettier” wrong! It was only 35 years ago that I first read that article. Your phrase “`justified’ means justified in a relevant sense” is a surely a darn tricky standard to apply, whether the context is math, science, engineering, politics, … or even marriage, as Dave Bacon is no doubt finding out! Lance Fortnow’s thread on the Continuum Hypothesis illustrates how subtle these issues can be, even when physics is (seemingly) not involved. Scott Oatley Says: Comment #103 April 8th, 2007 at 1:46 pm I very recently discovered your excellent blog. I know very little about quantum computing, or the related advanced physics, and I’m looking forward to using your blog as a learning resource. One of my personal interests is to understand the mathematics that underlies such things. I just read lecture 1 in this series, and I’m wondering what prerequisite math subjects would you recommend to help me get started with this quantum stuff, and to follow the lectures with a bit more understanding? I’ve got an engineering background and made it to introductory linear algebra years ago. I enjoy studying and reading about math as a hobby now, so no matter how daunting the task, I won’t run away with my tail between my legs! 
Scott Says: Comment #104 April 8th, 2007 at 3:13 pm Scott, the nice thing about the lectures being online is that you can start reading them now, and then if there’s anything you don’t understand, look it up on Wikipedia or Mathworld or some other online resource. By and large, all the math I use is extremely elementary — I can’t think of anything you’d need beyond complex numbers and linear algebra, maybe a wee bit of programming and discrete math. That’s not to say the ideas aren’t hard, but hopefully they’re hard in a self-contained sort of way. mitchell porter Says: Comment #105 April 10th, 2007 at 10:11 pm A few days pass and the blog caravan moves on. I suspect I’ve lost my chance of changing anyone’s mind. Nonetheless: Greg, I think I understand well enough what you are saying but I reject the philosophy of it as pernicious and retrograde. Specification of an “absolute ontology” is a minimal standard for any theory which has pretensions to finality. The point is not to be able to declare dogmatically that the theory is correct, the point is to have an exact specification of the way the world might be. The focus on epistemic states rather than possible states of the world is pernicious because it allows this point to be obscured. I do not understand why theoretical physicists, who have more reason than anyone to think that the world might be completely knowable (in outline if not in all its particulars), would settle for such a thing, but noncommutative probability combined with the epistemic focus offers a way to do it. As a generalization of the concept of probability, the significant thing about noncommutative probability is precisely that it abandons the view that the probabilities are associated with a determinate (not determinist) set of possibilities. If one’s focus is ontological, this is immediately perceived as a problem, because you want to know what the actual states of the world are supposed to be in a given theory. 
If states are epistemic, this is apparently not so clear, which is why I wish Bohm had prevailed over Bohr. John, I agree with Carl regarding Gettier; he did not falsify the definition of knowledge as justified true belief; he simply exposed (though everyone should already have known this, at least since Hume) that many beliefs which we are accustomed to thinking of as justified are not so, strictly speaking. From the perspective of philosophical skepticism, there is very little knowledge, and it seems that it’s the difficulty of justification which mostly makes it so – there are always too many other empirically indistinguishable possibilities. And Scott A., I owe some sort of response to your mention of Galileo. Galileo may have expelled a host of baseless apriorisms from the scientific method, but I have to wonder whether he would endorse a theoretical approach which apriori makes a virtue of incomplete descriptions. Greg, you also say: The knowledge of two different thinkers does not have to be consistent unless they can confer. If it isn’t even consistent, then it was never knowledge! – unless you think that the world can be self-inconsistent. The priors, guesses about the state of the world, or epistemic strategies of rational agents can be inconsistent-until-conferral as you describe, without impugning their rationality, but those things are not knowledge. As for sentient quantum computers rejecting absolute ontology on the grounds that they must be able to conceive of their interlocutors as being in superposed states – so long as something like the Bohmian option exists, they will have no need to reject absolute ontology. Again, I want to distinguish between epistemic uncertainty and ontological indeterminacy. A rationality based on quantum priors is not a problem. But an anti-ontology is. Why is many-worlds winning the foundations debate? « Quantum Quandaries Says: Comment #106 April 11th, 2007 at 8:47 am [...] 
Why is many-worlds winning the foundations debate? April 11, 2007 at 11:47 am | In Quantum, Philosophy, Uncategorized | Almost every time the foundations of quantum theory are mentioned in another science blog, the comments contain a lot of debate about many-worlds. I find it kind of depressing the extent to which many people are happy to jump on board with this interpretation without asking too many questions. In fact, it is almost as depressing as the fact that Copenhagen has been the dominant interpretation for so long, despite the fact that most of Bohr’s writings on the subject are pretty much incoherent. [...] Greg Kuperberg Says: Comment #107 April 11th, 2007 at 8:56 am Greg, I think I understand well enough what you are saying but I reject the philosophy of it as pernicious and retrograde. You are free to do that. Although I said all along, I’m not interested in philosophy for its own sake. My real work is mathematics (with elements of physics and computer science), and my philosophy is simply the way that I explain the ideas to myself and to other people. My defense of my philosophy is “it works for me”, that is, I find it helpful for my own research. If I can at least correctly explain my viewpoint to you, then that’s good enough. The point is not to be able to declare dogmatically that the theory is correct, the point is to have an exact specification of the way the world might be. On the contrary, in my view, what has an exact specification is the rules that the world follows, and not necessarily its state. As a generalization of the concept of probability, the significant thing about noncommutative probability is precisely that it abandons the view that the probabilities are associated with a determinate (not determinist) set of possibilities. Yes it does. That’s why I love it. 
The priors, guesses about the state of the world, or epistemic strategies of rational agents can be inconsistent-until-conferral as you describe, without impugning their rationality, but those things are not knowledge. I concede that there is a legitimate wrinkle about what one ever might have meant by knowledge. Beliefs that are rational, predictive, and reliable are at least hard to distinguish from absolute knowledge. Maybe I would concede that absolute knowledge does not exist, although as an expedient, I’m happy to call “RBR beliefs” knowledge. Again, it’s important to remember that if you have a whole society of rational agents whose perceptions all commute, then there will be an appearance of absolute knowledge. So long as something like the Bohmian option exists Again, the Bohmian option is no different as a predictive theory, it’s only a different pedagogy. It does serve to show that the actual theory of quantum probability might well be complete. After that, the only question is which pedagogy you like best.
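A concrete footnote to Kuperberg’s claim that a density matrix is not uniquely a distribution over a set of possibilities: two different 50/50 ensembles yield the same density matrix, and that same matrix is also the marginal of a pure entangled state, as in his state-purification remark in comment #91. A numpy sketch, illustrative and not from the thread:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def proj(v):
    """Projector |v><v| onto a (normalized) state vector v."""
    return np.outer(v, v.conj())

# Two different 50/50 ensembles...
rho_z = 0.5 * proj(ket0) + 0.5 * proj(ket1)   # "it's |0> or |1>"
rho_x = 0.5 * proj(plus) + 0.5 * proj(minus)  # "it's |+> or |->"

# ...give exactly the same density matrix, hence the same predictions
assert np.allclose(rho_z, rho_x)

# Purification: the mixed state is the marginal of a pure entangled state
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_pair = proj(bell)
# partial trace over the second qubit
rho_marginal = np.trace(rho_pair.reshape(2, 2, 2, 2), axis1=1, axis2=3)
assert np.allclose(rho_marginal, rho_z)
print("same density matrix; marginal of Bell state =\n", rho_marginal.round(3))
```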
{"url":"http://www.scottaaronson.com/blog/?p=218","timestamp":"2014-04-20T00:44:00Z","content_type":null,"content_length":"138011","record_id":"<urn:uuid:d7b98240-2bcd-4d87-be7d-9cd6a65b0871>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Need 3 Homework problems

November 7th 2009, 09:26 PM — Johnny Walker Black
Verify these: [i need help getting started and other help you can throw at me]
1) sin(x + y)cos y - cos (x + y)sin y = sin x
2) cot x = (cos 3x + cos x) / (sin 3x - sin x)
3) tan(B/2) = sec B / (sec B csc B + csc B)

November 7th 2009, 09:45 PM
Look up (and learn) the expansion rules for sin(a+b), cos(a+b), tan(a+b), sin(a-b), sin 2a, etc. And apply them here. Have a go or you'll never never know.

November 8th 2009, 06:50 AM
Hello Johnny Walker Black
Here's the method for each one:
1) Use the identities below to expand the LHS:
$\sin(x+y) = \sin x \cos y +\cos x \sin y$
$\cos(x+y) = \cos x \cos y - \sin x \sin y$
Simplify the result. Then take out a common factor of $\sin x$. Then use:
$\cos^2y + \sin^2y = 1$
and you're done.
2) Use the identities:
$\cos 3x = 4\cos^3x - 3\cos x$
$\sin 3x = 3\sin x - 4\sin^3 x$
Simplify the result, and factorise. Then use:
$\cos^2x = 1 - \sin^2 x$
in the numerator. Simplify; cancel, and you're there.
3) Get rid of $\sec B$ and $\csc B$ (horrible things!) by multiplying top-and-bottom of the fraction by $\sin B\cos B$, using the fact that:
$\csc B\sin B = 1$ and $\sec B \cos B = 1$
Then use:
$\sin B = 2 \sin\tfrac12B\cos\tfrac12B$
$\cos B = 2\cos^2\tfrac12B - 1$
Simplify. Then use $\tan\tfrac12B = \frac{\sin\tfrac12B}{\cos\tfrac12B}$, and you're there.
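Before attempting the algebra, the three identities can also be confirmed numerically. A quick Python check (not part of the original thread; the sample range is chosen to avoid zeros of the denominators, e.g. x = pi/4 in identity 2):

```python
import math
import random

random.seed(0)

def close(a, b, tol=1e-9):
    return abs(a - b) < tol

for _ in range(50):
    x = random.uniform(0.1, 0.7)
    y = random.uniform(0.1, 0.7)
    B = random.uniform(0.1, 0.7)

    # 1) sin(x+y)cos y - cos(x+y)sin y = sin x
    assert close(math.sin(x + y) * math.cos(y) - math.cos(x + y) * math.sin(y),
                 math.sin(x))

    # 2) cot x = (cos 3x + cos x)/(sin 3x - sin x)
    assert close(math.cos(x) / math.sin(x),
                 (math.cos(3 * x) + math.cos(x)) / (math.sin(3 * x) - math.sin(x)))

    # 3) tan(B/2) = sec B/(sec B csc B + csc B)
    sec_B, csc_B = 1 / math.cos(B), 1 / math.sin(B)
    assert close(math.tan(B / 2), sec_B / (sec_B * csc_B + csc_B))

print("all three identities hold at 50 random points")
```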
{"url":"http://mathhelpforum.com/trigonometry/113112-need-3-homework-problems-print.html","timestamp":"2014-04-19T11:35:57Z","content_type":null,"content_length":"8863","record_id":"<urn:uuid:14893fa8-ab3f-4ae2-ab3c-140aef77a719>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
a particle is moving along a curve whose equation is: `(xy^3)/(1+y^2) = 8/5` - Homework Help - eNotes.com

A particle is moving along a curve whose equation is `(xy^3)/(1+y^2) = 8/5`. Assume the x-coordinate is increasing at a rate of 6 units/sec when the particle is at the point (1,2).
a) At what rate is the y-coordinate of the point changing at that instant?
b) Is the particle rising or falling at that instant?

`(xy^3)/(1+y^2) = 8/5`
`5xy^3 = 8(1+y^2)`
Differentiate both sides with respect to time t:
`5(x*3y^2*y' + y^3*x') = 8*2y*y' ---(1)`
It is given that at (1,2) the rate of increase of x is 6 units per second:
`x = 1, y = 2, x' = 6`
Applying these values in (1):
`5(1*3*2^2*y' + 2^3*6) = 8*2*2*y'`
`60y' + 240 = 32y'`
Dividing through by 4:
`15y' + 60 = 8y'`
`7y' = -60`
`y' = -60/7`
So the y-coordinate is changing at a rate of `-60/7` units per second; that is, it is decreasing at `60/7` (about 8.6) units per second. Since y' is negative, the particle is falling at that instant.
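The posted answer can be double-checked by implicit differentiation with sympy. A verification sketch, not part of the original answer:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)
y = sp.Function('y')(t)

# The curve, cleared of the denominator: 5*x*y^3 - 8*(1 + y^2) = 0
F = 5 * x * y**3 - 8 * (1 + y**2)

# Differentiate with respect to t, then solve for dy/dt
dxdt, dydt = sp.symbols('dxdt dydt')
dF = sp.diff(F, t).subs({sp.Derivative(x, t): dxdt, sp.Derivative(y, t): dydt})
sol = sp.solve(sp.Eq(dF, 0), dydt)[0]

# Evaluate at the point (1, 2) with dx/dt = 6
val = sol.subs({x: 1, y: 2, dxdt: 6})
print(val)  # -60/7, so the y-coordinate is falling at 60/7 units/sec
```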
{"url":"http://www.enotes.com/homework-help/particle-moving-along-curve-whose-equation-xy-3-1-442886","timestamp":"2014-04-18T04:12:21Z","content_type":null,"content_length":"25927","record_id":"<urn:uuid:d9000d70-2a71-4c37-960b-8dffbb3ebef5>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
CBSE Maths Class X Geometrical Construction CBSE Guess Visitors Online: 1840 | Sunday 20th April 2014 CBSE Maths eBooks CBSE Guess > eBooks > Class X > Maths Part II by Mr. M. P. Keshari Chapter 11: Geometrical Construction Construction 6. Construct a triangle ABC in which BC = 6cm, 1. BC = 6cm is drawn and 2. Perpendicular bisector RQ of BC is drawn which cut BC at M. and intersect BE at O. 3. Taking O as centre and OB as radius, a circle is drawn. 4. ML = 4.5cm is cut from RQ. 5. A line XY, parallel to BC is drawn through L to intersect the circle at A and A'. AB, AC, A’B and A’C are joined. ABC and A’BC are the required triangle Medium AM = A'M = 5.5cm (app.) Construction 7. Construct a triangle ABC in which BC = 5cm, 1. BC = 5cm is drawn and is constructed downwards. 2. BX is drawn perpendicular to BY. 3. Q is drawn perpendicular bisector if BC intersecting BX at O and cutting BC at E. 4. Taking O as a centre and OB as radius, a circle is drawn. 5. Taking E as centre and radius equal to 3.5cm, arc is drawn to cut the circle at A. 6. AC and AB are joined 7. AD is drawn perpendicular to BC from A to cut BC at D. 8. By measuring we find that AD = 3cm. Construction 8. Construction a 1. A ray QX is drawn making any angle with QR and opposite to P. 2. Starting from Q, seven equal line segments QQ[1], Q[1]R[2], Q[2]Q[3], Q[3]Q[4], Q[4]Q[5], Q[5]Q[6], Q[6]Q[7] are cut of from QX. 3. RQ[7] is joined and a line CQ[6] is drawn parallel to RQ[4] to intersect QR at C. 4. Line CA is drawn parallel to PR. ABC is the required triangle. Construction 9. Construct a triangle ABC in which BC = 6cm, 1. A line segment BC of length 6cm is drawn. 2. At B, 3. At B, 4. Perpendicular bisector of BC is drawn which intersect BY at O and BC at D. 5. Taking O as a center and OB as a radius a circle passing through B and C is drawn. 6. Taking D as a centre and radius 5cm an arc is drawn to intersect the circle at A. 7. AB and AC are joined. The required triangle is ABC. 8. 
Taking C as centre and CD as radius, an arc is drawn to intersect BC produced at P such that BP = 3/2 BC.
9. Through P, PQ is drawn parallel to CA, meeting BA produced at Q.
10. BPQ is the required triangle similar to triangle BCA.

Construction 10. Construct a quadrilateral ABCD in which AB = 2.5cm, BC = 3.5cm, AC = 4.2cm, CD = 3.5cm and AD = 2.5cm. Construct another quadrilateral AB'C'D' with diagonal AC' = 6.3cm such that it is similar to quadrilateral ABCD.
1. A line segment AC = 4.2cm is drawn.
2. With A as centre and radius 2.5cm, two arcs, one above AC and one below AC, are drawn.
3. With C as centre and radius 3.5cm, two arcs are drawn intersecting the previous arcs at B and D.
4. AB, AD, BC and CD are joined. ABCD is the required quadrilateral.
5. Taking A as centre and radius 6.3cm, an arc is drawn to intersect AC produced at C'.
6. Through C', C'B' and C'D' are drawn parallel to CB and CD respectively. AB'C'D' is the required quadrilateral similar to ABCD.
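As a quick arithmetic check of Construction 10 (added for illustration; not part of the textbook), the similar quadrilateral's sides scale by the ratio of the given diagonals, AC'/AC = 6.3/4.2 = 1.5:

```python
# Scale factor for Construction 10: ratio of the given diagonals
scale = 6.3 / 4.2          # AC' / AC = 1.5

# Sides of the original quadrilateral ABCD (in cm)
sides = {"AB": 2.5, "BC": 3.5, "CD": 3.5, "AD": 2.5}

# Corresponding sides of the similar quadrilateral AB'C'D'
scaled = {name: round(length * scale, 2) for name, length in sides.items()}
print(scaled)  # {'AB': 3.75, 'BC': 5.25, 'CD': 5.25, 'AD': 3.75}
```

So in the constructed figure, C'B' should measure about 5.25cm and C'D' about 5.25cm, which can be used to verify the parallels drawn in step 6.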
{"url":"http://www.cbseguess.com/ebooks/x/maths/part2/chapter11b.php","timestamp":"2014-04-20T08:42:39Z","content_type":null,"content_length":"18968","record_id":"<urn:uuid:0a7b1371-a54d-469d-8f9f-20c479bccd2b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Math- Alg 2 Number of results: 214,836 pre alg How do u find the GCF with exponets in a 7th gr. pre alg. class? Sunday, January 11, 2009 at 6:51pm by tay That is a big subject, especially for cubics and higher order polynomials. I recommend that you start by reading this: http://tutorial.math.lamar.edu/Classes/Alg/Factoring.aspx Monday, February 1, 2010 at 1:06am by drwls math - algebra Use polynomial long division, or synthetic division. See http://tutorial.math.lamar.edu/Classes/Alg/DividingPolynomials.aspx for an example. The answer is x^2 -x +4 Sunday, November 11, 2007 at 7:52pm by drwls Monday, August 1, 2011 at 10:46am by Ms. Sue Try this tutorial, be sure to look at both pages: http://www.themathpage.com/Alg/slope-of-a-line.htm Wednesday, February 3, 2010 at 8:00pm by Damon math word problem Nope. http://library.thinkquest.org/20991/alg/ratios.html Tuesday, April 17, 2012 at 8:04pm by Ms. Sue Wednesday, September 26, 2007 at 1:00am by none math(algabra readness Tuesday, August 31, 2010 at 7:42pm by Ms. Sue math ALG 2! Wednesday, September 26, 2012 at 9:50pm by Nici math ALG 2! Tuesday, March 19, 2013 at 6:20pm by Amanda math ALG 2! Tuesday, March 19, 2013 at 6:21pm by Amanda math ALG 2! Tuesday, March 19, 2013 at 6:37pm by Amanda These sites have clear explanations and examples. (Broken Link Removed) http://www.math.unc.edu/Faculty/mccombs/web/alg/classnotes/combining&simplifying/factoring/factoringgrouping.htm Wednesday, January 30, 2008 at 4:36pm by Ms. Sue how do i solve two step equations? Do you mean two equations in two unknowns? The method of substitution is one way. There is a good tutorial here: http://www.themathpage.com/alg/ Thursday, January 18, 2007 at 10:35am by megan thats what it says in my alg.1 book? Thursday, August 23, 2007 at 7:58pm by becca Simplify [(1-3y)/y]/(9/y^2)-1 Wednesday, September 26, 2007 at 12:47am by Anonymous Math- Alg 2 0000.12231 :) Friday, January 27, 2012 at 12:44am by fffsdg SO, 700+210, correct? 
Saturday, May 4, 2013 at 10:23pm by Alg This is a difference of two squares. You have a special factor theorm for these. http://themathpage.com/Alg/difference-two-squares.htm Sunday, February 21, 2010 at 11:10pm by bobpursley Math (URGENT) Monday, November 21, 2011 at 12:10pm by drwls Math- Alg 2 x log3 = log28 x = 3.0331 Friday, January 27, 2012 at 12:44am by drwls math ALG 2! 5 examples of what you find slope to be Tuesday, December 11, 2012 at 8:27pm by tyneisha math ALG 2! x10^6? if so.. 4,250,000 Tuesday, March 19, 2013 at 6:36pm by Amanda Math - Alg Oh okay. Will look at the link. Thanks. Thursday, February 6, 2014 at 7:32am by Brittany Jones math ALG 2! drag the 5 over and it would be 10\5 so A=2 Wednesday, September 26, 2012 at 8:51pm by Nici math ALG 2! 7F-9>3F-1 SOLVE Wednesday, September 26, 2012 at 9:50pm by tyneisha grade 12 math i dont understand alg 2 Tuesday, December 18, 2012 at 4:53pm by chris Math help please Find center and radius of circle (x-9)^2+(y+1)^2=49 Friday, December 17, 2010 at 2:24pm by Alg math ALG 2! SOLVE |5A|=10......WITH STEPS! PLZ Wednesday, September 26, 2012 at 8:51pm by tyneisha math ALG 2! Wednesday, September 26, 2012 at 9:10pm by tyneisha math ALG 2! EASY QUESTION 5 examples of what you find slope to be Tuesday, December 11, 2012 at 7:38pm by tyneisha Math Pre.Alg 33800(1+.035/2)^(2*5) = 40203.22 Friday, May 10, 2013 at 12:23am by Steve 15a²b - 10ab² = I am trying to learn how to do factoring in Alg II Thursday, January 29, 2009 at 3:28am by Jennifer math pre-ALG how to write 72,700 in word form Tuesday, June 8, 2010 at 12:02pm by laila math pre-ALG seventy-two thousand seven hundred Tuesday, June 8, 2010 at 12:02pm by just me math ALG 2! Write the following value in standard notation. 2 x 10-6 Tuesday, March 19, 2013 at 6:43pm by ty Math Pre.Alg What is $33,800 at 3.5% compounded semiannually for 5 years? Friday, May 10, 2013 at 12:23am by Bardroy Math Pre.Alg What is $33,800 at 3.5% compounded semiannually for 5 years? 
Friday, May 10, 2013 at 12:23am by Bardroy Alg II So in our Alg II class we're studing variables and functions, and from the situation given we are to figure out the equation. "In a lighting storm, the time interval betwee the flash ad the bang (f) is directly proportional to the distance between you and the lighting (m)." ... Thursday, September 24, 2009 at 10:35pm by Rylie Algebra 1 Answer Check Okay, assuming this is Alg I, I'm pretty sure you accidentally miscommunicated the problem wrong. The problem above is UN-FACTORABLE assuming only real numbers. However, if you insist, the answer to this problem requires some Alg II knowledge. Q: 9x^2 - x +2 A: (1/36)(-18ix... Monday, March 10, 2014 at 9:49pm by herp_derp 11. What are the horizontal and vertical asymptotes for the rational function y = 4x /(x-1)?: x = 0, y = 0 x = 1, y = 4 x = 4, y = 1 x = 1, y = 0 Wednesday, September 26, 2007 at 1:07am by Anonymous Use the product rule to find the derivative of (2x^2 + 3)(3x + 5). Tuesday, April 26, 2011 at 2:52am by Ethan Math- Alg 2 B(T)=4*e^(0.8)(7) 7 for T B(T)=62.3151 (plugged the equation into a graphing calculator) Thursday, January 26, 2012 at 10:52pm by Anonymous math ALG 2! Write the following value in standard notation. 4.25 x 106 Tuesday, March 19, 2013 at 6:36pm by ty math ALG 2! Write the following value in standard notation. 3.89 x 104 Tuesday, March 19, 2013 at 6:37pm by ty math ALG 2! Write the following value in standard notation. 3.14 x 10-5 Tuesday, March 19, 2013 at 6:41pm by ty Math - Alg Oh I think I got it now. Never mind. Thanks much for the help. Saturday, February 8, 2014 at 1:06am by Brittany Jones Intermediate Algebra Since this is not my area of expertise, I searched Google under the key words "rational expressions" to get this possible source: http://tutorial.math.lamar.edu/Classes/Alg/RationalExpressions.aspx In the future, you can find the information you desire more quickly, if you use... 
Tuesday, May 4, 2010 at 1:30pm by PsyDAG Math Course I was thinking about math courses for next year, and I do not know what I'm going to take. What is the differences between Alg 3 and Advanced math? And also, and does Trig involve? And I'm too scared of calculus to take it. Friday, February 27, 2009 at 11:10pm by Chopsticks math ALG 2! Write the following value in scientific notation. 7,400,000 7.4x10^3? help? Tuesday, March 19, 2013 at 6:20pm by ty math ALG 2! Write the following value in scientific notation. 0.00003165 i dont understand? Tuesday, March 19, 2013 at 6:30pm by ty i am having trouble on solving fractional equations such as: 2/(x+3) + 3/(x+4) = 7/(x^2+7x+12) factor the right side denominator, then look for what factors make up a common denominator. For each fraction, multiply the numerator and denominator with the ...
Wednesday, March 9, 2011 at 11:19pm by Casey Multiply in out using the distributive rule: http://www.themathpage.com/alg/distributive-rule.htm Sunday, June 5, 2011 at 5:27pm by drwls math (advanced alg and trig) Solve to the nearest minute: x is greater than or equal to 0 and less than 360. 3sec^2 x-8tan-6=cotx Sunday, March 28, 2010 at 11:04am by Anne Math - Alg Hi Reiny, I already saw the link. But how did you get the 10x on the second part? Did you multiply everything by 10? Why so? Why not 100? Thursday, February 6, 2014 at 7:32am by Brittany Jones college math intermediate alg 4y^5*81q^7/(9q^2*16y)= do the coefficents first, then the powers 4*81/16*9= 9/4 y^(5-1)q^(7-2) 9y^4 q^5 /4 check that. Monday, December 13, 2010 at 9:10am by bobpursley Math- Alg 2 That is completely wrong. I'm sure you didn't type the equation in right or something. The correct answer is 1081-1082 bacteria. Thursday, January 26, 2012 at 10:52pm by Ryan Douglas math ALG 2! 4x = x + 78 3x = 78 x = 26 Tuesday, March 19, 2013 at 5:39pm by Ms. Sue Math- Alg 2 Which logarithmic equation is equivalent to the exponential equation below? 3^x=28 Friday, January 27, 2012 at 12:44am by CAMILA Math (alg 2) The ratio of two numbers is 3 to 2 and the difference of their squares is 20. Find the numbers. Sunday, September 7, 2008 at 5:31pm by Liz Math (alg 2) The ratio of two numbers is 3 to 2 and the difference of their squares is 20. Find the numbers. Sunday, September 7, 2008 at 5:31pm by Liz what fraction would you use to find 33 1/3% of 42 Monday, December 13, 2010 at 8:36pm by die pre-alg die!! 5. Solve: log 5 (8r-7) = log 5 (r^2 + 5): A. r = 2 or r = 6 B. r = -2 or r = 6 C. r = -2 or r = -6 D. r = 3 or r = 4 Wednesday, September 26, 2007 at 12:55am by Anonymous math ALG 2! i've answered like 10 of these for you... 
you should be able to figure out how this works by the past problems if you really dont understand the process Tuesday, March 19, 2013 at 6:41pm by Amanda digital electronics Tuesday, August 12, 2008 at 11:14pm by bobpursley math ALG 2! Write the following value in scientific notation. 23,000,000 i dont understand Tuesday, March 19, 2013 at 6:18pm by ty 2. Find the equation of the line which perpendicular to the line -2x + 3y = 32 and passes through the point (-4,-8). Wednesday, September 26, 2007 at 12:38am by Anonymous math ALG 2! Write the following value in scientific notation. 0.00000021 i think its 2.1x10 but i really think its wrong Tuesday, March 19, 2013 at 6:21pm by ty about nine weeks is spent on this in Alg I. methods: factoring, quadratic equation, graphing. Google quadratic equations Saturday, October 6, 2012 at 3:06am by bobpursley alg 2 y = 4^(x+1) - 1 ?? Tuesday, February 12, 2008 at 3:30pm by Reiny alg 2 Wednesday, March 12, 2008 at 10:30am by Abby alg 2. okay thanks Thursday, May 8, 2008 at 8:08pm by Miley alg 2. Monday, May 19, 2008 at 7:33pm by Miley alg 2. Tuesday, June 3, 2008 at 9:38pm by bobpursley = 1 + 1 Sunday, July 5, 2009 at 7:43pm by drwls Friday, June 25, 2010 at 7:02pm by anthony Monday, June 28, 2010 at 3:40pm by sean You're welcome Monday, June 28, 2010 at 3:44pm by Jen Tuesday, June 29, 2010 at 3:23pm by sean Monday, July 12, 2010 at 11:47am by kid Monday, July 12, 2010 at 11:48am by kid Friday, September 9, 2011 at 6:49pm by David ALG help x^2 - 5^2 = (x-5).(x+5) Friday, March 2, 2012 at 3:33am by Libuse Alg 2 so, (C) Monday, July 30, 2012 at 9:10pm by Steve Alg 1 Saturday, April 13, 2013 at 7:55pm by Anonymous Alg 1 Sunday, April 14, 2013 at 5:02pm by Allison 8. In how many ways can 12 books be displayed on a shelf if 15 books are available?: * A. 455 B. 479001600 C. 2.17945728 x 10^11 D. 2730 Wednesday, September 26, 2007 at 1:00am by Anonymous 8.
In how many ways can 12 books be displayed on a shelf if 15 books are available?: * A. 455 B. 479001600 C. 2.17945728 x 10^11 D. 2730 Wednesday, September 26, 2007 at 1:00am by Anonymous Math (Alg 2) You replace the third column in the NUMERATOR determinant with the right hand side, NOT THE DENOMINATOR to get the third variable, Z. Therefore you got 1/Z Sunday, July 22, 2012 at 4:54pm by Damon Math - Alg How much water must be added to an 80% acid solution to make a mixture of 50mL which is 70% acid? Wednesday, February 5, 2014 at 8:36pm by McKenna Louise alg 2. my teacher got log base 4 (1/125) and it's a quiz review sheet, and all the math teachers look over it... sooo... I get how to simplify the 1/2 part but how do i condense the parenthesies Thursday, May 8, 2008 at 8:32pm by Miley math ALG 2! Four times a certain number is the same as the number increased by 78. Find the number. Thursday, December 13, 2012 at 8:05pm by tyneisha Algebra word problems The area of a rectangle is found by multiplying the length by the width: A=lw. A certain rectangle has an area os x^2+7x +12. Factor the trinomial to find the length ad width of the rectagle. Please read at this site: http://www.themathpage.com/alg/factoring-trinomials.htm ... Tuesday, January 23, 2007 at 3:25pm by deborah alg 2. Why is ln(e^-2) = -2. Why doesn't it = e^-2?? Thursday, May 8, 2008 at 8:08pm by Miley alg 2. thanks damon!!!!!! Thursday, May 8, 2008 at 8:08pm by Miley
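Several of the threads above reduce to the same log manipulation, e.g. solving 3^x = 28 by taking logs of both sides. A short stdlib-Python check of the posted answer x = 3.0331 (added here for illustration; the forum page itself contains no code):

```python
import math

# 3^x = 28  =>  x * log(3) = log(28)  =>  x = log(28) / log(3)
x = math.log(28) / math.log(3)
print(round(x, 4))  # 3.0331
```

Raising 3 back to this power recovers 28, confirming the answer given in the thread.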
{"url":"http://www.jiskha.com/search/index.cgi?query=Math-+Alg+2","timestamp":"2014-04-18T08:45:58Z","content_type":null,"content_length":"27282","record_id":"<urn:uuid:2784e02d-44ce-41e2-8196-e3f5d1ee9023>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding Critical points of function
10 sin(5x) cos(5x) − 5 sin(5x) is the derivative of the function sin^2(5x) + cos(5x), 0 < x < 2π/5
That is correct. Are you asking how to set the derivative equal to zero? If this is your question, then you can factor out 5 sin(5x), and set each factor to zero. If you want details let me know.
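A numeric check of the factoring suggestion (added for illustration; the thread itself contains no code). Setting 5 sin(5x)(2 cos(5x) − 1) = 0 on 0 < x < 2π/5 gives sin(5x) = 0 or cos(5x) = 1/2, which yields x = π/15, π/5, π/3:

```python
import math

def fprime(x):
    # Derivative from the thread: d/dx [sin^2(5x) + cos(5x)]
    return 10 * math.sin(5 * x) * math.cos(5 * x) - 5 * math.sin(5 * x)

# Roots of 5*sin(5x)*(2*cos(5x) - 1) = 0 on 0 < x < 2*pi/5:
critical_points = [math.pi / 15, math.pi / 5, math.pi / 3]

for c in critical_points:
    assert abs(fprime(c)) < 1e-12
print("all three roots check out")
```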
{"url":"http://mathhelpforum.com/calculus/164063-finding-critical-points-function.html","timestamp":"2014-04-20T06:02:30Z","content_type":null,"content_length":"31430","record_id":"<urn:uuid:5b3a10fc-a922-4c8b-9875-b633ef642844>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
Use the Estimate Geometric Transformation block to find the transformation matrix which maps the greatest number of point pairs between two images. A point pair refers to a point in the input image and its related point on the image created using the transformation matrix. You can select to use the RANdom SAmple Consensus (RANSAC) or the Least Median Squares algorithm to exclude outliers and to calculate the transformation matrix. You can also use all input points to calculate the transformation matrix.

Port | Input/Output | Supported Data Types | Complex Values Supported
Pts1/Pts2 | M-by-2 matrix of one-based [x y] point coordinates, where M represents the number of points. | Double; Single; 8-, 16-, 32-bit signed integer; 8-, 16-, 32-bit unsigned integer | No
Num | Scalar value that represents the number of valid points in Pts1 and Pts2. | 8-, 16-, 32-bit signed integer; 8-, 16-, 32-bit unsigned integer | No
TForm | 3-by-2 or 3-by-3 transformation matrix. | Double; Single | No
Inlier | M-by-1 vector indicating which points have been used to calculate TForm. | Boolean | No

Ports Pts1 and Pts2 are the points on two images that have the same data type. The block outputs the same data type for the transformation matrix. When Pts1 and Pts2 are single or double, the output transformation matrix will also have single or double data type. When Pts1 and Pts2 images are built-in integers, the option is available to set the transformation matrix data type to either Single or Double. The TForm output provides the transformation matrix. The Inlier output port provides the Inlier points on which the transformation matrix is based. This output appears when you select the Output Boolean signal indicating which point pairs are inliers checkbox.

RANSAC and Least Median Squares Algorithms

The RANSAC algorithm relies on a distance threshold.
A pair of points, p_a (image a, Pts1) and p_b (image b, Pts2), is an inlier only when the distance between p_b and the projection of p_a based on the transformation matrix falls within the specified threshold. The distance metric used in the RANSAC algorithm is as follows:

    d(H) = sum for i = 1..N of min( D(p_b_i, ψ(p_a_i)), t )

The Least Median Squares algorithm assumes at least 50% of the point pairs can be mapped by a transformation matrix. The algorithm does not need to explicitly specify the distance threshold. Instead, it uses the median distance between all input point pairs. The distance metric used in the Least Median of Squares algorithm is as follows:

    d(H) = median( D(p_b_1, ψ(p_a_1)), ..., D(p_b_N, ψ(p_a_N)) )

For both equations:
p_a is a point in image a (Pts1)
p_b is a point in image b (Pts2)
ψ(p_a) is the projection of a point on image a based on transformation matrix H
D is the distance between two point pairs on image b
t is the threshold
N is the number of points

The smaller the distance metric, the better the transformation matrix and therefore the more accurate the projection image.

Transformations

The Estimate Geometric Transformation block supports Nonreflective similarity, affine, and projective transformation types, which are described in this section.

Nonreflective similarity transformation supports translation, rotation, and isotropic scaling. It has four degrees of freedom and requires two pairs of points. The transformation matrix is:

    H = [ h1  -h2
          h2   h1
          h3   h4 ]

The projection of a point [u v] by H is:

    [x y] = [u v 1] H

Affine transformation supports nonisotropic scaling in addition to all transformations that the nonreflective similarity transformation supports. It has six degrees of freedom that can be determined from three pairs of noncollinear points. The transformation matrix is:

    H = [ h1  h4
          h2  h5
          h3  h6 ]

The projection of a point [u v] by H is:

    [x y] = [u v 1] H

Projective transformation supports tilting in addition to all transformations that the affine transformation supports. The transformation matrix is:

    H = [ h1  h4  h7
          h2  h5  h8
          h3  h6  h9 ]

The projection of a point [u v] by H is represented by homogeneous coordinates as:

    [x' y' w] = [u v 1] H,  with  [x y] = [x'/w  y'/w]

Distance Measurement

For computational simplicity and efficiency, this block uses algebraic distance.
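The row-vector projection convention used by these transformations can be sketched in code. This is an illustration, not MathWorks code, and the helper name `project` is my own: a point [u v] is extended to [u v 1], multiplied by a 3-by-3 matrix H, and divided by the homogeneous coordinate:

```python
def project(point, H):
    """Map a point through a 3x3 projective matrix using the
    row-vector convention: [x y w] = [u v 1] . H, then divide by w."""
    u, v = point
    row = (u, v, 1.0)
    x, y, w = (sum(row[k] * H[k][j] for k in range(3)) for j in range(3))
    return (x / w, y / w)

# The identity matrix leaves points fixed; for a pure translation the
# offsets (tx, ty) sit in the last row under this convention.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
translate = [[1, 0, 0], [0, 1, 0], [3, 4, 1]]  # shift by (3, 4)
print(project((2, 5), identity))   # (2.0, 5.0)
print(project((2, 5), translate))  # (5.0, 9.0)
```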
The algebraic distance for a pair of points, p_a on image a and p_b = [x_b y_b] on image b, according to transformation H, is defined as follows.

For projective transformation:

    AD(p_a, p_b) = ( (x_b w - x')^2 + (y_b w - y')^2 )^(1/2),  where  [x' y' w] = [u_a v_a 1] H

For Nonreflective similarity or affine transformation:

    AD(p_a, p_b) = ( (x_b - x')^2 + (y_b - y')^2 )^(1/2),  where  [x' y'] = [u_a v_a 1] H

The block performs a comparison and repeats it K number of times between successive transformation matrices. If you select the Find and exclude outliers option, the RANSAC and Least Median Squares (LMS) algorithms become available. These algorithms calculate and compare a distance metric. The transformation matrix that produces the smaller distance metric becomes the new transformation matrix that the next comparison uses. A final transformation matrix is resolved when either:
● K number of random samplings is performed, or
● for the RANSAC algorithm, enough inlier point pairs can be mapped (K is updated dynamically).

The Estimate Geometric Transformation algorithm follows these steps:
1. A transformation matrix H is initialized to zeros.
2. Set count = 0 (random sampling).
3. While count < K, where K is the total number of random samplings to perform, perform the following:
 a. Increment the count: count = count + 1.
 b. Randomly select pairs of points from images a and b (2 pairs for Nonreflective similarity, 3 pairs for affine, or 4 pairs for projective).
 c. Calculate a transformation matrix, Ĥ, from the selected points.
 d. If Ĥ has a distance metric less than that of H, then replace H with Ĥ. (Optional, for the RANSAC algorithm only:)
  i. Update K dynamically.
  ii. Exit the sampling loop if enough point pairs can be mapped by H.
4. Use all point pairs in images a and b that can be mapped by H to calculate a refined transformation matrix H.
5. Iterative refinement (optional, for the RANSAC and LMS algorithms):
 a. Denote all point pairs that can be mapped by H as inliers.
 b. Use the inlier point pairs to calculate a transformation matrix Ĥ.
 c. If Ĥ has a distance metric less than that of H, then replace H with Ĥ; otherwise exit the loop.
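The sampling loop described above can be sketched as follows. This is an illustration only, not MathWorks code: it fits a pure translation (so one point pair per sample, instead of the block's 2 to 4), scores candidates by inlier count rather than the summed distance metric, and the function name `ransac_translation` is my own:

```python
import random

def ransac_translation(pts_a, pts_b, k=200, thresh=1.5, seed=0):
    """Toy RANSAC: repeatedly sample one pair, hypothesize a translation,
    and keep the candidate that maps the most pairs within `thresh`."""
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(k):                      # k random samplings
        i = rng.randrange(len(pts_a))       # sample one pair
        tx = pts_b[i][0] - pts_a[i][0]      # candidate transform
        ty = pts_b[i][1] - pts_a[i][1]
        inliers = [j for j, (a, b) in enumerate(zip(pts_a, pts_b))
                   if (b[0] - a[0] - tx) ** 2 + (b[1] - a[1] - ty) ** 2
                   <= thresh ** 2]
        if len(inliers) > len(best_inliers):  # keep the better candidate
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers

# Three pairs related by a (3, 4) shift plus one gross outlier:
a = [(0, 0), (1, 0), (2, 1), (5, 5)]
b = [(3, 4), (4, 4), (5, 5), (40, -7)]
t, inl = ransac_translation(a, b)
print(t, inl)  # (3, 4) [0, 1, 2]
```

The outlier pair never gathers more than one inlier, so any sample drawn from the consistent pairs wins.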
Number of Random Samplings

The number of random samplings can be specified by the user for the RANSAC and Least Median Squares algorithms. You can use an additional option with the RANSAC algorithm, which calculates this number based on an accuracy requirement. The Desired Confidence level drives the accuracy. The calculated number of random samplings, K, used with the RANSAC algorithm, is as follows:

    K = log(1 - q) / log(1 - p^s)

where:
● p is the probability of independent point pairs belonging to the largest group that can be mapped by the same transformation. The probability is dynamically calculated based on the number of inliers found versus the total number of points. As the probability increases, the number of samplings, K, decreases.
● q is the probability of finding the largest group that can be mapped by the same transformation.
● s is equal to the value 2, 3, or 4 for Nonreflective similarity, affine, and projective transformation, respectively.

Iterative Refinement of Transformation Matrix

The transformation matrix calculated from all inliers can be used to calculate a refined transformation matrix. The refined transformation matrix is then used to find a new set of inliers. This procedure can be repeated until the transformation matrix cannot be further improved. This iterative refinement is optional.

Dialog Box

Specify transformation type, either Nonreflective similarity, affine, or projective transformation. If you select projective transformation, you can also specify a scalar algebraic distance threshold for determining inliers. If you select either affine or projective transformation, you can specify the distance threshold for determining inliers in pixels. See Transformations for a more detailed discussion. The default value is projective.

When selected, the block finds and excludes outliers from the input points and uses only the inlier points to calculate the transformation matrix. When this option is not selected, all input points are used to calculate the transformation matrix.
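The sampling-count formula K = log(1 - q) / log(1 - p^s) can be evaluated directly. A small stdlib-Python check (illustrative; not MathWorks code, and the function name `num_samplings` is my own):

```python
import math

def num_samplings(q, p, s):
    """K = log(1 - q) / log(1 - p**s), rounded up to a whole number of
    samplings. q: desired confidence, p: inlier probability,
    s: pairs per sample (2 similarity, 3 affine, 4 projective)."""
    return math.ceil(math.log(1 - q) / math.log(1 - p ** s))

# 99% confidence, half the point pairs usable, projective transform:
print(num_samplings(0.99, 0.5, 4))  # 72
```

As the text notes, a larger inlier probability p drives K down: raising p from 0.5 to 0.8 in this example cuts the sampling count to single digits.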
Select either the RANdom SAmple Consensus (RANSAC) or the Least Median of Squares algorithm to find outliers. See RANSAC and Least Median Squares Algorithms for a more detailed discussion. This parameter appears when you select the Find and exclude outliers check box. Specify a scalar threshold value for determining inliers. The threshold controls the upper limit used to find the algebraic distance in the RANSAC algorithm. This parameter appears when you set the Method parameter to Random Sample Consensus (RANSAC) and the Transformation type parameter to projective. The default value is 1.5. Specify the upper limit distance a point can differ from the projection location of its associating point. This parameter appears when you set the Method parameter to Random Sample Consensus (RANSAC) and you set the value of the Transformation type parameter to Nonreflective similarity or affine. The default value is 1.5. Select Specified value to enter a positive integer value for number of random samplings, or select Desired confidence to set the number of random samplings as a percentage and a maximum number. This parameter appears when you select Find and exclude outliers parameter, and you set the value of the Method parameter to Random Sample Consensus (RANSAC). Specify the number of random samplings for the algorithm to perform. This parameter appears when you set the value of the Determine number of random samplings using parameter to Specified value. Specify a percent by entering a number between 0 and 100. The Desired confidence value represents the probability of the algorithm to find the largest group of points that can be mapped by a transformation matix. This parameter appears when you set the Determine number of random samplings using parameter to Desired confidence. Specify an integer number for the maximum number of random samplings. 
This parameter appears when you set the Method parameter to Random Sample Consensus (RANSAC) and you set the value of the Determine number of random samplings using parameter to Desired confidence. Specify to stop random sampling when a percentage of input points have been found as inliers. This parameter appears when you set the Method parameter to Random Sample Consensus (RANSAC). Specify whether to perform refinement on the transformation matrix. This parameter appears when you select the Find and exclude outliers check box. Select this option to output the inlier point pairs that were used to calculate the transformation matrix. This parameter appears when you select the Find and exclude outliers check box. The block will not use this parameter with signed or double data type points. Specify the transformation matrix data type as Single or Double when the input points are built-in integers. The block will not use this parameter with signed or double data type points.
{"url":"http://www.mathworks.in/help/vision/ref/estimategeometrictransformation.html?s_tid=gn_loc_drop&nocookie=true","timestamp":"2014-04-23T09:25:38Z","content_type":null,"content_length":"65703","record_id":"<urn:uuid:3fb7eb2b-6ba7-4d62-90bd-61f3e35e9cc6>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
Decision procedures are automated theorem proving algorithms which automatically recognize the theorems of some decidable theory. The correctness of these algorithms is important, since a design error could lead to the misidentification of a false statement as a theorem. In the past, decision procedures have been shown to be correct by mechanically verifying that they are sound, i.e. they only identify valid statements. Soundness does not entail correctness, however, as a decision procedure could still fail to recognize a true formula from the theory it decides. To rigorously verify that a decision procedure for a theory T is correct, it must also be shown to be complete, in that it recognizes all true propositions from T. We have developed a decision procedure called bagahk for the validity of formulas modulo the theory of ground equations T=, which we have proven sound and complete in the proof assistant Coq. In this thesis, we highlight the important lemmas and theorems of these proofs. As part of the soundness proof, we embed Coq-level proof terms into the meta-language of our solver using reflection. As a result of this, bagahk can also be used to assist users in the construction of other proofs. In addition, we develop a proof system for T= and show that our decision procedure recognizes all T=-provable propositions, showing that bagahk is complete.
{"url":"http://www.cs.utexas.edu/~bendy/Bagahk/","timestamp":"2014-04-19T05:11:58Z","content_type":null,"content_length":"3670","record_id":"<urn:uuid:a2383174-4144-4282-ad9a-be0583fbbf88>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
Faculty Publications Book chapters or sections • J. R. Jiang and R. K. Brayton, "Functional dependency for verification reduction," in Computer Aided Verification: Proc. 16th Intl. Conf. (CAV 2004), R. Alur and D. A. Peled, Eds., Lecture Notes in Computer Science, Vol. 3114, Berlin, Germany: Springer-Verlag, 2004, pp. 268-280. Articles in journals or magazines Articles in conference proceedings • M. Case, A. Mishchenko, and R. K. Brayton, "Cut-based inductive invariant computation," in Proc. 17th Intl. Workshop on Logic and Synthesis (IWLS 2008), New York, NY: The Association for Computing Machinery, Inc., 2008. • A. Mishchenko and R. K. Brayton, "Recording synthesis history for sequential verification," in Proc. 17th Intl. Workshop on Logic and Synthesis (IWLS 2008), New York, NY: The Association for Computing Machinery, Inc., 2008. • A. Mishchenko, R. K. Brayton, and S. Chatterjee, "Boolean factoring and decomposition of logic networks," in Proc. 17th Intl. Workshop on Logic and Synthesis (IWLS 2008), New York, NY: The Association for Computing Machinery, Inc., 2008. • A. Mishchenko, M. Case, R. K. Brayton, and S. Jang, "Scalable and scalably-verifiable sequential synthesis," in Proc. 17th Intl. Workshop on Logic and Synthesis (IWLS 2008), New York, NY: The Association for Computing Machinery, Inc., 2008. • A. Mishchenko, R. K. Brayton, and S. Jang, "Global delay optimization using structural choices," in Proc. 17th Intl. Workshop on Logic and Synthesis (IWLS 2008), New York, NY: The Association for Computing Machinery, Inc., 2008. • A. P. Hurst, A. Mishchenko, and R. K. Brayton, "Scalable min-register retiming under timing and initializability constraints," in Proc. 45th ACM/IEEE Annual Design Automation Conf. (DAC 2008), New York, NY: The Association for Computing Machinery, Inc., 2008, pp. 534-539. • M. L. Case, V. N. Kravets, A. Mishchenko, and R. K. Brayton, "Merging nodes under sequential observability," in Proc. 45th ACM/IEEE Annual Design Automation Conf. 
(DAC 2008), New York, NY: The Association for Computing Machinery, Inc., 2008, pp. 540-545. • A. Mishchenko, S. Cho, S. Chatterjee, and R. K. Brayton, "Combinational and sequential mapping with priority cuts," in 2007 IEEE/ACM Intl. Conf. on Computer-Aided Design (ICCAD '07) Digest of Technical Papers, Piscataway, NJ: IEEE Press, 2007, pp. 354-361. • F. Mo and R. K. Brayton, "A simultaneous bus orientation and bused pin flipping algorithm," in 2007 IEEE/ACM Intl. Conf. on Computer-Aided Design (ICCAD '07) Digest of Technical Papers, Piscataway, NJ: IEEE Press, 2007, pp. 386-389. • M. L. Case, A. Mischenko, and R. K. Brayton, "Automated extraction of inductive invariants to aid model checking," in Proc. 2007 Formal Methods in Computer Aided Design, Los Alamitos, CA: IEEE Computer Society, 2007, pp. 165-172. • A. P. Hurst, A. Mishchenko, and R. K. Brayton, "Fast minimum-register retiming via binary maximum-flow," in Proc. 2007 Formal Methods in Computer-Aided Design (FMCAD '07), Los Alamitos, CA: IEEE Computer Society, 2007, pp. 181-187. • A. Mishchenko, R. K. Brayton, J. H. Jiang, and S. Jan, "SAT-based logic optimization and resynthesis," in Proc. 16th Intl. Workshop on Logic and Synthesis, New York, NY: The Association for Computing Machinery, Inc., 2007, pp. 358-364. • S. Chatterjee, Z. Wei, A. Mishchenko, and R. K. Brayton, "A linear time algorithm for optimum tree placement," in Proc. 16th Intl. Workshop on Logic and Synthesis (IWLS 2007), New York, NY: The Association for Computing Machinery, Inc., 2007, pp. 336-342. • A. Hurst, A. Mishchenko, and R. K. Brayton, "Fast minimum-register retiming via binary maximum-flow," in Proc. 16th Intl. Workshop on Logic and Synthesis (IWLS 2007), New York, NY: The Association for Computing Machinery, Inc., 2007, pp. 328-335. • M. L. Case, A. Mishchenko, and R. K. Brayton, "Automated extraction of inductive invariants to aid model checking," in Proc. 16th Intl. 
Workshop on Logic and Synthesis (IWLS 2007), New York, NY: The Association for Computing Machinery, Inc., 2007, pp. 282-289. • J. Pistorius, M. Hutton, A. Mishchenko, and R. K. Brayton, "Benchmarking method and designs targeting logic synthesis for FPGAs," in Proc. 16th Intl. Workshop on Logic and Synthesis (IWLS 2007), New York, NY: The Association for Computing Machinery, Inc., 2007, pp. 230-237. • A. Mishchenko, S. Cho, S. Chatterjee, and R. K. Brayton, "Combinational and sequential mapping with priority cuts," in Proc. 16th Intl. Workshop on Logic and Synthesis (IWLS 2007), New York, NY: The Association for Computing Machinery, Inc., 2007, pp. 91-98. • A. Hurst, A. Mishchenko, and R. K. Brayton, "Minimizing implementation costs with end-to-end retiming," in Proc. 16th Intl. Workshop on Logic and Synthesis (IWLS 2007), New York, NY: The Association for Computing Machinery, Inc., 2007, pp. 9-16. • R. K. Brayton and A. Mishchenko, "Sequential rewriting and synthesis," in Proc. 16th Intl. Workshop on Logic and Synthesis (IWLS 2007), New York, NY: The Association for Computing Machinery, Inc., 2007, pp. 1-8. • S. Chatterjee, A. Mishchenko, R. K. Brayton, and A. Kuehlmann, "On resolution proofs for combinational equivalence," in Proc. 44th Annual ACM/IEEE Design Automation Conf. (DAC 2007), New York, NY: The Association for Computing Machinery, Inc., 2007, pp. 600-605. • F. Mo and R. K. Brayton, "Semi-detailed bus routing with variation reduction," in Proc. 2007 Intl. Symp. on Physical Design (ISPD '07), New York, NY: The Association for Computing Machinery, Inc., 2007, pp. 143-150. • T. Villa, S. Zharikova, N. Yevtushenko, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "A new algorithm for the largest compositionally progressive solution of synchronous language equations," in Proc. 17th ACM Great Lakes Symp. on VLSI (GLSVLSI 2007), New York, NY: The Association for Computing Machinery, Inc., 2007, pp. 441-444. • Y. S. Yang, S. Sinha, A. Veneris, and R. K. 
Brayton, "Automating logic rectification by approximate SPFDs," in Proc. 2007 Asia and South Pacific Design Automation Conf. (ASP-DAC '07), Piscataway, NJ: IEEE Press, 2007, pp. 402-407. • A. Mishchenko, S. Chatterjee, and R. K. Brayton, "DAG-aware AIG rewriting: A fresh look at combinational logic synthesis," in Proc. IEEE/ACM 43rd Annual Conf. on Design Automation, New York, NY: ACM Press, 2006, pp. 532-535. • Y. Li, A. Kondratyev, and R. K. Brayton, "Gaining predictability and noise immunity in global interconnects," in Proc. 5th Intl. Conf. on Application of Concurrency to System Design, Los Alamitos, CA: IEEE Computer Society, 2005, pp. 176-185. • A. Mishchenko and R. K. Brayton, "SAT-based complete don't-care computation for network optimization," in Proc. Design, Automation and Test in Europe, Vol. 1, Los Alamitos, CA: IEEE Computer Society, 2005, pp. 412-417. • F. Mo and R. K. Brayton, "A timing-driven module-based chip design flow," in Proc. 2004 41st Design Automation Conf., New York, NY: ACM Press, 2004, pp. 67-70. • Y. Jiang, S. Matic, and R. K. Brayton, "Generalized cofactoring for logic function evaluation," in Proc. 2003 40th Design Automation Conf., Piscataway, NJ: IEEE Press, 2003, pp. 155-158. • N. Yevtushenko, T. Villa, R. K. Brayton, A. Petrenko, and A. L. Sangiovanni-Vincentelli, "Equisolvability of series vs. controller's topology in synchronous language equations," in Proc. 6th Design, Automation and Test in Europe Conf. and Exhibition (DATE 2003), N. Wehn and D. Verkest, Eds., Los Alamitos, CA: IEEE Computer Society, 2003, pp. 1154-1155. • M. Baleani, F. Gennari, Y. Jiang, Y. Patel, R. K. Brayton, and A. L.
Sangiovanni-Vincentelli, "HW/SW partitioning and code generation of embedded control applications on a reconfigurable architecture platform," in Proc. 10th Intl. Symp. on Hardware/Software Codesign (CODES 2002), New York, NY: ACM Press, 2002, pp. 151-156. • R. K. Brayton, "Compatible observability don't cares revisited," in IEEE/ACM Intl. Conf. on Computer Aided Design (ICCAD 2001). Digest of Technical Papers, Piscataway, NJ: IEEE Press, 2001, pp. • A. Tabbara, R. K. Brayton, and A. R. Newton, "Retiming for DSM with area-delay trade-offs and delay constraints," in Proc. 36th Design Automation Conf. (DAC 1999), New York, NY: ACM, Inc., 1999, pp. 725-730. • R. K. Brayton, G. D. Hachtel, A. L. Sangiovanni-Vincentelli, F. Somenzi, A. Aziz, S. Cheng, S. Edwards, S. Khatri, Y. Kukimoto, A. Pardo, S. Qadeer, R. K. Ranjan, S. Sarwary, T. R. Shiple, G. Swamy, and T. Villa, "VIS: A system for verification and synthesis," in Lecture Notes in Computer Science: Computer Aided Verification, R. Alur and T. A. Henzinger, Eds., Vol. 1102, London, UK: Springer-Verlag, 1996, pp. 428-432. • E. M. Sentovich, K. J. Singh, C. Moon, H. Savoj, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Sequential circuit design using synthesis and optimization," in Proc. IEEE 1992 Intl. Conf. on Computer Design: VLSI in Computers and Processors, Los Alamitos, CA: IEEE Computer Society Press, 1992, pp. 328-333. • A. A. Malik, R. K. Brayton, A. R. Newton, and A. L. Sangiovanni-Vincentelli, "Reduced offsets for two-level multi-valued logic minimization," in Proc. 27th ACM/IEEE Conf. on Design Automation (DAC '90), New York, NY: ACM, Inc., 1990, pp. 290-296. • M. Beardslee, C. Kring, R. Murgai, H. Savoj, R. K. Brayton, and A. R. Newton, "SLIP: A software environment for System Level Interactive Partitioning," in 1989 IEEE Intl. Conf. on Computer-Aided Design (ICCAD-89). Digest of Technical Papers, Los Alamitos, CA: IEEE Computer Society Press, 1989, pp. 280-283. • A. A. Malik, R. K. Brayton, A. R.
Newton, and A. L. Sangiovanni-Vincentelli, "A modified approach to two-level logic minimization," in 1988 IEEE Intl. Conf. on Computer-Aided Design (ICCAD-88). Digest of Technical Papers, Los Alamitos, CA: IEEE Computer Society Press, 1988, pp. 106-109. • R. K. Brayton, G. D. Hachtel, L. A. Hemachandra, A. R. Newton, and A. L. Sangiovanni-Vincentelli, "A comparison of logic minimization strategies using ESPRESSO: An APL program package for partitioned logic minimization," in Proc. 1982 IEEE Intl. Symp. on Circuits and Systems (ISCAS-82), New York, NY: IEEE, 1982, pp. 42-48. [abstract] • R. K. Brayton and C. McMullen, "The decomposition and factorization of Boolean expressions," in Proc. 1982 IEEE Intl. Symp. on Circuits and Systems, Vol. 1, New York, NY: IEEE Press, 1982, pp. Technical Reports • G. Castagnetti, M. Piccolo, T. Villa, N. Yevtushenko, A. Mishchenko, and R. K. Brayton, "Solving Parallel Equations with BALM-II," EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2012-181, July 2012. [abstract] • G. Wang, A. Mishchenko, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Synthesizing FSMs According to co-Büchi Properties," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M05/13, April 2005. • N. Yevtushenko, T. Villa, R. K. Brayton, A. Petrenko, and A. L. Sangiovanni-Vincentelli, "Sequential Synthesis by Language Equation Solving," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M03/9, 2003. • T. R. Shiple, R. K. Brayton, G. Berry, and A. L. Sangiovanni-Vincentelli, "Logical Analysis of Combinational Cycles," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M02/21, 2002. • Y. Jiang and R. K. Brayton, "Don't care computation in minimizing extended finite state machines with Presburger arithmetic," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M01/35, 2001. • R. K.
Brayton, "Algebraic Methods for Multi-Valued Logic," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M99/62, 1999. • P. Chong, Y. Jiang, S. Khatri, S. Sinha, and R. K. Brayton, "Don't Care Wires in Logical/Physical Design," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M99/52, 1999. • S. Khatri, S. Sinha, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Binary and Multi-Valued SPFD-Based Wire Removal in PLA Networks," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M99/51, 1999. • S. Khatri, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "A VLSI Design Methodology Using a Network of PLAs Embedded in a Regular Layout Fabric," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M99/50, 1999. • S. Khatri, S. Sinha, A. Kuehlmann, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "SPFD-Based Wire Removal in a Network of PLAs," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M99/17, 1999. • Y. Jiang, S. Khatri, A. L. Sangiovanni-Vincentelli, and R. K. Brayton, "A Multi-Layer Area Routing Methodology Using a Boolean Satisfiability Based Router," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M99/16, 1999. • S. Khatri, R. K. Brayton, A. Mehrotra, A. L. Sangiovanni-Vincentelli, and M. Prasad, "Routing Techniques for Deep Sub-Micron Technologies," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M99/15, 1999. • S. Khatri, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "A Layout and Design Methodology for Deep Sub-micron Applications Using Networks of PLAs," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M98/68, 1998. • R. K. Brayton and S. Khatri, "A Survey of Multi-valued Synthesis Techniques," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M98/61, 1998. • S. Khatri, S. Krishnan, A. L. Sangiovanni-Vincentelli, and R. K. 
Brayton, "Combinational Verification Revisited," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M98/60, 1998. • S. Khatri, A. L. Sangiovanni-Vincentelli, and R. K. Brayton, "Accurate Automatic Timing Characterization of Static CMOS Libraries," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M98/58, 1998. • S. Khatri, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Multi-Valued Network Compaction Using Redundancy Removal," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M98/44, 1998. • R. Ranjan, V. Singhal, F. Somenzi, and R. K. Brayton, "On the Optimization Power of Retiming and Resynthesis Transformations," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M98/26, 1998. • S. Tasiran, S. Khatri, S. Yovine, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Accurate Timing Analysis in the Presence of Cross-Talk Using Timed Automata," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M98/25, 1998. • S. Khatri, A. Mehrotra, R. K. Brayton, R. Otten, and A. L. Sangiovanni-Vincentelli, "A Noise-Immune VLSI Layout Methodology with Highly Predictable Parasitics," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M98/24, 1998. • R. Ranjan, V. Singhal, F. Somenzi, and R. K. Brayton, "Using Combinational Verification for Sequential Circuits," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M97/77, 1997. • Y. Kukimoto, R. K. Brayton, and P. Sawkar, "Delay-Optimal Technology Mapping by DAG Covering," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M97/75, 1997. • T. Shiple, R. Ranjan, A. L. Sangiovanni-Vincentelli, and R. K. Brayton, "Deciding State Reachability for Large FSMs," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M97/73, 1997. • R. Hojati, A. Isles, and R. K.
Brayton, "Automatic State Reduction Techniques for Hardware Systems Modeled Using Uninterpreted Functions and Infinite Memory," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M97/53, 1997. • Y. Kukimoto and R. K. Brayton, "Exact Required Time Analysis via False Path Detection," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M97/44, 1997. • R. Alur, R. K. Brayton, T. A. Henzinger, S. Qadeer, and S. Rajamani, "Partial-Order Reduction in Symbolic State Space Exploration," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M97/30, 1997. • A. Narayan, A. Isles, J. Jain, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Reachability Analysis Using Partitioned-ROBDDs," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M97/27, 1997. • T. Kam, T. Villa, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Multi-Valued Decision Diagrams for Logic Synthesis and Verification," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M96/75, 1996. • E. Goldberg, T. Villa, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Theory and Algorithms for Face Hypercube Embedding," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M96/74, 1996. • T. Kitahara and R. K. Brayton, "Low Power Synthesis via Transparent Latches and Observability Don't Cares," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M96/64, 1996. • L. Carloni, T. Villa, T. Kam, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Generation of a Minimal STG from an Implicit Cover," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M96/40, 1996. • G. Swamy, S. Rajamani, C. Lennard, and R. K. Brayton, "Minimal Logic Re-Synthesis," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M96/22, 1996. • S. Edwards, G. Swamy, and R. K.
Brayton, "Identifying Common Substructure for Incremental Methods," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M96/21, 1996. • T. Villa, T. Kam, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "State Minimization of FSM's with Implicit Techniques," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M96/17, 1996. • J. Sanghavi, R. Ranjan, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Binary Decision Diagrams on Network of Workstations," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M96/9, 1996. • T. Villa, A. Saldanha, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Symbolic Two-Level Minimization," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/109, 1995. • T. Villa, T. Kam, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Explicit and Implicit Algorithms for Binate Covering Problems," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/108, 1995. • T. Kam, T. Villa, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Theory and Algorithms for State Minimization of Non-Deterministic FSM's," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/107, 1995. • T. Kam, T. Villa, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Implicit Computation of Compatible Sets for State Minimization of ISFSM's," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/106, 1995. • The VIS Group, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "VIS: A System for Verification and Synthesis," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/104, 1995. • A. Narayan, S. Khatri, J. Jain, M. Fujita, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Overcoming Memory Constraints in ROBDD Construction by Functional Decomposition and Partitioning," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/91, 1995. • R. Ranjan, J. Sanghavi, R. K. Brayton, and A. L.
Sangiovanni-Vincentelli, "High Performance BDD Package Based on Exploiting Memory Hierarchy," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/81, 1995. • S. Cheng and R. K. Brayton, "Decomposition of Multi-Phase Timed Finite State Machines," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/67, 1995. • H. Wang and R. K. Brayton, "Multi-Level Optimization of FSM Networks," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/66, 1995. • A. Narayan, S. Khatri, J. Jain, M. Fujita, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Compositional Techniques for Mixed Bottom-Up/Top-Down Constructions of ROBDDs," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/51, 1995. [abstract] • S. Khatri, A. Narayan, S. Krishnan, K. McMillan, A. L. Sangiovanni-Vincentelli, and R. K. Brayton, "An Engineering Change Methodology Using Simulation Relations," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/50, 1995. [abstract] • H. Wang and R. K. Brayton, "Logic Optimization of FSM Networks Using Input Don't Care Sequences," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/42, 1995. • J. Jain, A. Narayan, C. Coelho, S. Khatri, A. L. Sangiovanni-Vincentelli, R. K. Brayton, and M. Fujita, "Combining Top-Down and Bottom-Up Approaches for ROBDD Construction," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/30, 1995. [abstract] • T. Kam, T. Villa, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Implicit State Minimization of Non-Deterministic FSM's," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/18, 1995. • F. Balarin, R. K. Brayton, S. Cheng, D. Kirkpatrick, A. L. Sangiovanni-Vincentelli, and E. Wu, "A Methodology for Formal Verification of Real-Time Systems," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/11, 1995. • V. Singhal, C. Pixley, A.
Aziz, and R. K. Brayton, "Delaying Safeness for More Flexibility," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M95/5, 1995. • R. Ranjan, A. Aziz, R. K. Brayton, B. Plessier, and C. Pixley, "Efficient Formal Design Verification: Data Structure + Algorithm," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M94/100, 1994. • R. Ranjan and R. K. Brayton, "A User Friendly Environment for Property Specification," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M94/99, 1994. • A. Aziz and R. K. Brayton, "Synthesizing Interacting Finite State Machines," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M94/96, 1994. • V. Singhal, C. Pixley, R. Rudell, and R. K. Brayton, "The Validity of Retiming Sequential Circuits," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M94/79, 1994. • A. Aziz, T. Shiple, V. Singhal, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Formula-Dependent Equivalence for Compositional CTL Model Checking," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M94/78, 1994. • G. Swamy and R. K. Brayton, "Incremental Formal Design Verification," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M94/76, 1994. • N. Ishiura and R. K. Brayton, "A Comparative Approach to Processor Verification Using Symbolic Model Checking," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M94/59, 1994. • E. Sentovich and R. K. Brayton, "An Exact Optimization of Two-Level Acyclic Sequential Circuits," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M94/48, 1994. • S. Cheng and R. K. Brayton, "Compiling Verilog into Automata," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M94/37, 1994. • C. Pixley, V. Singhal, A. Aziz, and R. K. Brayton, "Multi-Level Synthesis for Safe Replaceability," EECS Department, University of California, Berkeley, Tech. Rep.
UCB/ERL M94/31, 1994. • H. Wang and R. K. Brayton, "Permissible Observability Relations in FSM Networks," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M94/15, 1994. • R. Hojati, V. Singhal, and R. K. Brayton, "Edge-Streett/Edge-Rabin Automata Environment for Formal Verification Using Language Containment," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M94/12, 1994. [abstract] • R. Hojati, S. Krishnan, and R. K. Brayton, "Heuristic Algorithms for Early Quantification and Partial Product Minimization," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M94/11, 1994. [abstract] • C. Wawrukiewicz, A. Saldanha, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Sequential Test Pattern Generation: Using Implicit STG Traversal Techniques to Generate Tests and Identify Redundancies in Sequential Circuits," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M94/4, 1994. [abstract] • T. Shiple, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Computing Boolean Expressions with OBDDs," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M93/84, 1993. [abstract] • T. Kam, T. Villa, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "A Fully Implicit Algorithm for Exact State Minimization," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M93/79, 1993. [abstract] • A. Aziz, S. Tasiran, and R. K. Brayton, "BDD Variable Ordering for Interacting Finite State Machines," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M93/71, 1993. [abstract] • A. Aziz, V. Singhal, G. Swamy, and R. K. Brayton, "Minimizing Interacting Finite State Machines," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M93/68, 1993. [abstract] • H. Wang and R. K. Brayton, "Input Don't Care Sequences in FSM Networks," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M93/64, 1993. [abstract] • Y. Watanabe and R. K.
Brayton, "The Maximum Set of Permissible Behaviors for FSM Networks," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M93/61, 1993. [abstract] • T. Kam, T. Villa, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Implicit Generation of Compatibles for Exact State Minimization," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M93/60, 1993. [abstract] • M. Sekine, T. Villa, K. Goto, and R. K. Brayton, "A New Approach for the Synthesis of FSM's from Control-Flow Graphs," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M93/59, 1993. [abstract] • T. Shiple, R. Hojati, A. L. Sangiovanni-Vincentelli, and R. K. Brayton, "Heuristic Minimization of BDDs Using Don't Cares," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M93/58, 1993. [abstract] • A. Aziz and R. K. Brayton, "Verifying Interacting Finite State Machines," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M93/52, 1993. [abstract] • W. Lam, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Exact Minimum Delay Computation and Clock Frequencies," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M93/40, 1993. [abstract] • P. Stephan and R. K. Brayton, "Physically Realizable Gate Models," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M93/33, 1993. [abstract] • V. Singhal, Y. Watanabe, and R. K. Brayton, "Heuristic Minimization for Synchronous Relations," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M93/30, 1993. [abstract] • W. Lam, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Circuit Delay Models and Their Exact Computation Using Timed Boolean Functions," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M93/6, 1993. [abstract] • G. Swamy, P. McGeer, and R. K. Brayton, "A Fully Implicit Quine-McCluskey Procedure Using BDD's," EECS Department, University of California, Berkeley, Tech. Rep.
UCB/ERL M92/127, 1992. • W. Lam, A. Saldanha, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Delay Fault Coverage, Test Set Size, and Performance Tradeoffs," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M92/119, 1992. • P. Stephan, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Combinational Test Generation Using Satisfiability," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M92/112, 1992. • N. Shenoy, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Graph Algorithms for Efficient Clock Schedule Optimization," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M92/79, 1992. • P. McGeer, A. Saldanha, P. Stephan, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Delay Models and Sensitization Criteria in the False Path Problem," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M92/63, 1992. • W. Lam and R. K. Brayton, "Verification with Timed Automata," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M92/58, 1992. • W. Lam, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Exact Delay Computation with Timed Boolean Functions," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M92/57, 1992. • W. Lam, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Minimum Cycle Time of Synchronous Circuit with Bounded Delays," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M92/56, 1992. • M. Chiodo, T. Shiple, A. L. Sangiovanni-Vincentelli, and R. K. Brayton, "Automatic Reduction in CTL Compositional Model Checking," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M92/55, 1992. • E. Sentovich, K. Singh, L. Lavagno, C. Moon, R. Murgai, A. Saldanha, H. Savoj, P. Stephan, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "SIS: A System for Sequential Circuit Synthesis," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M92/41, 1992. [abstract] • L. Lavagno, C.
Moon, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "A Novel Framework for Solving the State Assignment Problem for Event-Based Specifications," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M92/19, 1992. • H. Savoj, M. Silva, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Boolean Matching in Logic Synthesis," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M92/15, 1992. • R. K. Brayton, M. Chiodo, R. Hojati, T. Kam, K. Kodandapani, R. Kurshan, S. Malik, A. L. Sangiovanni-Vincentelli, E. Sentovich, T. Shiple, K. Singh, and H. Wang, "BLIF-MV: An Interchange Format for Design Verification and Synthesis," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M91/97, 1991. • C. Moon, P. Stephan, and R. K. Brayton, "Specification, Synthesis and Verification of Hazard-Free Asynchronous Circuits," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M91/67, 1991. • E. Sentovich and R. K. Brayton, "Preserving Don't Care Conditions During Retiming," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M91/2, 1991. • T. Kam and R. K. Brayton, "Multi-Valued Decision Diagrams," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M90/125, 1990. • A. Saldanha, T. Villa, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "A Framework for Satisfying Input and Output Encoding Constraints," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M90/110, 1990. • Y. Watanabe and R. K. Brayton, "Incremental Synthesis for 'Engineering Changes'," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M90/76, 1990. • L. Lavagno, S. Malik, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "MIS-MV: Optimization of Multi-Level Logic with Multiple-Valued Inputs," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M90/68, 1990. • P. McGeer and R. K.
Brayton, "Consistency and Observability Invariance in Multi-Level Logic Synthesis," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M89/88, 1989. • R. McGeer, R. K. Brayton, R. Rudell, and A. L. Sangiovanni-Vincentelli, "Extended Stuck-Fault Testability for Combinational Networks," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M89/87, 1989. • S. Malik, E. Sentovich, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Retiming and Resynthesis: Optimizing Sequential Networks with Combinational Techniques," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M89/28, 1989. • S. Malik, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Encoding Symbolic Inputs for Multi-Level Logic Implementation," EECS Department, University of California, Berkeley, Tech. Rep. UCB/ERL M88/69, 1988. Patents • R. K. Brayton, "Character recognition system and method multi-bit curve vector processing," U.S. Patent 4,177,448. Dec. 1979. • R. K. Brayton, F. G. Gustavson, and G. D. Hachtel, "Tableau network design system," U.S. Patent 3,705,409. Dec. 1972. Talks or presentations • A. Tabbara, R. K. Brayton, and A. R. Newton, "Retiming for DSM with area-delay trade-offs and delay constraints," presented at Intl. Workshop on Timing Issues in the Specification and Synthesis of Digital Systems (TAU '99), Monterey, CA, March 1999.
Homework Help
Posted by Joe on Sunday, January 31, 2010 at 7:25pm.

Find the indicated outputs for f(x) = 5x^2 - 5x.
f(0) = 0
f(-1) = 1
f(2) = 45
Is this right?
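As a quick check of the arithmetic (a short Python sketch of my own, not part of the original post), evaluating f(x) = 5x^2 - 5x at the three inputs confirms f(0) = 0, but gives f(-1) = 5(-1)^2 - 5(-1) = 5 + 5 = 10 and f(2) = 5(4) - 5(2) = 10, not the 1 and 45 proposed in the question:

```python
def f(x):
    # f(x) = 5x^2 - 5x
    return 5 * x**2 - 5 * x

print(f(0), f(-1), f(2))  # prints: 0 10 10
```

A common slip here is forgetting that (-1)^2 = +1, so both terms of f(-1) come out positive.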
Alto, GA Statistics Tutor Find an Alto, GA Statistics Tutor ...I played tennis in high school. I played and coached privately during my college years. I was trained to coach softball as a Physical Education major at James Madison University. 29 Subjects: including statistics, reading, chemistry, physics ...I have had the pleasure of working in several different industries in my career. I currently work at Cottrell making Car-Hauling Trailers in Gainesville. I have worked with students that I was referred to by family and friends and have also tutored at several different tutoring companies, including Ava White Tutorials and Breneau Academy. 12 Subjects: including statistics, calculus, trigonometry, SAT math ...I program using Access' Jet SQL as opposed to using some of the drop-down menus as I think it affords the user greater flexibility, control and transferability. I worked as a database developer (not administrator) for a government agency for about 3.5 years. I developed large relational databases, created reports and data management routines using Oracle. 17 Subjects: including statistics, reading, writing, ESL/ESOL ...I hold a certificate from the state of Ga to be a Para Pro. I have also gone through continuing education to teach children with Dyslexia. I just graduated in June of 2013 with my Associates of Applied Science degree in Early Childhood Care and Education. 10 Subjects: including statistics, ESL/ESOL, grammar, dyslexia ...I have 2 years of college level teaching experience in mathematics. I have taught one section of Precalculus and two sections of Calculus for non-STEM majors at UGA. In addition, I have taught two sections of Elementary Statistics and one section of Precalculus at Piedmont College.
20 Subjects: including statistics, calculus, geometry, Chinese Related Alto, GA Tutors Alto, GA Accounting Tutors Alto, GA ACT Tutors Alto, GA Algebra Tutors Alto, GA Algebra 2 Tutors Alto, GA Calculus Tutors Alto, GA Geometry Tutors Alto, GA Math Tutors Alto, GA Prealgebra Tutors Alto, GA Precalculus Tutors Alto, GA SAT Tutors Alto, GA SAT Math Tutors Alto, GA Science Tutors Alto, GA Statistics Tutors Alto, GA Trigonometry Tutors Nearby Cities With statistics Tutor Baldwin, GA statistics Tutors Berkeley Lake, GA statistics Tutors Chamblee, GA statistics Tutors Clarkesville statistics Tutors Clermont, GA statistics Tutors Cornelia statistics Tutors Cumming, GA statistics Tutors Demorest statistics Tutors Gillsville statistics Tutors Jefferson, GA statistics Tutors Lula, GA statistics Tutors Mount Airy, GA statistics Tutors Oakwood, GA statistics Tutors Sugar Hill, GA statistics Tutors Toccoa Falls statistics Tutors
{"url":"http://www.purplemath.com/Alto_GA_Statistics_tutors.php","timestamp":"2014-04-16T07:19:09Z","content_type":null,"content_length":"23924","record_id":"<urn:uuid:f04bdd5f-d580-4868-a036-d742ed4cb618>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Question on consecutive integers with similar prime factorizations

Suppose that $n=\prod_{i=1}^{k} p_i^{e_i}$ and $m=\prod_{i=1}^{l} q_i^{f_i}$ are prime factorizations of two positive integers $n$ and $m$, with the primes permuted so that $e_1 \le e_2 \le \cdots \le e_k$ and $f_1 \le f_2 \le \cdots \le f_l$. Then if $k=l$ and $e_i=f_i$ for all $i$, we say that $n$ and $m$ are factorially equivalent. In other words, two integers are factorially equivalent if their prime signatures are identical. In particular, $d(n)=d(m)$ if the two are factorially equivalent. There's a question I've had for a long time, which is: Are there infinitely many integers $n$ such that $n$ is factorially equivalent to $n+1$? There are numerous curious pairs of consecutive integers for which this holds: $(2,3)$, $(14,15)$, $(21,22)$, $(33,34)$, $(34,35)$, $(38,39)$, $(44,45)$, as well as $(98,99)$, and many more. As you can see, many of them are almost-primes, but the last two pairs are quite striking. Although there are so many of them, a proof that there are infinitely many such pairs seems elusive. Has anyone made any progress on (or even asked) such a question? Does anyone here have a solution or progress for this?

Edit: As an added bonus, the $k$th such $n$, as a function of $k$, seems almost linear. It would be interesting to express and prove an asymptotic formula for this. Can anyone guess heuristically what the slope of this line is? What I'll add, though I'd like to keep my question focused on the above, is that there are many other questions you can ask: How many integers $n$ are there such that $n$ is factorially equivalent to $n^2+1$, or $n^4+5n+3$, or $2^n+1$? You can generate an almost unending list of seemingly uncrackable number-theoretic conjectures this way. Many of these questions seem to relate to other well-known number theoretic conjectures.
The Twin Prime Conjecture would imply that there are infinitely many $n$ such that $n$ is factorially equivalent to $n+2$. The truth of my question above would imply that there are infinitely many $n$ such that $d(n)=d(n+1)$, a result which has actually been proven, so my conjecture is a strengthening of it. Furthermore, the proof of the infinitude of Mersenne primes would prove the infinitude of $n$ factorially equivalent to $2^n-1$. But beyond all these connections to well-known conjectures, I think the question above and its generalizations are aesthetically interesting.

analytic-number-theory nt.number-theory prime-numbers open-problem

I can't answer your question, but note that your sequence is Sloane's A052213. [1] oeis.org/classic/A052213 – Charles Jul 18 '10 at 22:24

Thanks! I didn't even know about this. – David Corwin Jul 18 '10 at 23:53

You have a triple, (33, 34, 35). It seems you could cut down considerably on frequency by considering triples or quadruples. Is it known whether there are infinitely many triples with $$d(n) = d(n+1) = d(n+2),$$ or that there are not infinitely many? – Will Jagy Jul 19 '10 at 0:29

Can you conclude much from small numbers? The product of the first four primes is 210. Below that you have only a handful of signatures, and some adjacencies are to be expected. – Charles Matthews Jul 19 '10 at 7:30

Yes, parity seems to give something here. The other effect worth thinking through is the number of signatures, given that the number of possible signatures for integers of size N is apparently the partition function summed up to log N. We certainly know the average order of the partition function. So (this currently looks a bit crude, since small primes are not dealt with) what do we expect from random adjacencies of the same partition? – Charles Matthews Jul 19 '10 at 10:04

6 Answers

Accepted answer: I'm coming into this late, but am wondering why no one seems to have mentioned the results of Goldston, Graham, Pintz and Yildirim:
add comment It turns out that there are consecutive integers $7^3y^2$ and $2^3x^2$ where are each a product of 9 primes according to http://www.alpertron.com.ar/ECM.HTM , I could check the purported factors in MAPLE but haven't) The method in brief: Find a prime p with $p^3-1$ or $p^3+1$ of the form $qz^2$ (with q a prime) then the Pell equation $p^3u^2-qv^2=\pm 1$ has solutions and the values of $v \mod q$ are periodic. If $v$ is ever a multiple of $q$ then there is a sequence of solutions to $p^3x^2-q^3y^2=\pm1$ If $x,y$ ever have the same signature, there's the example (as long as y is prime to q and x to p) . up vote 5 down vote For the example above: There are solutions to $7b^2+1=8a^2$ the first two are $a_0=1,b_0=1$ and $a_1=31,b_1=29$ with,as easy to find, $b_n=16a_{n-1}+15b_{n-1}$ and $a_n=14a_{n-1}+15b_{n-1}$. When $n \equiv 3 \mod 7$ then $b_n \equiv 0 \mod 7$ After misses at n=3,7,10,17,31,38,45,52 (and ignoring $n=24 \mod 49$ where $b_n$ divides by 49) success at n=59. $27a^2-2b^2=1$ is never $27x^2-8y^2=1$ since b is odd but $7\cdot a^2-3^3 \cdot b^2=1$ is $7^3x^2-3^3y^2=1$ every 7th time. add comment Sorry, I can't post comments yet (maybe never:))! I wrote a program to find such numbers in the intervals [2,4000), [10^6,10^6+4000), [10^9,10^9+4000), [10^12,10^12+4000), and [10^15,10^15+4000) up vote 4 down vote Here are the numbers and signatures: http://pastebin.com/piMZNQKx Very interesting - notice that for smaller values, there are fewer prime signatures with lots of prime factors, but many of them start showing up as you go higher. When you get high enough, semiprimes seem to be almost non-existent. I also haven't seen a single instance in which more than one prime factor has an exponent greater than $1$. In what language did you write this? Could you possibly provide source code? 
– David Corwin Jul 21 '10 at 9:22 @Davidac897 ("I also haven't seen a single instance in which more than one prime factor has an exponent greater than 1"): just searching for the string "2," in Tom's list turns up two such instances: 1000001456 {1,1,2,4} and 1000000000475 {1,2,2}. – TonyK Jul 21 '10 at 10:38 @Davidac897: C++, simple source code is here: pastebin.com/UkJkspJN – Tom Sirgedas Jul 22 '10 at 2:00 Every prime signature in the list has a 1 in it. Is that true in general? That is, if two consecutive integers have the same prime signature, must that signature contain a 1? – Ken Fan Jul 23 '10 at 6:29 According to the most recent answer, the answer is no. – David Corwin Aug 11 '10 at 23:06 add comment This question is directly related to when $d(n)=d(n+1)$ where $d(n)$ denotes the divisor function. Solutions to $d(n)=d(n+1)$: In 1952, Erdos and Mirsky conjectured that $d(n)=d(n+1)$ has infinitely many solutions. In 1984, Heath Brown proved this result, and gave a lower bound on the counting function. Let $\ widetilde{D}(x)$ denote the number of $n\leq x$ satisfying $d(n)=d(n+1)$. Heath Brown showed that $$\widetilde{D}(x)\gg \frac{x}{(\log x)^7}.$$ In 1987 Erdős, Pomerance and Sárközy gave the upper bound $$\widetilde{D}(x)\ll \frac{x}{(\log \log x)^\frac{1}{2}}.$$ Later that year, Hildebrand improved Heath Browns Result that $$\widetilde{D}(x)\gg \frac{x}{(\log \log x)^3},$$ showing that the correct magnitude involves a doubly logarithmic factor. up vote 3 Consecutive integers with identical prime signature: down vote Let $\widetilde{\mathcal{P}}(x)$ denote the number of integers $n\leq x$ such that $n$ and $n+1$ have the same prime signature. Then $\widetilde{\mathcal{P}}(x)\leq \widetilde{D}(x)$, and so Erdős, Pomerance and Sárközy result immediately implies that $$\widetilde{\mathcal{P}}(x)\ll \frac{x}{(\log \log x)^\frac{1}{2}}.$$ This means that the counting function is not linear even though the graph resembles a straight line. 
($\log \log x$ grows extremely slowly, and is nearly unnoticeable) Since $d(n)=d(n+1)$ "often" implies that $n$ and $n+1$ have the same signature, it seems likely that one could use Hildebrands lower bound to prove that the set of consecutive integers with identical prime signature is infinite. Bounding the number of times we have $d(n)= d(n+1)$, yet difference signatures, seems like a fruitful approach. Some References: (Chronological Ordering) add comment Very rough heuristic of the $k$th such $n$ (prime signature twin) as a function of $k$: 1st assumption: The dominant effect is given by primes of the signature $(1,1)$, i.e. by semiprimes. 2nd assumption: The number of semiprimes below $n$ is given by $\pi_2(n) \sim \frac{n}{\ln n} \ln \ln n$, cf. http://en.wikipedia.org/wiki/Almost_prime Then the probability density of the semiprimes is approximately given by $f_2(n) \sim \frac{\ln \ln n}{\ln n}$. 3rd assumption: The semiprimes are independently distributed. up vote 2 down vote Then the number of semiprime twins below $N$ is given by $\int^{N} f_2(x)^2 dx$. Thus the number of prime signature twins is rougly given by $(\frac{\ln \ln n}{\ln n})^2$ (a better value or an asymptotic formula can be obtained by evaluating the integral.) Thus the $k$th prime signature twin is roughly given byh $n = k \cdot (\frac{\ln k}{\ln \ln k})^2$. For $k = 200$ (as in the figure cited by the OP) this gives approximately a slope of 8,8, the same order of magnitude as in the figure. Of course this very rough calculation can be improved in various ways. 2 Why should semiprimes be dominant? – Michael Lugo Jul 21 '10 at 3:32 add comment Not the answer you're looking for? Browse other questions tagged analytic-number-theory nt.number-theory prime-numbers open-problem or ask your own question.
{"url":"http://mathoverflow.net/questions/32412/question-on-consecutive-integers-with-similar-prime-factorizations/111922","timestamp":"2014-04-19T12:40:38Z","content_type":null,"content_length":"91136","record_id":"<urn:uuid:3f9afae2-8b29-4703-a6f2-b2b64277b6b5>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Seybold, FL Algebra 2 Tutor

Find a Seybold, FL Algebra 2 Tutor

...While much to most of the work I do with my students is substantive and academic (test prep, homework, re-teaching and clarifying concepts, academic content skill-building), at least some of the work I do with some of my private students is in the areas of organization, attention, and motivation...
61 Subjects: including algebra 2, English, Spanish, reading

...I have also taken genetics recently, as I am a medical student. I was a chemistry major in college, and took organic chemistry my freshman year, receiving A-range grades for both semesters. I also tutored organic chemistry for the 3 years following that.
32 Subjects: including algebra 2, chemistry, calculus, physics

...In the past I have tutored students ranging from elementary school to college in a variety of topics including FCAT preparation, Biology, Anatomy, Math and Spanish. I enjoy teaching and helping others and always do my best to make sure the information is enjoyable and being presented effectively...
30 Subjects: including algebra 2, reading, biology, algebra 1

Hello everyone, my name is Becky. I have worked as a private tutor for 5 years in a variety of subjects. I am very patient and believe in teaching by example.
30 Subjects: including algebra 2, chemistry, English, geometry

...It is my goal to work with kids who have gotten behind and bring them up to speed. For those students who are already good at math but want to stay on top of it, my goal is to make math exciting and always stress to them how important a subject it is, so they stay on top of it. I truly believe...
4 Subjects: including algebra 2, algebra 1, trigonometry, prealgebra
{"url":"http://www.purplemath.com/Seybold_FL_algebra_2_tutors.php","timestamp":"2014-04-16T13:34:58Z","content_type":null,"content_length":"24048","record_id":"<urn:uuid:1f8e2c54-9c29-4672-8823-7ef284c5eca3>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Contents: Introduction · Study Area and Data · Methodology (At-Sensor Radiance Modeling; Identification of Shaded Ground Samples; Modeling of the Shape Factor F; Estimation of Surface Reflectance Using RCA Samples) · Results and Discussion (Calculation of cos σi; Correlation Map of Shaded Ground Samples; Assessing Estimates of Surface Reflectance) · Conclusions · References · Figures and Tables

Introduction

Remote sensing images have been widely used for earth surface monitoring applications such as landslide site identification, land use/land cover (LULC) classification and change detection, crop yield estimation, and reservoir and coastal water quality monitoring. Radiometric corrections such as dark object subtraction (DOS) are often conducted prior to LULC classification and change detection [1–4]. However, radiances received at the sensor are affected by the atmosphere as well as by properties (such as reflectance, slope and aspect) of the terrain surface. As a result, atmospheric correction alone does not yield output images that truly reflect terrain surface properties, namely the bidirectional reflectance factor (BRF) of objects on the earth surface. Although band-ratio images such as the normalized difference vegetation index (NDVI) and other vegetation indices have been used to alleviate topographic effects [5–8], these images do not directly link to properties of the terrain surface, and their usefulness for further applications is empirically based. Ideally, we need to use features or properties that are reflective of earth surface conditions and free of topographic and atmospheric effects for remote sensing applications of earth surface monitoring. One such essential feature in the optical spectral range is the BRF, which is considered to be an inherent property of any earth surface object. Theory and models of radiometric propagation from the sun to the sensor, which are essential for remote sensing image processing, have been well developed.
For example, Slater [9] developed an optical model which describes the propagation of solar radiances, in both the visible and infrared wavelength ranges, from the sun to the sensor through different paths. Schott [10] provided a theoretical derivation of the radiances in the visible and thermal spectral ranges reaching the sensor. Liang et al. [11] developed an atmospheric correction algorithm for Landsat ETM+ images. The algorithm identifies surface clusters in bands that are less contaminated by atmospheric particles; the mean reflectance of each cluster in both clear and hazy regions within the scene is then matched, which allows determination of the path radiance. Forster [12] demonstrated an application of calculating reflectances of surface objects by taking measurements of atmospheric parameters, diffuse irradiance, and path radiance. However, for most remote sensing applications such measurements may not be available, making it difficult to estimate reflectances of earth surface features from remote sensing images. Since radiances leaving the object surface are affected by the atmosphere and surrounding topographic features prior to being received at the sensor, these effects (in the form of path radiance and shape factors) must be taken into account in the estimation of surface reflectance. Methods of in-scene estimation of path radiances have been proposed, with the dark object subtraction (DOS) method being the most widely applied [13–15]. However, the DOS method tends to overestimate path radiances in applications for which the assumption of near-zero reflectance of the dark objects is not valid [10,16]. Switzer et al. [17] developed a covariance matrix method which utilizes the correlation between multispectral bands of data simultaneously. The covariance matrix method does not require auxiliary data, but operates solely upon the digital numbers of satellite images. Mueller et al.
[18] proposed a new retrieval method for satellite-based spectrally resolved surface irradiance, with emphasis on the visible and near-infrared (VIS/NIR) region of the spectrum. In contrast to the above in-scene estimation methods, Cheng et al. [16] proposed using in situ measurements of surface reflectances from radiometric control areas (RCAs) to improve path radiance estimation. Viggh et al. [19] proposed an approach using prior spatial and spectral information about the surface reflectance for reflectance estimation of remote sensing images. Wen et al. [20] developed an algorithm for surface reflectance estimation from Landsat Thematic Mapper (TM) data over rugged terrain using the bidirectional reflectance distribution function (BRDF) and a radiative transfer model. Following the concept of the RCA-based path radiance estimation method, we propose in this study a statistical approach for surface reflectance estimation utilizing digital elevation model (DEM) data.

Study Area and Data

An area of approximately 750 km^2 in northern Taiwan was chosen for this study (Figure 1). Terrain elevation in the area varies from 64 m to 2,284 m above mean sea level. It encompasses different landcover types including forest, bare land, orchard plantations, suburban built-up areas, and water bodies (reservoir pools and rivers). The reservoir pool stretches roughly in the east–west direction near the northwestern corner of the study area. A major river and its tributaries flow from south to north across the center of the study area before entering the reservoir pool. Apart from a relatively small portion in the most northwestern corner, in which a suburban township exists, most of the study area is dominated by forest landcover. There are also a few orchard plantations and bare land parcels scattered over the study area.
A set of Formosat-II multispectral images (blue band: 450–520 nm; green band: 520–600 nm; red band: 630–690 nm; near-infrared band: 760–900 nm; 8 m spatial resolution) of the study area acquired at 01:57 GMT (9:57 a.m. local time) on December 11, 2008 was collected (Figure 1(a)). The sun and view angles were 54.925° and 14.086°, respectively, while the azimuth angles of the satellite and the sun were 325.004° and 148.641°, respectively. DEM data of the study area (see Figure 1(b)) with a 40-m spatial resolution were also collected and used for topographic effect modeling. The elevation error of the 40-m DEM data has a mean of approximately 1 meter and a standard deviation of 4 to 8 meters in mountainous regions [21].

Methodology: At-Sensor Radiance Modeling

For a target of Lambertian surface, the at-sensor solar radiance of spectral wavelength λ, i.e., L_{sλ}, can be expressed by [10,16]

$$L_{s\lambda}(\theta,\phi) = L_{p\lambda} + \frac{r_{d\lambda}}{\pi}\left(E_{o\lambda}\cos\sigma_i\,\tau_{1\lambda} + F\,E_{d\lambda}\right)\tau_{2\lambda} \qquad (1)$$

where
(θ, ϕ) = zenith and azimuth angles of the target–sun direction, respectively,
L_{pλ} = path radiance,
E_{oλ} = exoatmospheric solar irradiance with respect to spectral wavelength λ,
E_{dλ} = downwelled irradiance,
r_{dλ} = diffuse reflectance of the Lambertian surface,
τ_{1λ} = transmittance along the sun–target direction,
τ_{2λ} = transmittance along the target–sensor direction,
F = shape factor due to obstruction by terrain slope or adjacent objects,
σ_i = incidence angle of the solar irradiance at the target.
Radiances reaching the sensor are recorded and linearly converted to digital numbers (DNs) using the band-specific gain and offset parameters of the sensor:

$$L_{s\lambda} = \mathrm{offset}(\lambda) + \mathrm{Gain}(\lambda)\times DN_{s\lambda} \qquad (2)$$

Thus, the equation of radiometric propagation (Equation (1)) can be rewritten as

$$DN_{s\lambda} = DN_{p\lambda} + r_{d\lambda}\,k_{1\lambda}\cos\sigma_i + r_{d\lambda}\,F\,k_{2\lambda} \qquad (3)$$

where

$$DN_{p\lambda} = \frac{L_{p\lambda} - \mathrm{offset}(\lambda)}{\mathrm{Gain}(\lambda)} \qquad (4)$$

$$k_{1\lambda} = \frac{E_{o\lambda}}{\pi\,\mathrm{Gain}(\lambda)}\,\tau_{1\lambda}\tau_{2\lambda}, \qquad k_{2\lambda} = \frac{E_{d\lambda}}{\pi\,\mathrm{Gain}(\lambda)}\,\tau_{2\lambda} \qquad (5)$$

The dependence of DN_{pλ}, r_{dλ}, k_{1λ}, and k_{2λ} on the wavelength λ, which can be considered the central wavelength of a spectral band, indicates that these parameters are band-specific. For the Formosat-II images used in this study, the offset parameter was zero for all spectral bands, whereas the gain parameter was 0.3441, 0.3561, 0.2553 and 0.3062 (W·μm⁻¹·m⁻²·sr⁻¹) for the blue, green, red and near-infrared bands, respectively. For local remote sensing applications which do not cover extensively wide study areas, the exoatmospheric solar irradiance, downwelled irradiance, path radiance, and atmospheric transmittances can all be assumed to be spatially homogeneous. In other words, DN_{pλ}, k_{1λ} and k_{2λ} can all be viewed as constants for all ground samples. On the other hand, F and cos σ_i represent the topographic characteristics of individual ground samples and are the sources of topographic effects. Cheng et al. [16] proposed a radiometric control areas (RCAs) approach for path radiance estimation. In their study, a radiometric control area is a horizontal and unobstructed area with spatially homogeneous and temporally stationary land surface conditions. By considering only ground samples in radiometric control areas, the at-sensor radiance can be expressed as a linear regression function of surface reflectance, with the intercept being the path radiance. Thus, using in situ measurements of surface reflectance in the radiometric control areas, the path radiance of the study area can be estimated by solving the linear equation.
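As a concrete illustration of the linear DN–radiance conversion, the band-specific gains and zero offsets quoted above can be written out directly. This is only a sketch; the dictionary keys and function names are ours, not part of any Formosat-II toolchain:

```python
import numpy as np

# Band-specific gains (W·um^-1·m^-2·sr^-1 per DN) for this Formosat-II scene;
# offsets are zero for all four bands (values quoted in the text).
GAIN = {"blue": 0.3441, "green": 0.3561, "red": 0.2553, "nir": 0.3062}
OFFSET = {band: 0.0 for band in GAIN}

def dn_to_radiance(dn, band):
    """L = offset(lambda) + Gain(lambda) * DN."""
    return OFFSET[band] + GAIN[band] * np.asarray(dn, dtype=float)

def radiance_to_dn(radiance, band):
    """Inverse conversion, as used to express path radiance in DN units."""
    return (np.asarray(radiance, dtype=float) - OFFSET[band]) / GAIN[band]
```

With these gains, a red-band DN of 100 corresponds to a radiance of 25.53 W·μm⁻¹·m⁻²·sr⁻¹.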
Thus, using in situ measurements of surface reflectance in the radiometric control areas, path radiance of the study area can be estimated by solving the linear equation. The idea of using ground samples in radiometric control areas is to eliminate the topographic effects in Equations (1) and (3). In this study we adopt a similar concept of radiometric control areas for estimation of surface reflectance. Although in general the at-sensor radiance can be expressed by Equation (1), there are situations in which solar irradiance cannot reach the target ground sample and should be dropped from Equation (1). As depicted in Figure 2(a), the target ground sample is located at the leeside of the solar irradiance (σ[i] > 90°, cosσ[i] < 0) and thus receives no incoming solar radiance. Whereas in Figure 2 (b), incoming solar radiation is obstructed by surrounding objects and cannot be received at the target ground sample. Under either situation, digital numbers of these ground samples (hereinafter referred to as the shaded ground samples) should be expressed by the following equation: D N s λ = D N p λ + r d λ k 2 λ FThe shaded ground samples can be identified using DEM data and sun angle of the satellite image. A straightforward algorithm as demonstrated in Figure 3 was used in this study for calculation of cosσ[i] of individual ground samples. From the DEM data, we first calculated the normal vector V⃗[o] of the target ground sample using the four normal vectors (U⃗[i], i = 1, . . . , 4) determined by the target ground sample and its surrounding ground samples, i.e., V o → = ∑ i = 1 4 U → i / | ∑ i = 1 4 U → i |The normal vector in the target-sun direction, i.e., V⃗[s], is then determined by considering the latitude and longitude of the study area and the image acquisition time. Thus, cosσ[i] can be easily obtained by cos σ i = V o → ⋅ V s →Similar to utilization of RCA samples by Cheng et al. 
[16], shaded ground samples eliminate the second term on the right-hand side of Equation (3) and play a crucial role in this study.

Modeling of the Shape Factor F

The shape factor F in Equation (6) represents the proportion of the total downwelled radiance that can reach the target ground sample. If the downwelled radiance is homogeneous over the entire sky dome above the target sample, then the shape factor F represents the ratio of the solid angle subtended at the target ground sample by the downwelled-radiance-contributing sky dome to the solid angle of a hemisphere, i.e., 2π. Thus, it can be obtained by calculating the downwelled-radiance-receiving solid angle using DEM data. However, in reality the downwelled radiance may not be homogeneous at all times, and thus the shape factor depends not only on the downwelled-radiance-receiving solid angle but also on the spatial variation of the downwelled radiance. Since the spatial variation of the downwelled radiance is essentially random, due to the inhomogeneous distribution of atmospheric particles, the shape factor F can therefore be treated as a random variable. Thus, in this study we adopt a statistical approach for estimation of the sample-specific shape factor F, taking into account the ground-sample-specific topographic characteristics. For convenience of subsequent explanation, we first define the notation that will be used later. Consider a group of p × p ground samples with the target ground sample located at the center. The size of this sample group (p × p) is carefully chosen such that its coverage is large enough to account for the obstruction of solar radiation by surrounding objects, as illustrated in Figure 2(b).
Let X be a vector consisting of the shape factor F associated with the target ground sample and the elevation differences (E_i − E_0 = d_i) between the target sample and its neighboring samples, i.e.,

$$X_{(p^2\times 1)} = \begin{bmatrix} F \\ E_1 - E_0 \\ E_2 - E_0 \\ \vdots \\ E_{p^2-1} - E_0 \end{bmatrix} = \begin{bmatrix} F \\ d_1 \\ d_2 \\ \vdots \\ d_{p^2-1} \end{bmatrix} = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix} \qquad (9)$$

The vector partitions in the above equation are self-explanatory, with X_1 = F and X_2 representing the elevation differences (d_i, i = 1, 2, ..., p²−1). Elevations of the target ground sample and its surrounding samples in the sample group are represented by an elevation matrix Elev, which can be expressed as

$$\mathbf{Elev}_{(p\times p)} = \begin{bmatrix} E_1 & E_2 & \cdots & \cdots & \cdots & E_{p-1} & E_p \\ E_{p+1} & E_{p+2} & \cdots & \cdots & \cdots & E_{2p-1} & E_{2p} \\ \vdots & & & \vdots & & & \vdots \\ \cdots & \cdots & E_{(p^2-1)/2} & E_0 & E_{(p^2+1)/2} & \cdots & \cdots \\ \vdots & & & \vdots & & & \vdots \\ E_{p^2-2p} & E_{p^2-2p+1} & \cdots & \cdots & \cdots & E_{p^2-p-2} & E_{p^2-p-1} \\ E_{p^2-p} & E_{p^2-p+1} & \cdots & \cdots & \cdots & E_{p^2-2} & E_{p^2-1} \end{bmatrix} \qquad (10)$$

In this study the value of p was set to 41, so that all neighboring samples that may contribute to the shape factor are included in the 41 × 41 sample group. With a pixel size of 8 meters for Formosat-II multispectral images, a 41 × 41 sample group encompasses an area centered at the target sample and extending at least 160 meters outward in all directions. Spatial variations of the shape factor X_1 and the elevation differences X_2 within the study area can be considered as two random fields with mean vectors and covariance matrices defined as

$$\mu_1 = E(X_1),\quad \mu_2 = E(X_2),\quad \Sigma_{11} = \mathrm{Cov}(X_1, X_1),\quad \Sigma_{22} = \mathrm{Cov}(X_2, X_2) \qquad (11)$$

$$\Sigma_{12} = \Sigma_{21}^{T} = \mathrm{Cov}(X_1, X_2) = \left[\,\mathrm{Cov}(F, d_1)\;\cdots\;\mathrm{Cov}(F, d_{p^2-1})\,\right] = \sqrt{\mathrm{Var}(F)}\left[\,\mathrm{Cor}(F, d_1)\sqrt{\mathrm{Var}(d_1)}\;\cdots\;\mathrm{Cor}(F, d_{p^2-1})\sqrt{\mathrm{Var}(d_{p^2-1})}\,\right] \qquad (12)$$

The shape factor F varies with location and is considered a random variable with a constant expectation μ_1.
For target samples with larger elevation differences between the target sample and its neighboring samples, i.e., larger components of X′_2 = (d_1, d_2, ⋯, d_{p²−1}), we can expect a more significant obstruction effect and lower values of the shape factor. From the fundamental theorem of estimation theory [22], the best linear estimator (in terms of minimum mean-squared error) of F, given the elevation differences X_2 associated with the target ground sample, is the conditional expectation of F given by the following equation:

$$\hat{F} = E(X_1 \mid X_2 = x_2) = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2) \qquad (13)$$

Substituting Equation (12) into Equation (13) yields

$$\hat{F} = \mu_1 + \sqrt{\mathrm{Var}(F)}\left[\,\mathrm{Cor}(F,d_1)\sqrt{\mathrm{Var}(d_1)}\;\cdots\;\mathrm{Cor}(F,d_{p^2-1})\sqrt{\mathrm{Var}(d_{p^2-1})}\,\right]\Sigma_{22}^{-1}(x_2-\mu_2) \qquad (14)$$

Readers are reminded that x_2 and μ_2 in Equations (13) and (14) are vectors representing a realization and the expectation of X_2, respectively. The mean vector and covariance matrix (i.e., μ_2 and Σ_22) of X_2 in Equation (14) can be obtained by calculating the sample mean and sample covariance of X_2 using DEM data from a total of 388,000 ground samples within the study area. With this very large number of ground samples, the estimates of μ_2 and Σ_22 can be expected to be nearly unbiased and to have very small variance, even though X_2 is likely to exhibit spatial dependence. As for Var(d_i) (i = 1, ..., p²−1) in Equation (14), these are the diagonal elements of Σ_22 and thus are readily available once Σ_22 has been obtained. Considering Lambertian targets and isotropic diffuse irradiance, the surface reflectance of the target ground sample r_{dλ} represents an inherent property of the land surface and is independent of the shape factor F and the elevation differences of the neighboring samples d_i (i = 1, ..., p²−1).
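Equation (13) is the standard best-linear-predictor update and can be sketched in a few lines. The numbers below are hypothetical toy values; in the study, μ_2 and Σ_22 come from the sample moments of the 388,000 DEM-derived difference vectors:

```python
import numpy as np

def conditional_shape_factor(x2, mu1, mu2, sigma12, sigma22):
    """Best linear predictor: F_hat = mu1 + Sigma12 Sigma22^{-1} (x2 - mu2)."""
    return mu1 + sigma12 @ np.linalg.solve(sigma22, x2 - mu2)

# Toy example with two neighbors: negative Cov(F, d_i) encodes the intuition
# that higher surrounding terrain (larger d_i) lowers the shape factor.
mu1, mu2 = 0.9, np.zeros(2)
sigma12 = np.array([-0.02, -0.01])   # Cov(F, d_1), Cov(F, d_2)
sigma22 = np.eye(2)                  # covariance of the elevation differences
print(conditional_shape_factor(np.array([3.0, 5.0]), mu1, mu2, sigma12, sigma22))
```

With these toy values the predictor returns 0.9 − 0.02·3 − 0.01·5 = 0.79, i.e., the shape factor drops below its mean when the neighbors stand higher than the target.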
Thus, if only shaded ground samples are considered, we have

$$\begin{aligned}
\mathrm{Cor}(DN_{s\lambda}^{shade}, d_i) &= \frac{E\{[DN_{s\lambda}^{shade} - E(DN_{s\lambda}^{shade})]\,[d_i - E(d_i)]\}}{\sqrt{\mathrm{Var}(DN_{s\lambda}^{shade})}\sqrt{\mathrm{Var}(d_i)}} \\
&= \frac{E\{[DN_{p\lambda} + r_{d\lambda}k_{2\lambda}F - E(DN_{p\lambda} + r_{d\lambda}k_{2\lambda}F)]\,[d_i - E(d_i)]\}}{\sqrt{\mathrm{Var}(DN_{p\lambda} + r_{d\lambda}k_{2\lambda}F)}\sqrt{\mathrm{Var}(d_i)}} \\
&= \frac{E\{(DN_{p\lambda} + r_{d\lambda}k_{2\lambda}F)\,[d_i - E(d_i)]\}}{\sqrt{\mathrm{Var}(r_{d\lambda}k_{2\lambda}F)}\sqrt{\mathrm{Var}(d_i)}} = \frac{E\{r_{d\lambda}k_{2\lambda}F\,[d_i - E(d_i)]\}}{\sqrt{\mathrm{Var}(r_{d\lambda}k_{2\lambda}F)}\sqrt{\mathrm{Var}(d_i)}} \\
&= \frac{k_{2\lambda}\,E\{r_{d\lambda}F\,[d_i - E(d_i)]\}}{k_{2\lambda}\sqrt{\mathrm{Var}(r_{d\lambda}F)}\sqrt{\mathrm{Var}(d_i)}} = \frac{E(r_{d\lambda})\,E\{[F - E(F)]\,[d_i - E(d_i)]\}}{\sqrt{\mathrm{Var}(r_{d\lambda}F)}\sqrt{\mathrm{Var}(d_i)}} \\
&= \frac{E(r_{d\lambda})\,\mathrm{Cov}(F,d_i)}{\sqrt{\mathrm{Var}(r_{d\lambda}F)}\sqrt{\mathrm{Var}(d_i)}} = \frac{E(r_{d\lambda})\sqrt{\mathrm{Var}(F)}\sqrt{\mathrm{Var}(d_i)}\,\mathrm{Cor}(F,d_i)}{\sqrt{\mathrm{Var}(r_{d\lambda}F)}\sqrt{\mathrm{Var}(d_i)}}
\end{aligned}$$

Using the property of the variance of a product of independent random variables [23], we have

$$\mathrm{Var}(r_{d\lambda}F) = \mathrm{Var}(r_{d\lambda})\mathrm{Var}(F) + [E(r_{d\lambda})]^2\mathrm{Var}(F) + [E(F)]^2\mathrm{Var}(r_{d\lambda})$$

and therefore

$$\mathrm{Cor}(DN_{s\lambda}^{shade}, d_i) = \frac{E(r_{d\lambda})\sqrt{\mathrm{Var}(F)}\,\mathrm{Cor}(F,d_i)}{\sqrt{\mathrm{Var}(r_{d\lambda})\mathrm{Var}(F) + [E(r_{d\lambda})]^2\mathrm{Var}(F) + \mu_1^2\mathrm{Var}(r_{d\lambda})}} \quad (i = 1,\ldots,p^2-1) \qquad (15)$$

Thus,

$$\mathrm{Cor}(F, d_i) = \frac{K_\lambda\,\mathrm{Cor}(DN_{s\lambda}^{shade}, d_i)}{\sqrt{\mathrm{Var}(F)}} \quad (i = 1,\ldots,p^2-1) \qquad (16a)$$

$$K_\lambda = \frac{\sqrt{\mathrm{Var}(r_{d\lambda})\mathrm{Var}(F) + [E(r_{d\lambda})]^2\mathrm{Var}(F) + \mu_1^2\mathrm{Var}(r_{d\lambda})}}{E(r_{d\lambda})} \qquad (16b)$$

In the above equations DN^shade_{sλ} indicates digital numbers of shaded ground samples, and K_λ is an unknown constant, since it is completely determined by the distribution parameters of F and r_{dλ}. Thus, substituting Equation (16a) into Equation (14) yields

$$\hat{F} = \mu_1 + K_\lambda D_\lambda^* \qquad (17)$$

$$D_\lambda^* = \left[\,\mathrm{Cor}(DN_{s\lambda}^{shade}, d_1)\sqrt{\mathrm{Var}(d_1)}\;\cdots\;\mathrm{Cor}(DN_{s\lambda}^{shade}, d_{p^2-1})\sqrt{\mathrm{Var}(d_{p^2-1})}\,\right]\Sigma_{22}^{-1}(x_2 - \mu_2) \qquad (18)$$

Shaded ground samples account for approximately one sixth of the total number of ground samples in the study area.
With this very large sample size, and also based on the asymptotic distributional properties of sample correlation coefficients [24], Cor(DN_sλ^shade, d_i) (i = 1, ..., p^2−1) can be estimated with near-zero bias and very small variance. Thus, for every ground sample the sample-specific D*_λ value can be easily calculated using Equation (18). It is also worth noting that although D*_λ and K_λ in Equation (17) are band-specific, the shape factor estimate F̂ is not band-dependent, as can be seen from Equation (14); the band dependency is eliminated in the product term K_λ D*_λ. Substituting Equation (17) into Equations (3) and (6), we have

$$DN_{s\lambda} = DN_{p\lambda} + r_{d\lambda}k_{1\lambda}\cos\sigma_i + r_{d\lambda}k_{2\lambda}\mu_1 + r_{d\lambda}K_\lambda D_\lambda^* k_{2\lambda} \qquad (19)$$

and

$$DN_{s\lambda} = DN_{p\lambda} + r_{d\lambda}k_{2\lambda}\mu_1 + r_{d\lambda}K_\lambda D_\lambda^* k_{2\lambda} \qquad (20)$$

Suppose that a total of n ground samples from radiometric control areas (RCAs) with in situ measurements of surface reflectance are available. The digital numbers of these samples satisfy Equation (19) and can be expressed by the following matrix equation:

$$\begin{bmatrix} DN_{s\lambda}^{RCA_1} \\ DN_{s\lambda}^{RCA_2} \\ \vdots \\ DN_{s\lambda}^{RCA_n} \end{bmatrix} =
\begin{bmatrix}
1 & (r_{d\lambda}\cos\sigma_i)^{RCA_1} & (r_{d\lambda})^{RCA_1} & (r_{d\lambda}D_\lambda^*)^{RCA_1} \\
1 & (r_{d\lambda}\cos\sigma_i)^{RCA_2} & (r_{d\lambda})^{RCA_2} & (r_{d\lambda}D_\lambda^*)^{RCA_2} \\
\vdots & \vdots & \vdots & \vdots \\
1 & (r_{d\lambda}\cos\sigma_i)^{RCA_n} & (r_{d\lambda})^{RCA_n} & (r_{d\lambda}D_\lambda^*)^{RCA_n}
\end{bmatrix}
\cdot
\begin{bmatrix} DN_{p\lambda} \\ k_{1\lambda} \\ k_{2\lambda}\mu_1 \\ k_{2\lambda}K_\lambda \end{bmatrix} \qquad (21)$$

where the superscript RCA_i indicates attributes of the i-th RCA sample; for example, DN_sλ^{RCA_i} represents the band-specific digital number of the i-th RCA sample. In the above matrix equation, the digital numbers and surface reflectances of the RCA samples are known, and cosσ_i and D*_λ can be calculated using Equations (8) and (18), respectively. Thus, DN_pλ, k_1λ, k_2λμ_1 and k_2λK_λ can all be solved for by the least-squares multiple regression technique.
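Once the RCA samples are assembled, solving Equation (21) is a standard linear least-squares problem. A minimal NumPy sketch (array and function names are illustrative, not from the paper) with a synthetic check that known constants are recovered:

```python
import numpy as np

def calibrate_band(dn, r, cos_sigma, d_star):
    """Least-squares solution of Equation (21) for one spectral band.
    dn, r, cos_sigma, d_star are length-n arrays over the RCA samples;
    returns estimates of [DN_p, k1, k2*mu1, k2*K]."""
    A = np.column_stack([np.ones_like(r), r * cos_sigma, r, r * d_star])
    coeffs, *_ = np.linalg.lstsq(A, dn, rcond=None)
    return coeffs

# Synthetic check: generate digital numbers from known constants, then recover them.
rng = np.random.default_rng(0)
n = 50
r = rng.uniform(0.05, 0.6, n)          # in situ reflectances
cos_sigma = rng.uniform(0.2, 1.0, n)   # incidence-angle cosines (Equation (8))
d_star = rng.normal(0.0, 1.0, n)       # sample-specific D* values (Equation (18))
true = np.array([12.0, 300.0, 80.0, 5.0])
dn = true[0] + true[1] * r * cos_sigma + true[2] * r + true[3] * r * d_star
print(calibrate_band(dn, r, cos_sigma, d_star))  # ~ [12, 300, 80, 5]
```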
In previous sections we have demonstrated that the values of DN_pλ, k_1λ, k_2λμ_1 and k_2λK_λ can be considered constants for all ground samples in applications that do not cover very wide areas. Therefore, estimation of band-specific surface reflectances of non-RCA samples can be achieved by solving Equations (19) and (20) for non-shaded and shaded ground samples, respectively. The topographic effect of cosσ_i calculated using Equation (8) is shown in Figure 4(a). In order to demonstrate the feasibility of using Equation (8) for modeling the topographic effect, we also rescaled the grey levels of the Formosat-II color image of the study area (see Figure 4(b)) and compared shaded areas in both figures. It can be observed that shaded areas in both images are largely consistent. Such results indicate the potential of characterizing the topographic effect by Equations (7) and (8), although a more rigorous assessment, especially with respect to DEM accuracy, may be needed.

As was explained in Section 3.3, the correlation coefficients between digital numbers of shaded ground samples (DN^shade) and elevation differences of neighboring samples (d_i, i = 1, ..., p^2−1) can be estimated with near-zero bias and very small variance. These correlation coefficients Cor(DN_sλ^shade, d_i) (i = 1, ..., p^2−1) correspond to a group of p × p (p = 41) ground samples centered at the target sample, and can be arranged as the following correlation matrix P [24]:

$$P_\lambda\,(p \times p) = \begin{bmatrix}
\rho_1 & \rho_2 & \cdots & \cdots & \cdots & \rho_{p-1} & \rho_p \\
\rho_{p+1} & \rho_{p+2} & \cdots & \cdots & \cdots & \rho_{2p-1} & \rho_{2p} \\
\vdots & & & & & & \vdots \\
\cdots & \rho_{(p^2-1)/2} & & \rho_0 & & \rho_{(p^2+1)/2} & \cdots \\
\vdots & & & & & & \vdots \\
\rho_{p^2-2p} & \rho_{p^2-2p+1} & \cdots & \cdots & \cdots & \rho_{p^2-p-2} & \rho_{p^2-p-1} \\
\rho_{p^2-p} & \rho_{p^2-p+1} & \cdots & \cdots & \cdots & \rho_{p^2-2} & \rho_{p^2-1}
\end{bmatrix}$$

where ρ_i = Cor(DN_sλ^shade, d_i) (i = 0, ..., p^2−1). We shall refer to the correlation matrix P_λ as the correlation map, since it characterizes the spatial pattern of the correlation between digital numbers of shaded samples and elevation differences of neighboring samples.
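The arrangement of the correlation coefficients into the p × p map can be sketched as follows (a small illustrative helper of our own, not code from the paper): the row-major sequence ρ_1, ..., ρ_{p^2−1} is split in half and ρ_0, corresponding to the target sample, is spliced into the center cell.

```python
import numpy as np

def correlation_map(rho0, rho, p):
    """Arrange rho_1..rho_{p^2-1} (row-major neighbor order) around the
    center value rho0 into a p x p correlation map, as in the matrix P."""
    assert p % 2 == 1 and len(rho) == p * p - 1
    half = (p * p - 1) // 2
    flat = np.concatenate([rho[:half], [rho0], rho[half:]])
    return flat.reshape(p, p)

# p = 3 example: the target sample's value sits at the center cell.
m = correlation_map(0.0, np.arange(1, 9, dtype=float), 3)
print(m)
# [[1. 2. 3.]
#  [4. 0. 5.]
#  [6. 7. 8.]]
```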
Figure 5 demonstrates the estimated correlation maps of different spectral bands. The correlation maps show a pattern nearly symmetric to the direction of incoming solar radiation for all spectral bands. Digital numbers (or the corresponding radiances) of the shaded ground samples are contributed by the downwelled radiance from the portion of the sky dome above the target sample. Downwelled radiances are the result of atmospheric scattering, most importantly Rayleigh and Mie scattering, whose effects are symmetric to the direction of solar irradiance. The symmetric pattern of the correlation map correctly reflects this characteristic of atmospheric scattering. The correlation map also shows that positive and negative correlation coefficients are associated with samples falling in front of and behind the target sample (with respect to the target sample and the direction of solar irradiance), respectively. Such a pattern is also consistent with the topographic effects on the radiance received at the target ground sample, as explained below. As shown in Figure 6, obstacle samples behind the target sample (with reference to the direction of incoming radiation) obstruct direct solar irradiance to the shaded target sample. The higher these obstacles (i.e., the higher their d_i values), the less downwelled radiance (smaller digital numbers) is received at the target sample. Thus, for shaded ground samples, DN^shade and the elevation differences of neighboring obstacle samples (d_i, i = 1, ..., p^2−1) are negatively correlated. In contrast, solar irradiance reaching obstacle samples situated in front of the target sample may be reflected toward the target sample. The higher these obstacle samples (i.e., the higher their d_i values), the more reflected radiance (larger digital numbers) is received at the target sample. Thus, DN^shade and the elevation differences of these neighboring obstacle samples are positively correlated.
Figure 5 also shows that correlation coefficients are lowest (at or near zero) for neighboring samples located in the direction perpendicular to the incoming solar radiation. This is explained by the fact that both the Rayleigh and Mie scattering effects are minimal in the direction perpendicular to the incoming solar radiation. Prior to the calculation of surface reflectances using Equations (19) and (20), the band-dependent values of DN_pλ, k_1λ, k_2λμ_1 and k_2λK_λ must be calculated by solving Equation (21). In this study, in situ reflectance measurements from a set of 150 RCA samples scattered over the entire study area were collected using a multispectral radiometer. Band-dependent reflectances of these RCA samples were then used to calculate the band-dependent constants DN_pλ, k_1λ, k_2λμ_1 and k_2λK_λ (see Table 1) by solving Equation (21). Among these constants, the values of DN_pλ represent the band-specific path radiances, which in principle decrease with increasing wavelength λ, since the effect of atmospheric scattering (mainly Rayleigh and Mie scattering) weakens with increasing wavelength of the solar radiation. Table 1 shows that the band-dependent DN_pλ values estimated by the method proposed in this study are consistent with this effect of atmospheric scattering. Cheng et al. [15] used the same set of Formosat-II images for path radiance estimation using the dark object subtraction (DOS) method and AERONET measurements of molecular and aerosol optical depths (AODs) (see Table 2). The DOS method tends to overestimate the path radiance if the selected dark objects are not near-zero reflectors, especially in mountainous areas [10,15]. The proposed method seems to yield more reasonable results than the commonly used DOS method, with its path radiance estimates being closer to the path radiances calculated using AERONET measurements.
Since the study area encompasses 750 km^2 with varied land cover conditions, it is difficult to conduct extensive in situ measurements of surface reflectance to validate the reflectance estimates of the proposed method. Thus, in this study we selected two subregions, representing moderate and high degrees of terrain variation in our study area, and visually compared their true color satellite images with the corresponding color reflectance images. As demonstrated in Figure 7, the color reflectance image (using blue, green and red colors for reflectances of the blue, green and red bands) and the true color satellite image of subregion A (moderate terrain variation, with elevations ranging from 0 to approximately 600 m above mean sea level) are very similar, indicating good estimates of surface reflectance in this region. Differences in the reflectances of water bodies, vegetation, built-up areas, and bare soils are well preserved in the color reflectance image. It can also be observed that shaded areas are present in the east and southeast corner of the true color satellite image of subregion A, whereas this shade effect has largely diminished in the color reflectance image, since surface reflectances depend only on land cover types and are not affected by topographic conditions. The true color satellite image of subregion B, an area with high mountains and substantial terrain variation (elevations ranging from approximately 200 to 2,000 m), shows a significant topographic effect with visually apparent shaded areas. Although the topographic effect has also been largely eliminated in the color reflectance image of subregion B, the reflectance image still shows different levels of reflectance for the shaded (in purple) and non-shaded (in green) areas. Such results may arise from the unaccounted shade effect of very high mountains in subregion B.
In our study a 41×41 elevation matrix (Equation (10)) is adopted, and calculation of the shape factor considers only neighboring samples within 160 m of the target sample. High mountains beyond this 160-m range may also block solar irradiance and cast shade on the target sample; such an effect has not been considered in our analysis. This also indicates that elevation matrices of larger size may need to be used for areas with more significant terrain variation. Further study on determining the size of the elevation matrix and the correlation map with respect to the degree of elevation change (or terrain variation), i.e., a variable-size elevation matrix, may help to improve reflectance estimation in areas with substantial terrain variation. It is also important to mention that the proposed method requires in situ reflectance measurements and DEM data. Thus, for areas without DEM data, or for historical remote sensing data with no in situ reflectance values, the proposed method cannot be applied.
Candy Pi Calculator

Math Monday: Candy Pi Calculator

by George Hart

There are many ways to calculate an approximation to pi, but rarely is math as delicious as in this idea from Davidson College professor Tim Chartier. Make a quarter circle in a square of graph paper and place chocolate chips on the squares that lie completely inside the circle. If you now count the chips and compute four times the number of chocolate chips divided by the total number of squares, that will be approximately pi. Here, there are 22 chips out of 36 squares, so we calculate 4·22/36 = 2.444, which is off by about 0.7 from 3.14, so it is not a very close estimate of pi, but we can improve it.

This article first appeared on Make: Online, October 10, 2011.
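The counting rule is easy to automate, which also shows how the estimate improves on finer grids. A short Python sketch of our own: a unit square with lower-left corner (i, j) lies entirely inside the quarter circle of radius n exactly when its far corner satisfies (i+1)² + (j+1)² ≤ n².

```python
import math

def chip_count(n):
    """Number of unit grid squares lying fully inside a quarter circle
    of radius n (squares indexed by their lower-left corner (i, j))."""
    return sum(1 for i in range(n) for j in range(n)
               if (i + 1) ** 2 + (j + 1) ** 2 <= n * n)

def pi_estimate(n):
    # four quarter circles' worth of chips over the n*n squares
    return 4 * chip_count(n) / (n * n)

print(chip_count(6), pi_estimate(6))  # 22 chips -> 4*22/36 = 2.44...
print(pi_estimate(1000))              # much closer to 3.14159...
```

On the 6×6 grid this reproduces the article's 22 chips; at n = 1000 the estimate lands within about 0.01 of pi.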
Mouse directional movement [Archive] - OpenGL Discussion and Help Forums

10-30-2009, 09:54 AM

So this is what I'm working on... I'm using a 2D system to pull in the screen coordinates and I translate them into OpenGL coordinates. I create a line at the center with the center being (0,0) and then the end point being the OpenGL screen coordinate. It looks fine. Then I wanted to translate the line around on the screen with the keyboard. OK, so I apply a translation matrix to it. That works fine, it translates around with every key press and still rotates about the right origin.

Translate(transVec[0], transVec[1], 0)

Some variables:
mTransVec[2] = translation vector. x = [0] & y = [1].
mLineEnd[2] = line's endpoints, OpenGL coords, in an array. x = [0] & y = [1].
Angle = the rotational angle that the line is at from the origin, in degrees or rads

The problem: I'm trying to figure out the math, or some way with translation and rotation matrices, so that when I hit a key to go "forward" the line will go in the direction/angle the line is pointing. I'm so lost as to how to do this... I've been trying a lot so far with rotation, using the angle of rotation and then somehow translating it along that angle by rotating it... ahhh, not working. It likes to keep going in circles. =\

Any help or direction on the math behind having the object follow where the cursor is would help so dearly. Thanks.
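For reference, the usual fix for this kind of problem is to keep the heading as an angle and, on each forward key press, add (cos θ, sin θ) scaled by a step size to the translation vector in world space, rather than rotating the translation itself. A small Python sketch of the math only (our own illustration; it assumes the angle is measured counter-clockwise from the +x axis, so adjust for your own convention):

```python
import math

def step_forward(trans, angle_deg, speed):
    """Advance a 2D translation vector one step along the current heading."""
    rad = math.radians(angle_deg)
    return (trans[0] + speed * math.cos(rad),
            trans[1] + speed * math.sin(rad))

# Heading 90 degrees (straight "up"), two key presses of speed 0.5:
pos = (0.0, 0.0)
for _ in range(2):
    pos = step_forward(pos, 90.0, 0.5)
print(pos)  # approximately (0.0, 1.0)
```

The key point is that the forward step is computed in world coordinates from the heading angle; the rotation matrix is only used for drawing the line, never for accumulating the translation.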
Physics: Rocket Propulsion Video | MindBites

Physics: Rocket Propulsion

About this Lesson
• Type: Video Tutorial
• Length: 10:54
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 117 MB
• Posted: 07/01/2009

This lesson is part of the following series:
Physics (147 lessons, $198.00)
Physics: Momentum (8 lessons, $16.83)
Physics: Momentum and Its Conservation (5 lessons, $9.90)

This lesson was selected from a broader, comprehensive course, Physics I. This course and others are available from Thinkwell, Inc. The full course can be found at http://www.thinkwell.com/student/product/physics. The full course covers kinematics, dynamics, energy, momentum, the physics of extended objects, gravity, fluids, relativity, oscillatory motion, waves, and more.

The course features two renowned professors: Steven Pollock, an associate professor of physics at the University of Colorado at Boulder, and Ephraim Fischbach, a professor of physics at Purdue University. Steven Pollock earned a Bachelor of Science in physics from the Massachusetts Institute of Technology and a Ph.D. from Stanford University. Prof. Pollock wears two research hats: he studies theoretical nuclear physics, and does physics education research. Currently, his research activities focus on questions of replication and sustainability of reformed teaching techniques in (very) large introductory courses. He received an Alfred P. Sloan Research Fellowship in 1994 and a Boulder Faculty Assembly (CU campus-wide) Teaching Excellence Award in 1998. He is the author of two Teaching Company video courses: “Particle Physics for Non-Physicists: a Tour of the Microcosmos” and “The Great Ideas of Classical Physics”. Prof. Pollock regularly gives public presentations in which he brings physics alive at conferences, seminars, colloquia, and for community audiences. Ephraim Fischbach earned a B.A. in physics from Columbia University and a Ph.D.
from the University of Pennsylvania. In Thinkwell Physics I, he delivers the "Physics in Action" video lectures and demonstrates numerous laboratory techniques and real-world applications. As part of his mission to encourage an interest in physics wherever he goes, Prof. Fischbach coordinates Physics on the Road, an Outreach/Funfest program. He is the author or coauthor of more than 180 publications including a recent book, “The Search for Non-Newtonian Gravity”, and was made a Fellow of the American Physical Society in 2001. He also serves as a referee for a number of journals including “Physical Review” and “Physical Review Letters”.

About this Author

Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/. Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...

Rocket ships are a really dramatic demonstration of lots of physics. It's so impressive that we can design and build rocket ships that really, they work, they get to the moon, and we've sent probes out beyond the edges of our solar system. And we know where they're going to go, how fast they're going to be going. People use the expression `rocket science' along with `brain surgery' as a kind of a representation of the epitome of human intellect and achievement.
And it's kind of cool that, just knowing introductory physics, one can understand an awful lot of rocket science. So I'd like to think a little bit about how rockets work and whether or not we can just use principles of physics, Newton's Law and conservation of momentum, and figure out, for instance, what's going to be the velocity of a rocket as a function of time. Now, what's the principle of a rocket? The principle is very simple: suppose you're out in deep space and you're holding a bunch of little pebbles in your hand. And you take one pebble and you toss it out the back. So what's happening? You are applying a force to the pebble and, by Newton's Third Law, the pebble applies an equal and opposite force back on you. So the pebble accelerates one way and you accelerate the other way. You accelerate, and now you toss another pebble, and another and another. Each time you toss one, you accelerate a little bit more. That's what a rocket's doing. Instead of pebbles, it's just throwing little molecules of fuel out the back, and each one that it throws out, it's accelerating a little bit forward in the opposite direction. So that's the principle, it's very simple. How about calculating? How do you figure out velocity as a function of time for a rocket? The principle of physics there, the quantitative principle, is just conservation of momentum. You're out in deep space, there's no external forces on you, and so, if you just figure out at any moment in time what's your momentum, and then a moment later, you can consider your momentum plus the momentum of the little chunk of fuel that you just sent out. And they will be equal, momentum before and momentum after will be the same. So that's the principle we're going to use to solve the equations. Let me draw some pictures, because, I've got to admit, it is rocket science. It's not the easiest story in the world, and you've got to be as careful as can be to get the equations right.
Here's a rocket out in deep space and I'm going to call its mass, at this instant in time, capital M plus little m. So this represents the mass of the rocket and this represents the mass of that little chunk of fuel that we're just about to send out the back. At this moment in time, as observed by somebody in a fixed reference frame, say, on earth, the rocket is moving with velocity v. Now the rocket tosses out a little chunk of fuel, so here's the picture a moment later. The rocket has accelerated a little bit. It's got velocity v plus Δv. We'd really like an equation for Δv. The rocket's mass is now capital M. It's thrown this little m out the back. Now, what velocity should I associate with that little m? A rocket shoots fuel out the back with some velocity vex, the exhaust velocity. It's sort of a characteristic number of the engine and you might think of it as a constant. The problem is I'm a little hesitant to just put it in the picture, because vex is the velocity of the fuel with respect to the rocket. In this picture, I'm drawing all my arrows as velocities with respect to the observer on the ground. If a rocket is cruising by fast and it ejects a little piece of fuel out the back with relative velocity vex, what do you see from the ground? You see a velocity v of the rocket plus vex, so what you see here is v plus vex. Now that's plus an arrow. The vex arrow is to the left, so, of course, this arrow will be shorter than the initial velocity. In other words, if you're throwing stuff out the back and it's got a huge exhaust velocity, this would come out to be a negative arrow, and so the fuel really would be flying backwards. On the other hand, if vex was some tiny number, if the rocket was just sort of sputtering fuel out the back, it's cruising along at a big v, the rocket fuel is sort of behind it, it's going a little bit more slowly, but still moving in the same direction as the rocket, just with a relative velocity of vex.
So that's one little subtle thing that you just have to think about and convince yourself that this is correct. Now, we're all set. Here's the initial situation, here's the final situation, no external forces. This is my system, it's still my system here, and so I can set initial momentum equals final momentum. So M times v plus m times v, that's the initial momentum. M times (v plus Δv), plus m times (v plus vex), that's the grand total momentum afterwards. Set them equal. Write it down, you'll see a bunch of terms, and then you'll realize that many are common on both sides and you can cancel them. And you will end up with what's known as a rocket equation. In fact, if you take that rocket equation and you divide by Δt, you'll get the following very interesting equation, so let's write it down: M dv/dt is equal to minus vex dM/dt. Now, where this minus sign comes from, if you're looking at your equations, you probably have a plus sign there, but you also have a d little m, so there's another little thinker. If you are ejecting fuel out the back, your rate of change of mass is negative if you are ejecting positive dm. In other words, dM is the negative of dm. So that's just a little subtle point. It's just our definition of the sign of the mass of the fuel being thrown out the back. So this is the equation. It's the rocket equation and look at it, just think about it for a second. At the instant in time that we're considering, we've got mass times acceleration, dv/dt of the rocket. So that should be the force on the rocket: F equals ma and there it is. That's the formula. We call it the thrust on the rocket. vex is typically a number and dM/dt, that's how rapidly you are throwing fuel out the back, how many kilograms per second are being thrown out. So these are typically constants, so this looks like a constant force equation and you think, "Hey, that's easy! Constant force means constant acceleration, doesn't it?
I could just use vf equals vi plus a times time." No, it is, at this moment in time, this right-hand side is a constant, but M is changing with time. The rocket is constantly losing mass, it's throwing out fuel. And so this is really not a good simple constant acceleration story and you can't use the old constant acceleration kinematics equations. If you want to know vf in terms of vi, you've just got to solve this little differential equation. It's a math problem, so I'm not going to go through the details too carefully. Physicists tend to do nifty things, like cancel dt from both sides, and let's see, I've got a dM and I'll divide by M, so I'll get dv is equal to -vex dM divided by M. That's my equation rewritten, after having canceled out dt, and now what do you do with an equation like this? Integrate both sides. This is a constant, so constants come out of integrals. I get the integral from vi till vf and, over here, this integral is over mass, it's from Mi till Mf and I ran out of room there. This is the integral from Mi to Mf. What's the integral of 1 over M dM? That's a natural logarithm. This is just an easy integral, it's vf minus vi. Here's the answer, we've really got it: vf is equal to vi plus vex times the logarithm of Mi over Mf. I did a little trick here. What I really had was logarithm of Mf minus logarithm of Mi and I used an identity, log of a minus log of b is log of a over b, and then I used another identity of logarithms to turn it upside down and get rid of my minus sign. Just some math tricks. Look at this formula; it makes some sense. The logarithm of Mi over Mf, you started off with a big mass and, after you've ejected most of the fuel, you've got a rather small final mass, so you're going to be taking the logarithm of a pretty big number, so that's going to be some positive number. You multiply it by vex. So if you want to go fast, send out your fuel with a large exhaust velocity. Remember the thrust was vex dm/dt. Which do you want to make big?
If you make dM dt big, you're going to lose mass really fast, you're going to get rid of all your fuel right away. So you'd rather make vex as big as possible, and then this equation says that's what you'll want to do to get a big final velocity. This is what the rocket engineers are working on when they're designing better rockets, one of the things is to make the exhaust speed very large. Mi over Mf can be a big number. Unfortunately, the natural logarithm is a function, which, when you take the log of a big number, it gives you a much smaller number, typically, when you've got big numbers here, like the log of 1000 is just a little bit under 7. So that's too bad, it means that you need a huge amount of fuel in a rocket and a very small amount of payload if you want to get up to any kind of reasonable speeds. This is rocket science. It's not the easiest physics in the world, but we're certainly completely equipped to understand the basic ideas and do the calculations. There's a few little technical details you've got to think about, but it's all stuff we've already learned, like relative velocities and conservation of momentum. It's kind of impressive, mere mortals, like me and you, can really understand rocket science.
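The final result can be checked numerically. Below is a small Python sketch (our own illustration, not part of the lecture) comparing the closed-form rocket equation, vf − vi = vex · ln(Mi/Mf), against a step-by-step simulation of throwing fuel overboard chunk by chunk:

```python
import math

def rocket_delta_v(v_ex, m_i, m_f):
    """Tsiolkovsky rocket equation: vf - vi = v_ex * ln(m_i / m_f)."""
    return v_ex * math.log(m_i / m_f)

def simulate_delta_v(v_ex, m_i, m_f, dm=1e-4):
    """Eject fuel in small chunks dm; each chunk changes the rocket's
    velocity by v_ex * dm / m (conservation of momentum per chunk)."""
    v, m = 0.0, m_i
    while m > m_f:
        v += v_ex * dm / m
        m -= dm
    return v

print(rocket_delta_v(3.0, 10.0, 5.0))    # 3 * ln(2) ~ 2.079
print(simulate_delta_v(3.0, 10.0, 5.0))  # nearly the same
print(math.log(1000))                    # "a little bit under 7", as stated
```

The chunk-by-chunk sum converges to the logarithm, and ln(1000) ≈ 6.91 confirms the lecture's point about how slowly the mass ratio pays off.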
MathGroup Archive: February 2004

Re: Re: how to explain this weird effect? Integrate

• To: mathgroup at smc.vnet.net
• Subject: [mg46432] Re: [mg46290] Re: how to explain this weird effect? Integrate
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Wed, 18 Feb 2004 00:36:55 -0500 (EST)
• References: <200402121216.HAA12039@smc.vnet.net> <c0hhvb$lgl$1@smc.vnet.net> <200402140256.VAA08500@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com

steve_H wrote:
> Andrzej Kozlowski <akoz at mimuw.edu.pl> wrote in message news:<c0hhvb$lgl$1 at smc.vnet.net>...
>> It's not hard to explain if you actually look at the output you get
>> before substituting values for n and m.
> hi;
> I think you missed my point.
> As I said, I do see why Mathematica complained. It is clear from
> the output. And I know that taking the limit will give the correct
> result I wanted to see. But this is not the point.
> My point is that mathematically speaking, it should not make a
> difference when one does the substitution. But using a computer
> algebra package, it made a difference. I am not looking for a way around
> this, I wanted to talk about the user not having to work around these
> limitations.
> So, the question is, why did not Mathematica perform the Limit operation
> itself to give the correct answer?
> Look at this example:
> r = 1/a
> r /. a -> 0
> Here Mathematica complains because of the 1/0 problem, but still returns
> ComplexInfinity as the correct answer.
> Now when I type
> Limit[r, a -> 0]
> no complaint is given, and infinity is the answer again.
> Mathematically speaking, 1/a when a=0 is the same as Limit[1/a, a->0].
> So, the final answer should not be different.
> But when I typed
> r = Integrate[Sin[m x] Sin[n x], {x, 0, 2 Pi}]
> r /. {n -> 2, m -> 2}
> Mathematica complained about the 1/0 output, BUT also did NOT give the answer.
> So, here we have 2 examples, both have 1/0 problem, in both cases Mathematica > complained about 1/0, but in one case it still gave the final answer, > and in the second case it did not. > to conclude, Mathematica should do one of 2 things: > 1. complain about 1/0, but internally apply the Limit to see if it can > obtain an answer. > 2. not complain about 1/0 if applying the Limit will resolve it, else > only then complain about 1/0 and give no answer. > thanks, > Steve This thread seems to persist so I thought I might say a few words about replacement vs. extraction of limits in Mathematica. Specifically I will provide some of the reasons why the latter is not compatible with the semantics of the former. First, the example r = 1/a followed by r /. a->0 gives an undirected infinity. I think all agree this is appropriate and thus requires no comment. But what of Limit[r, a->0] returning DirectedInfinity[1] (that is, +infinity)? This follows from Limit semantics, which has a default direction associated to limits (and can be overridden by a specification Direction->...). For infinities, the direction is simply the direction vector of the infinity. For finite points, the default direction is -1, that is, approach from the right. Thus, in this example, we approach the origin from the right, and so DirectedInfinity[1] is the correct result. The more general question, as best I can discern, is whether replacement should in some cases invoke Limit. The short answer is "No, it should not". I'll try to explain what are the issues but note that already the example above gives insight into one such. First observe that replacement is a structural operation. There is no notion of "variables' and indeed one might do expr /. {2*a->3, 5->x, Pi->7.2, r^2+3*s->z, Sin[x]->Cos[t]} Limit, by way of contrast, works with a single variable. Hence the domains of applicability of Limit and ReplaceAll are not the same. 
Next note that Limit may apply various nontrivial transformations unrelated to whatever is specified in the ReplaceAll. In addition to being an intrinsically nonstructural operation (it is mathematical, after all), it can also be slow. Which means it is not well suited for large-scale replacement operations of the type occasionally done in computations, e.g. when doing random replacements in a large expression to see if it might be zero. Another thing to observe, as noted in the example above, is that limits may depend on direction. When multiple variables are present, they depend on order as well. Hence results of automatic use of Limit with replacement of variables, when it could be done at all, would be difficult to interpret. I hope this helps to explain why Limit neither is nor could be involved in ReplaceAll.

Daniel Lichtblau
Wolfram Research
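The distinction Daniel describes can be reproduced in other computer algebra systems. For instance, in SymPy (a Python analogy of our own, not Mathematica itself), substituting a = 0 into 1/a yields an undirected complex infinity, while the limit is direction-dependent, mirroring the behavior discussed above:

```python
from sympy import symbols, limit, oo, zoo

a = symbols('a')
expr = 1 / a

print(expr.subs(a, 0))         # zoo (undirected complex infinity)
print(limit(expr, a, 0))       # oo  (default: approach 0 from the right)
print(limit(expr, a, 0, '-'))  # -oo (approach 0 from the left)
```

Substitution is a structural operation and reports the undirected singularity; the limit must commit to a direction, which is exactly why folding Limit into replacement semantics would be ambiguous.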
Theorem unifies superfluids and other weird materials

Matter exhibits weird properties at very cold temperatures. Take superfluids, for example: discovered in 1937, they can flow without resistance forever, spookily climbing the walls of a container and dripping onto the floor. In the past 100 years, 11 Nobel Prizes have been awarded to nearly two dozen people for the discovery or theoretical explanation of such cold materials -- superconductors and Bose-Einstein condensates, to name two -- yet a unifying theory of these extreme behaviors has eluded theorists.

University of California, Berkeley, physicist Hitoshi Murayama and graduate student Haruki Watanabe have now discovered a commonality among these materials that can be used to predict or even design new materials that will exhibit such unusual behavior. The theory, published online June 8 by the journal Physical Review Letters, applies equally to magnets, crystals, neutron stars and cosmic

[Figure caption] Earlier theories by Nobel Laureate Yoichiro Nambu predicted that magnetic spins oscillate in two directions independently, and thus magnets have two Nambu-Goldstone bosons. The new theory shows that in ferromagnets, these two waves are not independent, so that there is only one Nambu-Goldstone boson, a precession wave as shown above. Credit: Haruki Watanabe/UC Berkeley.

"This is a particularly exciting result because it concerns pretty much all areas of physics; not only condensed matter physics, but also astrophysics, atomic, particle and nuclear physics and cosmology," said Murayama, the MacAdams Professor of Physics at UC Berkeley, a faculty senior scientist at Lawrence Berkeley National Laboratory and director of the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo. "We are putting together all of them into a single theoretical framework."
The theorem Watanabe and Murayama proved is based on the concept of spontaneous symmetry breaking, a phenomenon that occurs at low temperatures and leads to odd behavior. This produces superconductors, which allow electric currents to flow without resistance, or Bose-Einstein condensates, which have such low energy that every atom is in the same quantum state. By describing the symmetry breaking in terms of collective behavior in the material -- represented by so-called Nambu-Goldstone bosons -- Murayama and Watanabe found a simple way to classify materials' weirdness. Boson is the name given to particles with zero or integer spin, as opposed to fermions, which have half-integer spin.

"Once people tell me what symmetry the system starts with and what symmetry it ends up with, and whether the broken symmetries can be interchanged, I can work out exactly how many bosons there are and if that leads to weird behavior or not," Murayama said. "We've tried it on more than 10 systems, and it works out every single time."

Anthony Leggett of the University of Illinois at Urbana-Champaign, who won the 2003 Nobel Prize in Physics for his pioneering work on superfluids, pointed out that "it has long been appreciated that an important consequence of the phenomenon of spontaneously broken symmetry, whether occurring in particle physics or in the physics of condensed matter, is the existence of the long-wavelength collective excitations known as Nambu-Goldstone bosons.

"In their paper, Watanabe and Murayama have now derived a beautiful general relation … (involving) Nambu-Goldstone bosons … (that) reproduces the relevant results for all known cases and gives a simple framework for discussing any currently unknown form of ordering which may be discovered in the future."
"Surprisingly, the implications of spontaneous symmetry breaking on the low energy spectrum had not been worked out, in general, until the paper by Watanabe and Murayama," wrote Hirosi Ooguri, a professor of physics and mathematics at Caltech. "I expect that there will be a wide range of applications of this result, from condensed matter physics to cosmology. It is a wonderful piece of work in mathematical physics." Symmetry has been a powerful concept in physics for nearly 100 years, allowing scientists to find unifying principles and build theories that describe how elementary particles and forces interact now and in the early universe. The simplest symmetry is rotational symmetry in three dimensions: a sphere, for example, looks the same when you rotate it arbitrarily in any direction. A cylinder, however, has a single rotational symmetry around its axis. Some interactions are symmetric with respect to time, that is, they look the same whether they proceed forward or backward in time. Others are symmetric if a particle is replaced by its antiparticle. When symmetry is broken spontaneously, new phenomena occur. Following the Big Bang, the universe cooled until its symmetry was spontaneously broken, leading to a predicted Higgs boson that is now being sought at the Large Hadron Collider in Geneva, Switzerland. With solids, liquids or gases, symmetry relates to the behavior of the spins of the atoms and electrons. In a ferromagnetic material, such as iron or nickel, the randomness of the electron spins at high temperatures makes the material symmetric in all directions. As the metal cools, however, the electron spins get locked in and force their neighbors to lock into the same direction, so that the magnet has a bulk magnetic field pointing in one direction. Nambu-Goldstone bosons are coherent collective behavior in a material. Sound waves or phonons, for example, are the collective vibration of atoms in a crystal. 
Waves of excitation of the electron spin in a crystal are called magnons. During the cooling process of a ferromagnet, two symmetries were spontaneously broken, leaving only one Nambu-Goldstone boson in the material.

In Bose-Einstein condensates, for example, "you start with a thin gas of atoms, cool it to incredibly low temperature -- nanokelvins -- and once you get to this temperature, atoms tend to stick with each other in strange ways," Murayama said. "They have this funny vibrational mode that gives you one Nambu-Goldstone boson, and this gas of atoms starts to become superfluid again so it can flow without viscosity forever." On the other hand, solid crystals, regardless of their compositions or structures, have three Nambu-Goldstone bosons, equivalent to the three vibrational modes (phonons).

"What this Nambu-Goldstone boson is, how many of them there are and how they behave decide if something becomes a superfluid or not, and how things depend on the temperature," Murayama added. "All these properties come from how we understand the Nambu-Goldstone boson."

Yoichiro Nambu shared the 2008 Nobel Prize in Physics, in part, for explaining that in some systems, the number of broken symmetries equals the number of Nambu-Goldstone bosons. The new theorem expands on Nambu's ideas to the more general case, Watanabe said, proving that in weird materials, the number of Nambu-Goldstone bosons is actually less than the number of broken symmetries.

"What Nambu showed was true, but only for specialized cases applicable to particle physics," he said. "Now we have a general explanation for all of physics; no exceptions."

One characteristic of states with a low Nambu-Goldstone boson number is that very little energy is required to perturb the system. Fluids flow freely in superfluids, and atoms vibrate forever in Bose-Einstein condensates with just a slight nudge.
As a student at the University of Tokyo, Watanabe had proposed a theorem to explain materials' properties through Nambu-Goldstone bosons, but was unable to prove it until he came to UC Berkeley last year and talked with Murayama. Together, they came up with a proof in two weeks of what they call a unified theory of Nambu-Goldstone bosons. "Those two weeks were very exciting," Watanabe said.

Story Source: The above story is based on materials provided by University of California - Berkeley. Note: Materials may be edited for content and length.

Journal Reference:
1. Haruki Watanabe, Hitoshi Murayama. Unified Description of Non-Relativistic Nambu-Goldstone Bosons. Submitted to Physical Review Letters, 2012.
South, CA Algebra 1 Tutor Find a South, CA Algebra 1 Tutor ...I also teach SAT and ACT in those subjects. I was also a National Merit Finalist in high school. I have been teaching ACT English through Elite Educational Institute, helping many students improve their understanding of the structure and content of the test. 22 Subjects: including algebra 1, English, ACT Reading, ACT Math ...I continue to tutor and help others with math at a much higher level with heavy involvement in Algebra I. I deal with basic and advanced Algebra to this day with research. I took Algebra II in High School and passed with an A. 13 Subjects: including algebra 1, calculus, geometry, physics ...During the past 20 years I have tutored several students in these subjects, and have even tutored younger, elementary-aged students. I tutored one young man from middle school math through Honors PreCalculus. I also tutored his younger brother for several years. 15 Subjects: including algebra 1, geometry, GRE, statistics I am an experienced tutor in math and science subjects. I have an undergraduate and a graduate degree in electrical engineering and have tutored many students before. I am patient and will always work with students to overcome obstacles that they might have. 37 Subjects: including algebra 1, English, chemistry, calculus ...I recently moved from the fast pace of New York City to the quiet suburbs here in South Jersey - and I couldn't be happier. This move has shown me that I strongly believe in personal relationships over professional connections - that it's not just who you know, but how good those people are. I love to read, occasionally write, and always think. 
8 Subjects: including algebra 1, English, reading, literature
Computation Theory

Lecturer: Dr J.K.M. Moody
No. of lectures: 12
Prerequisite course: Discrete Mathematics or Mathematics for Computation Theory
This course is a prerequisite for Complexity Theory (Part IB and Diploma), Quantum Computing (Part II and Diploma).

The aim of this course is to introduce several apparently different formalisations of the informal notion of algorithm; to show that they are equivalent; and to use them to demonstrate that there are uncomputable functions and algorithmically undecidable problems.

• Introduction: algorithmically undecidable problems. Decision problems. The informal notion of algorithm, or effective procedure. Examples of algorithmically undecidable problems. [1 lecture]
• Register machines. Definition and examples; graphical notation. Register machine computable functions. Doing arithmetic with register machines. [1 lecture]
• Universal register machine. Natural number encoding of pairs and lists. Coding register machine programs as numbers. Specification and implementation of a universal register machine. [2 lectures]
• Undecidability of the halting problem. Statement and proof. Example of an uncomputable partial function. Decidable sets of numbers; examples of undecidable sets of numbers. [1 lecture]
• Turing machines. Informal description. Definition and examples. Turing computable functions. Equivalence of register machine computability and Turing computability. The Church-Turing Thesis. [2 lectures]
• Primitive recursive functions. Definition and examples. Primitive recursive partial functions are computable and total. [1 lecture]
• Partial recursive functions. Definition. Existence of a recursive, but not primitive recursive function. Ackermann's function. A partial function is partial recursive if and only if it is computable. [2 lectures]
• Recursive and recursively enumerable sets. Decidability and recursive sets.
Generability and recursive enumeration. Example of a set that is not recursively enumerable. Example of a recursively enumerable set that is not recursive. Alternative characterisations of recursively enumerable sets as the images and the domains of definition of partial recursive functions. [2 lectures]

At the end of the course students should
• be familiar with the register machine and Turing machine models of computability
• understand the notion of coding programs as data, and of a universal machine
• be able to use diagonalisation to prove the undecidability of the Halting Problem
• understand the mathematical notion of partial recursive function and its relationship to computability
• be able to develop simple mathematical arguments to show that particular sets are not recursively enumerable

Recommended reading
* Hopcroft, J.E., Motwani, R. & Ullman, J.D. (2001). Introduction to automata theory, languages, and computation. Addison-Wesley (2nd ed.).
Cutland, N.J. (1980). Computability. An introduction to recursive function theory. Cambridge University Press.
Davis, M.D., Sigal, R. & Weyuker, E.J. (1994). Computability, complexity and languages. Academic Press (2nd ed.).
Sudkamp, T.A. (1995). Languages and machines. Addison-Wesley (2nd ed.).
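As an illustrative aside (not part of the syllabus), the Ackermann function named in the topic list above, a total computable function that is not primitive recursive, can be sketched in a few lines of Python:

```python
# Illustrative aside, not part of the syllabus: the two-argument
# Ackermann-Peter function, which is computable and total but grows too
# fast to be primitive recursive.

def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9   (for m = 2, A(2, n) = 2*n + 3)
print(ackermann(3, 3))  # 61  (for m = 3, A(3, n) = 2**(n + 3) - 3)
```

Already at m = 4 the values become astronomically large, which is the informal sign that no fixed nesting of primitive recursion can keep up.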
Dallas Algebra Tutor Find a Dallas Algebra Tutor ...To really make use of this subject, you need to understand not only how to solve a problem, but why that solution works. I focus on teaching WHY! This not only makes the solution easier to remember; it will also make it much easier to apply to different problems! 23 Subjects: including algebra 1, algebra 2, English, writing ...See my profile description for a better understanding of the excellence I bring to the table as an accounting tutor. My expertise in business comes from my master's and bachelor's degree in business administration as well as my years of experience in business, serving companies that range in siz... 17 Subjects: including algebra 1, algebra 2, accounting, finance I am a mathematics tutor (developmental math and algebra) at a college Mathematics lab, so I am used to tutoring. I am also fluent in French (which is my first language). Moreover, I successfully passed the CLEP exam in French with a perfect score of 80/80. I can successfully tutor French as well as Mathematics. 5 Subjects: including algebra 1, algebra 2, French, prealgebra BS- MathematicsMAEd - Curriculum & InstructionCurrently pursuing an EdD in Instructional Technology & Distance Education6.5 years experience teaching high school mathMiddle, high school, and college level tutoring experience as wellSince I have decided the pursue my doctoral degree full time, I am a... 15 Subjects: including algebra 1, algebra 2, reading, elementary (k-6th) ...My work experience included being a chief financial officer of an public corporation for over twenty-five years and previously a corporate controller. I was involved in training young accountants on the process from journal entries and sub ledgers to financial statement report preparation. I ha... 
19 Subjects: including algebra 1, algebra 2, reading, writing
MathGroup Archive: July 2002

Re: Changing delayed formula

• To: mathgroup at smc.vnet.net
• Subject: [mg35486] Re: [mg35461] Changing delayed formula
• From: "Julio Vera" <jvera at adinet.com.uy>
• Date: Tue, 16 Jul 2002 04:49:46 -0400 (EDT)
• References: <200207130749.DAA08590@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com

Thank you very much for the very useful information sent on this issue. I had tried Outer, but was not able to make it work properly, since I did not use Sequence. It took a while, but I think I understood how Sequence works now. This definitely solves my problem. Distribute works too, but it needs an extra step to substitute the commas in the sublists by &&. I had seen Map, but didn't understand it. I was substituting it with a forced use of Table. This was valuable information to receive, too.

The question about the "growing" delayed formula was not answered, since it became unnecessary. Maybe a formula of the sort is of no use within Mathematica. Even though it might be strictly theoretical, or out of sheer curiosity, I would like to know if it is possible to define such a delayed formula. I can define both sides of the formula independently, but when I join them, the result does not work. Though it is not rejected by Mathematica, it is as if it was not defined. The output is the same as the one obtained without defining the formula.

Thanks again for all the help.

----- Original Message -----
From: "Julio Vera" <jvera at adinet.com.uy>
To: mathgroup at smc.vnet.net
Subject: [mg35486] [mg35461] Changing delayed formula

> Hi,
> I have a list of lists. Its length (the number of sublists it
> contains) varies.
> In[1]:= des={{11,12,13,14},{21,22,23},{31,32},{41,42,43,44,45}}
> Out[1]:= {{11,12,13,14},{21,22,23},{31,32},{41,42,43,44,45}}
> The length of each of the sublists is arbitrary, too. So I have this
> list of lengths.
> In[2]:= elems=Flatten[Table[Dimensions[des[[i]]],{i,Length[des]}]]
> Out[2]:= {4,3,2,5}
> I want to obtain the list of all combinations of one element of each
> sublist, bounded by &&. This will be a list of 120 elements, each of
> them with 4 components. I define a delayed formula, and apply Array to
> it (the characters printed as bold are subscripts in the Mathematica
> notebook).
> In[3]:=
> cond4[a_,b_,g_,d_]:=des[[1,a]]&&des[[2,b]]&&des[[3,g]]&&des[[4,d]]
> In[4]:= combi=Flatten[Array[condLength[des],elems]]
> Out[4]:=
> {11&&21&&31&&41,11&&21&&31&&42,11&&21&&31&&43,11&&21&&31&&44,...
> ...14&&23&&32&&42,14&&23&&32&&43,14&&23&&32&&44,14&&23&&32&&45}
> Since the length of des varies, I would have to define cond each time.
> For instance:
> cond3[a_,b_,g_,d_]:=des[[1,a]]&&des[[2,b]]&&des[[3,g]]
> I would like to make a definition for cond that would adapt to these
> changes automatically.
> I arrived to this solution, which is not rejected by Mathematica, but
> does not work, either.
> In[5]:= d[a_,b_]:=des[[a,b<>"_"]]
> In[6]:= Unprotect[ReplaceAll]
> Out[6]:= {ReplaceAll}
> In[7]:=
> In fact, I quit the kernel and rerun all the cells except the one
> written here as In[3]. If not, the definition for cond4 remains as it
> was. I was not able to clear cond4 individually.
> Thanks very much for anything you can suggest.
> Best regards,
> Julio Vera
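As an aside (not part of the archived thread), the combination-building the quoted message asks for is a Cartesian product: one element from each sublist, however many sublists there are. In Python terms that is itertools.product, sketched here with the same des data:

```python
# Aside, not from the archived thread: the combinations described in the
# quoted message form a Cartesian product, which Python exposes directly.
from itertools import product

des = [[11, 12, 13, 14], [21, 22, 23], [31, 32], [41, 42, 43, 44, 45]]

# One element from each sublist, regardless of how many sublists there
# are -- the part that required Outer with Sequence in Mathematica.
combi = list(product(*des))

print(len(combi))  # 120  (4 * 3 * 2 * 5)
print(combi[0])    # (11, 21, 31, 41)
print(combi[-1])   # (14, 23, 32, 45)
```

The Mathematica solutions mentioned in the reply (Outer with Sequence, or Distribute) compute the same 120 tuples; joining each tuple with && is then a separate, purely structural step.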
Substitution Ciphers

From MRL Wiki

Substitution ciphers work by encoding each individual character of the plaintext into ciphertext according to some correspondence table.

Cryptanalysis

The strength of substitution ciphers can be undermined by looking for repeated patterns and for common initial and final letters. There are several approaches to breaking substitution ciphers:

• Reversing the substitution algorithm itself by finding patterns.
• Brute force, which is often unfeasible because there are 26! possible substitution alphabets, but in simpler cases it can work as a shortcut.
• Word and letter frequency analysis. The most common letters in the English language are E, T, O, and A, which should also stand out in ciphertext produced by weak substitution ciphers.
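As an illustrative aside (not part of the wiki page), the correspondence-table idea and the frequency-analysis attack can both be sketched in Python; the rotation-by-3 key below is an arbitrary example, not a recommendation:

```python
# Illustrative sketch, not from the wiki page: a monoalphabetic
# substitution cipher via a correspondence table, plus the letter-frequency
# count that cryptanalysis exploits.
import string
from collections import Counter

plain_alphabet = string.ascii_uppercase
# An arbitrary example key: the alphabet rotated by 3.
cipher_alphabet = plain_alphabet[3:] + plain_alphabet[:3]

encode = str.maketrans(plain_alphabet, cipher_alphabet)
decode = str.maketrans(cipher_alphabet, plain_alphabet)

message = "ATTACK AT DAWN"
ciphertext = message.translate(encode)
print(ciphertext)                    # DWWDFN DW GDZQ
print(ciphertext.translate(decode))  # ATTACK AT DAWN

# Frequency analysis: the most common ciphertext letters likely stand for
# the most common plaintext letters (E, T, O, A in English).
counts = Counter(c for c in ciphertext if c.isalpha())
print(counts.most_common(2))         # [('D', 4), ('W', 3)]
```

Note how the plaintext letter A, one of the most common English letters, surfaces as the most common ciphertext letter D, which is exactly the leak frequency analysis exploits.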
ECU Undergraduate Catalog 2000-2001 - Section 9 - Courses

All JUST courses numbered above 2999 are criminal justice/social work major-only courses. 1000. The Criminal Justice System (3) (F) (S) (SS) Overview and discussion of the roles, problem areas, and suggested program changes for police and law enforcement, detention services, courts, community correctional services, and correctional institutions. 2000. The Criminal Offender (3) (F) (S) (SS) Examination of the legal, sociological, psychological, medical, and constitutional approaches to the understanding of criminal behavior. 3003. Addiction, Crime, and the Criminal (3) P: JUST 3500. Study of crime's relationship to alcohol and drug addiction and abuse. 3007. Criminal Investigation (3) (F) P: JUST 3500. Fundamentals of criminal investigation including the various types of physical evidence, collection and preservation of evidence, preliminary procedures, crime scene searches, major crime investigations, and court appearances. 3008. Correctional Systems (3) (S) P: JUST 3500. Examines federal, state, and local correctional operations. Topics include the role and purpose of correctional facilities, historical and philosophical development, management and organizational principles, custody and security operations, treatment and classification issues, custody levels of various correctional facilities for men and women, and the role of correctional personnel. 3012. Police Operations (3) (S) P: JUST 3500. Examines the role and operation of law enforcement organizations in the United States. Includes accountability, legal issues, and community 3100. Interviewing and Crisis Management (3) (F) (S) P: JUST 1000, 2000. Provides introduction to interviewing and crisis intervention techniques. Examines interactions with persons other than offenders, including victims, witnesses, children and families of those involved in criminal activity, and individuals in crisis situations.
Analyzes techniques for management of crises encountered by criminal justice personnel. Focuses on development of effective communication skills, mediation of conflict, and methods of defusing violence. 3500. Principles of Criminal Law (3) (F) (S) P: JUST 1000, 2000. Nature, sources, and types of criminal law; the examination in detail of selected specific criminal offenses; and criminal liability and defenses and basic legal research. 3501. Criminal Procedure (3) (WI) (F) (S) P: JUST 3500. Rules and procedures that govern the criminal justice process from arrest through search, interrogation, indictment, arraignment, and trial until final sentence; review and rights given to prisoners; and basic concepts from the Constitution on which the due process rights of individuals are based. 3502. Correctional Law (3) (WI) (F) (S) P: JUST 3500. Examines the legal issues of confining prisoners and operating a correctional facility. Prisoners' rights, constitutional issues, and the legal role and responsibilities of jails, prisons, and community correctional personnel. Role of the courts in correctional matters. Traces the development of correctional law in the US. 3700. Public Safety in a Multicultural Environment (3) (F) (S) P: JUST 3500. Focus on issues related to providing public safety services in communities in which cultural, ethnic, racial, philosophical, and moral diversity exists. Also addresses discrimination within the system, including hiring, promotion, and assignment policies. 3800. Research Methods in Criminal Justice (3) (F) (S) P: JUST 3500. Examination and discussion of research design, conceptualization, hypothesis formulation, measurement, sampling techniques, data management, and research writing as they relate to the field of criminal justice. 4004. Criminal Justice History (3) (S) P: JUST 3500. Examines the development of major aspects of criminal justice from pre-historic time to the present day. 
Students will be exposed to past practices in American criminal justice as well as other societies. 4005. Organized Crime (3) (SS) P: JUST 3500. Examines the type of individuals and organizations involved in organized crime, the type of activities conducted, historical and socio-political forces which facilitate organized criminal behavior, structural aspects of organized crime, and official responses to this phenomenon. 4006. Community Corrections (3) (F) or (SS) P: JUST 3800. Designed to teach the student how to apply intervention methods within particular community service‑delivery constructs. 4200. The Juvenile Justice System (3) (F) (S) P: JUST 3800. Examines conditions under which delinquency occurs and explores strategies and treatment interventions which have been identified as most effective in dealing with delinquent behaviors. Explores the role of the juvenile court in the prevention and control of delinquency. Special emphasis given to the changing role of the court and implications for professional practice. 4300. Criminal Justice Administration (3) (F) (S) P: JUST 3500. Provides an understanding of the basic concepts of organization and management as applied to criminal justice organizations, including management principles, supervision, and leadership. 4401, 4402, 4403. Independent Study (1,2,3) (F) (S) (SS) May be repeated for a maximum of 3 s.h. credit. P: JUST 3500. Selected readings, research, or studies related to criminal justice. Faculty conferences to be arranged by student-faculty contracts for program approved by director of the criminal justice program. 4500. Issues and Problems in Criminal Justice (3) (F) (S) Should be taken during last semester in criminal justice program. P: JUST 3501 or 3502; senior standing. Examination and discussion of values, ethics, and major issues of concern to the American criminal justice system. 4600. Special Topics in Criminal Justice (3) (F) or (S) or (SS) May be repeated for credit with change of topic. 
P: JUST 3500. Study of specialized topics and current developments in criminal 4990. Field Education and Seminar (9) (F) (S) 2 seminar hours per week; 4 days directed field education per week. Application for admission to this course must be received 2 semesters in advance of placement. P: Minimum cumulative 2.5 GPA to be eligible for consideration; completion of all required JUST and supportive area courses; selection based upon availability of appropriate placements and criteria specified in Criminal Justice Student Handbook. Supervised field education opportunity in approved agencies offered during the final semester of the criminal justice program. 3000. Residential Institutions (3) 3006. Security Systems (3) 3009. Corrections Case Management (3) 4001. Police Organization and Administration (3) 4002. Correctional Administration (3) 5000. Comparative Criminal Justice (3) The following courses will satisfy the general education humanities requirement: LATN 2021, 2022, 3021, 3022. 1001. Latin Level I (3) (F) First of a 2-semester course sequence. Training in the principles of Latin grammar with an emphasis on reading skills. Correct pronunciation taught, but no other oral skills required. All communication in English. 1002. Latin Level II (3) (S) P: LATN 1001; placement by examination; or consent of instructor. Second of a two-course sequence. Completion of basic skills of Latin grammar. Elementary readings introduced, adjusted to the level of the student. 1003. Latin Level III (3) (F) P: LATN 1002; placement by examination; or consent of instructor. Intensive review and application of basic skills of grammar acquired in LATN 1001-1002. Development of reading skills through selected works of a major author such as Cicero or Caesar. 1004. Latin Level IV (3) (S) P: LATN 1003; placement by examination; or consent of instructor. Continued development of reading skills and introduction to critical approaches to literature. 
Readings in the poetry of a major author such as Vergil, Catullus, or Ovid. 2021. Age of Cicero (3) P: LATN 1004 or consent of instructor. Readings in Latin literature from Cicero, Caesar, Catullus, Lucretius, Varro, and Sallust. A coherent literary and historical portrait of the last fifty years of the Roman republic. 2022. Age of Augustus (3) P: LATN 2021 or consent of instructor. Readings in Latin literature to be taken from Horace's Odes, Vergil's Ecologues and Georgics, Propertius' Elegies, Augustus' Res Gestae, and the works of Ovid and Tibullus. A coherent literary and historical portrait of the early empire in Rome. 3021. Silver Age Latin Literature (3) P: LATN 2022 or consent of instructor. Advanced course with readings to be taken from Seneca's prose, Lucan, Petronius, Tacitus, Pliny the Younger, and 3022. Roman Drama (3) P: LATN 3021 or consent of instructor. Advanced course with readings to be taken from Plautus, Terence, and Senecan tragedy. 4521, 4522, 4523. Directed Readings in Latin (1,2,3) May be repeated. P: Consent of instructor. Indepth exploration of a selected aspect of Roman culture (literature, civilization, etc.). 3401, 3402, 3403. Seminar in Leadership Development (1,2,3) (S) P: Nomination by student's dean/chairperson. Series of seminars designed to acquaint students with a variety of leadership experiences and patterns. Each seminar led by a thought leader from a different area of society. 1000. Research Skills for Electronic and Print Resources (1) (F) (S) (SS) Introduction to university electronic and print information sources. 3102. Research Sources and Techniques (3) How to select and research topics in all areas through reference and nonreference materials. Designed to meet the student's academic interests and needs in general and major areas. 3200. 
The Art of Storytelling (3) (S) Selecting, adapting, evaluating, and using the art of storytelling in professions such as human services, business, education, recreation, health care, and entertainment. Emphasis on storytelling performance for audiences of all ages. 4950. Literature for Children (3) (WI) (S) Same as ENGL 4950. May not count toward general education literature requirement or as advanced elective for ENGL majors. Survey of literature for children from early childhood through junior high school. 2123. Early Experiences for the Prospective Teacher (1) 4323. School Media Specialist in Grades K-12 (3) 4324. Observation and Supervised Participation as a School Media Specialist (8) 5114. Materials for Children (2) 5115. Materials for Young Adults (2) 2076, 2077. Non‑Polymeric Materials (3,0) (F) (S) 2 lecture and 4 lab hours per week. P: ITEC 2000, 2001, 2020; DESN 2034, 2035. Studying the shaping, forming, and utilization of non‑polymeric materials such as metals, ceramics, and combinations that are used in the various manufacturing processes in industry. A practical approach, having the students plan and conceive products, will be used. 3020. Manufacturing Processes (3) (WI*) (F) (S) (SS) P: ITEC 2090; MANF 2076, 2077. Broad survey course of the common manufacturing processes used to produce industrial products. Includes an overview of the latest manufacturing process techniques. 3500. Automation Systems (3) (F) 2 lecture and 2 lab hours per week. P: ELEC 2054; MANF 3020. Study of the basic types of automated systems commonly used in industry, including control systems and common types of computer applications in the design, development, and management of automated manufacturing systems. 3800. Capital Equipment (3) (S) P: ACCT 2401; ITEC 3292. Analysis of competitive equipment offerings, make‑versus‑buy opportunities and repair‑versus‑replacement costs associated with manufacturing and construction equipment decisions. 4020, 4021.
Process System Design (3,0) (F) 2 lecture and 2 lab hours per week. P: ITEC 3292, 4300; MANF 3300, 3500; 3 s.h. management/human relations elective; consent of instructor. Study, planning, and selection of processes for manufacturing various products. Emphasis is placed on selection criteria such as safety, material, jigs, fixtures, layout, and overall efficiency. 4023. Process System Application (3) (F) (S) 6 lab hours per week. P: MANF 4020, 4021; consent of instructor. Planning and layout of a processing system for manufacturing of a line product. Emphasis is placed on process design, costing, control systems, and setup. 4200. Work Methods Analysis (3) (S) P: MANF 3300. Analysis of work methods and a study of work measurement systems. Includes the principles of motion study, work simplification, and work measurement by direct and predetermined motion‑time systems. 4502. Laboratory Problems: Production (3) (F) (S) 6 lab hours per week. P: MANF 3020. Independent study of industrial manufacturing systems, processes, and concepts. 4507. Laboratory Problems: Metals (3) 6 lab hours per week. P: MANF 2076, 2077. In-depth and independent study of concepts and/or processes of the metals area, its tools, and materials, with a strong emphasis on lab work. 5504. Independent Study: Manufacturing (3) May be repeated for credit with consent of department chair. P: Consent of instructor. Research‑oriented course in problem solving with the tools, materials, and processes of the manufacturing industries. 2066, 2067. Polymeric Materials (3,0) 2072, 2073. Metals Technology I (3,0) 3072. Metals Technology (3) 3300. Plant Layout and Materials Handling (3) (S) 4060, 4061. Woods Products Manufacturing (3,0) 4092, 4093. Manufacturing (3,0) 4094, 4095. Industrial Maintenance (3,0) 4501. Laboratory Problems: Maintenance (3) 5060. Organic Matrix Composite Materials (3) 5090, 5091. Fluid Power Circuits (3,0) 0001.
Intermediate Algebra-A (2) (F) (S) (SS) May not be taken by students who have credit for any of the following courses: MATH 0045, 1065, 1074, 1075, 1085, 2119, 2171, or who have passed the mathematics placement test. May not count toward general education mathematics requirement, certification, or degree requirements. Remedial course in basic algebra; some sections may be taught in a lab/tutorial mode. 0045. Intermediate Algebra-B (2) (F) (S) May not be taken by students who have credit for any of the following courses: MATH 0001, 1065, 1074, 1075, 1085, 2119, 2171, or who have passed the mathematics placement test. May not count toward general education mathematics requirement, certification, or degree requirements. Remedial course in basic algebra; some sections may be taught in a lab/tutorial mode. 1065. College Algebra (3) (F) (S) (SS) (GE:MA) May not be taken by students who have credit for MATH 1085. P: Appropriate score on mathematics placement test. Covers the usual topics: sets; linear, quadratic, polynomial, and exponential functions; inequalities; permutations; combinations; the binomial theorem; and mathematical induction. 1066. Applied Mathematics for Decision Making (3) (F) (S) (SS) (GE:MA) Required for students planning to major in business administration or accounting. P: Appropriate score on the mathematics placement test or approval of the department chair. Develops skills in formulating models for and interpreting solutions to business word problems. Topics covered: linear equations, nonlinear equations, systems of linear equations, applications of matrix algebra, and applied basic differential calculus. (No proofs will be included.) 1067. Algebraic Concepts and Relationships (3) (F) (S) (SS) (GE:MA) May not count toward MATH or CSCI major or minor. P: Appropriate score on mathematics placement test. 
Study of the properties of the integers, rationals, real and complex numbers and polynomials from an algebraic point of view; conjectures and intuitive proofs in number theory; the properties of linear and quadratic functions. Representations of real-world relationships with physical models, charts, graphs, equations and inequalities. Emphasis on the development of problem-solving strategies and abilities. 1074. Applied Trigonometry (2) (F) (S) (SS) Students planning to take MATH 2171 must elect 1085. May not be taken by students who have credit for MATH 1075 or 1085. P: MATH 1065. Study of trigonometry emphasizing the practical and computational aspects of the subject. The properties of the trigonometric functions: use of tables, interpolation, logarithms, solution of right and oblique triangles, and applications. 1075. Plane Trigonometry (3) May not be taken by students who have successfully completed MATH 1074 or 1085. P: MATH 1065. Includes the topics usually covered in a plane trigonometry course with trigonometric functions and related concepts, trigonometric identities and their applications, graphs of trigonometric functions, graphs of inverse trigonometric relations and functions, trigonometric equations, and vectors. 1077. Pre-Calculus Concepts and Relationships (3) (S) May not count toward MATH or CSCI major or minor. P: MATH 1067. Modeling approach to the study of functions (including logarithmic, exponential, and trigonometric functions), data analysis and matrices; lays a foundation for future course work in calculus, finite mathematics, discrete mathematics, and statistics. 1085. Pre‑Calculus Mathematics (5) (F) (S) (SS) (GE:MA) May not be taken by students who have credit for MATH 1074 or 1075. P: MATH 1065 with a minimum grade of C. One‑semester course in algebra and trigonometry for qualified students who plan to take calculus. 2119. 
Elements of Calculus (3) (F) (S) (SS) (GE:MA) May not receive credit for MATH 2119 after having received credit for a higher numbered calculus course. P: MATH 1065 with a minimum grade of C. Elementary differentiation and integration techniques. Proofs are not emphasized. 2121. Calculus for the Life Sciences I (3) (F) (S) (SS) (GE:MA) May receive credit for only one of MATH 2121, 2119. May not receive credit for MATH 2121 after taking MATH 2171. P: MATH 1065 or 1077 with a minimum grade of C. Introductory differential calculus with applications for students in the biological sciences. Introduction to and differentiation of the exponential, logarithmic, and trigonometric functions; with applications to exponential and periodic phenomena, related rates, regions of increase, and extrema. 2122. Calculus for the Life Sciences II (3) (F) (S) (SS) Continuation of MATH 2121. May not receive credit for MATH 2122 after taking MATH 2172. P: MATH 2121. Introductory integral calculus with applications for students in the biological sciences. Introduction to and applications of definite integrals. Probability density functions. Functions of several variables, partial derivatives, simple differential equation and difference equation models, and the arithmetic of matrices and vectors. 2123. Early Experiences for the Prospective Teacher (1) (F) (S) P: MATH 2171. Minimum of 16 hours of directed observations and planned participation in appropriate school environments and 8 hours of seminar class instruction in the teaching area. May not count toward BA in MATH major or minor. Introduction to the teaching of mathematics designed for prospective teachers. 2124. Elementary Mathematical Models (1) (F) P: MATH 2171. Formulation and solution of various types of problems using the techniques of establishing a mathematical model. 2127. Basic Concepts of Mathematics (3) (F) (S) (SS) (GE:MA) May not count toward MATH or CSCI major or minor. P: Appropriate score on mathematics placement test. 
System of real numbers and subsystems and their properties from an algebraic viewpoint. Statistics and number theory are also introduced. 2129. Basic Concepts of Mathematics (2) (F) (S) (SS) May not count toward MATH or CSCI major or minor. P: MATH 2127. Second course in a sequence for elementary education majors. Methods and language of geometry and the relationship of geometry to the real world. 2171, 2172, 2173. Calculus I, II, III (4,4,4) (F) (S) (SS) (GE:MA) P for 2171: MATH 1085 or 2122 with a minimum grade of C; P for 2172: MATH 2122 with a minimum grade of C or MATH 2171; P for 2173: MATH 2172. Integrated sequence of courses in the geometry of the plane and space and in the fundamentals of calculus. Topics include curves in the Cartesian plane, properties of functions, limits and continuity, differentiation and integration of algebraic and transcendental functions and their applications, conics, polar and parametric equations, the rudiments of solid analytic geometry, partial differentiation, multiple integration, infinite series, and expansion of functions. 2228. Elementary Statistical Methods I (3) (F) (S) (SS) May not count toward MATH major or minor. May receive credit for only one of MATH 2228, 2283. P: MATH 1065 or equivalent. Collection, systematic organization, analysis and interpretation of numerical data obtained in measuring certain traits of a given population. Designed for students with limited mathematical training. 2282. Data Analysis and Probability (3) (F) (S) (SS) May not count toward MATH or CSCI major or minor. May receive credit for only one of MATH 2282, 2935. P: MATH 1067. Collection of data from experiments and surveys. Organizing and representing data. Interpreting data for the purpose of judging claims, making decisions, or making predictions. 2283. Statistics for Business (3) (F) (S) (SS) May receive credit for only one of MATH 2228, 2283. P: MATH 1065 or 1066 or equivalent. 
Sampling and probability distributions, measures of central tendency and dispersion, hypothesis testing, Chi‑square, and regression. 2427. Discrete Mathematical Structures (3) (F) (S) May not count toward MATH major or minor. May receive credit for only one of MATH 2427, 2775, 3237. P: MATH 1065 or 1066. Study of discrete mathematical structures. Special emphasis is given to those structures most important in computer science. Practical applications of the subject are considered. 2775. Topics in Discrete Mathematics (3) (F) May receive credit for only one of MATH 2427, 2775, 3237. P: MATH 1085. Study of selected topics in discrete mathematics appropriate for prospective teachers of secondary school mathematics. Some of the topics include: counting techniques, graph theory, difference equations, recursion, iteration, induction, and dynamical systems. 2935. Data Analysis (3) (S) May receive credit for only one of MATH 2282, 2935. P: MATH 1085. Introductory data analysis course utilizing a hands-on approach to the collection, representation, and interpretation of data. Some of the topics include types of data, sampling techniques, experimental probability, sampling distributions, simulations, and hypothesis testing using collected data. 3004. Seminar in Secondary Mathematics Curriculum Algebra (1) (S) May not count toward BA in MATH or minor. 10 practicum hours per semester. P: MATH 2123. Study of the teaching and learning of introductory high school algebra. 3005. Seminar in Secondary Mathematics Curriculum Geometry (1) (F) May not count toward BA in MATH or minor. 10 practicum hours per semester. P: MATH 2123; C: MATH 3233. Study of the teaching and learning of high school geometry. 3006. Seminar in Secondary Mathematics Curriculum Advanced Mathematics (1) (S) May not count toward BA in MATH or minor. 10 practicum hours per semester. P: MATH 3004, 3005. Study of the teaching and learning of advanced high school mathematics. 3166. 
Euclidean Geometry (3) (F) (S) (SS) May not count toward MATH or CSCI major or minor. P: MATH 1065 or 1067; 2127. Study of Euclidean geometry using deductive and inductive mathematical reasoning. Formal proofs are required. 3174. Vector Calculus (3) (S) P: MATH 2173. Review of vector algebra and vector functions of a single variable. Scalar and vector fields, line and surface integrals, and multiple integrals will be studied. 3218. Teaching Mathematics in Special Education (3) (F) (S) (SS) 4 lecture/lab hours per week. Lab and practicum experiences required. May not count toward MATH major or minor. P: Admission to upper division; MATH 1065, 2127; SPED 2000; at least one of the following: SPED 2102, 2103, 2104; RP: MATH 2129. Methods, materials, and techniques of teaching mathematics to special education students. 3223. Teaching Mathematics in the Elementary Grades K‑6 (3) (F) (S) (SS) 2 lecture and 2 lab hours per week. P: MATH 2129. Teaches preservice elementary teachers appropriate techniques and methods for teaching mathematics to students in grades K‑6. Lab work provides deeper understanding of mathematical concepts and experience with materials and methods appropriate for classroom work. 3229. Elementary Statistical Methods II (3) (F) (S) May not count toward MATH major or minor. P: MATH 2228 or equivalent. Collection, systematic organization, analysis and interpretation of numerical data obtained in measuring certain traits of a given population. Designed for students with limited mathematical training. 3233. College Geometry (3) (F) P: MATH 2171. Modern college geometry is presented as an outgrowth and extension of elementary plane geometry. Important theorems relative to the nine‑point circle, cross ratios, the geometry of circles, and solid geometry are emphasized. Euclidean transformations are also discussed. 3237. Discrete Mathematics (3) (F) May not count toward MATH or CSCI major or minor. May receive credit for only one of MATH 2427, 2775, 3237. P: MATH 2121.
Introduction to logic and sets, mathematical induction, and matrices. Applications of discrete mathematics in probability, linear programming, dynamical systems, social choice, and graph theory. 3238. Applied Mathematics for Teachers (2) P: MATH 1065. Applications of mathematics to business, education, science, social science, and other fields will be included. The microcomputer may be used in studying applications. No previous knowledge of microcomputers is required. 3239. Applied Mathematics Via Modeling (3) (S) May not count toward MATH or CSCI major or minor. P: MATH 2122, 2282, 3166, 3237. Consideration of real world problems that can be modeled with algebra, geometry, calculus, and statistical, probabilistic, discrete, or other mathematical techniques appropriate for prospective teachers of middle school mathematics. Mathematical modeling processes will be examined through historical and contemporary modeling success stories. Power and limitations of mathematical modeling will be considered. 3256. Linear Algebra (3) (F) (S) (SS) P: MATH 2172. Study of vector spaces, linear maps, matrices, systems of equations, determinants, and eigenvalues. 3263. Introduction to Modern Algebra (3) (WI) (F) (S) (SS) P: MATH 3256. Presentation of the postulational viewpoint of modern algebra. Defining postulates for a mathematical system are exhibited from which the properties of the system are then derived. Principal systems studied are groups, rings, fields, each fully treated with illustrative examples. 3307. Mathematical Statistics I (3) (F) (S) (SS) P: MATH 2172. Axiomatic development of the theory of probability and application of the theory of probability to the construction of certain mathematical models. 3308. Mathematical Statistics II (3) (F) P: MATH 3307. Construction of mathematical models for various statistical distributions. Includes testing of hypotheses and estimation, small‑sample distributions, regression, and linear hypotheses. 3550, 3551.
Mathematics Honors (2,1) (F) (S) (SS) P: MATH 2173 or consent of instructor. Open to students with exceptional mathematical ability who have completed MATH 2173. Acceptance in the program entitles the student to register for MATH 3550 or 3551. 3573. Introduction to Numerical Analysis (3) (S) Same as CSCI 3573. P: CSCI 2510 or 2600; MATH 2119 or 2172 or equivalent. Gives the student an understanding of algorithms, suitable for digital computation in the areas of linear algebra, linear programming, slope finding, area finding, and nonlinear equation solution. 3584. Computational Linear Algebra (3) (F) (S) (SS) May not count toward MATH major or minor. P: Calculus course. Introduction to the study of vectors, matrices, and determinants. Special emphasis is given to the application of linear algebra to the solution of practical problems. 4001. Technology in Secondary Mathematics Education (3) (F) May not count toward MATH major or minor. 2 lecture and 2 lab hours per week. P: Admission to upper division; MATH 2775, 2935; C: MATH 4323. Study of the uses and implications of calculators and computers in the secondary mathematics curriculum. 4201. Introduction to Stochastic Processes (3) (S) P: MATH 3307 or equivalent or consent of instructor. Introduction to the fundamental theory and models of stochastic processes. Topics include expectations and independence; sums of independent random variables; Markov chains, limiting behavior and applications of Markov chains; Poisson processes; birth and death processes; Gaussian processes. 4319. Teaching Mathematics in the Middle Grades (3) (F) 4 hours per week and 10-12 hours of field experience. May not count toward MATH or CSCI major or minor. P: Admission to upper division; EDUC 3200; MIDG 3010, 3022; MATH 2122, 2282, 3166, 3237; or consent of instructor; C: MIDG 4001, 4010; ENGL or HIST or MIDG or SCIE 4319; or consent of instructor. Study of techniques and methods of teaching mathematics in grades 6‑9. 4323.
The Teaching of Mathematics in High School (3) (F) 4 hours per week. May not count toward BA in MATH or minor. P: MATH 2123. Modern methods and techniques used in teaching secondary school mathematics are carefully considered. 4324. Internship in Mathematics (10) (S) Full-time, semester-long internship. May not count toward BA in MATH or minor. P: Admission to upper division; MATH 4323; C: MATH 4325; READ 3990. Observation and supervised teaching in mathematics in an assigned public secondary school classroom. 4325. Internship Seminar: Issues in Mathematics Education (1) (S) May not count toward BA in MATH or minor. P: Admission to upper division; MATH 4323; C: MATH 4324. Individualized study of problems or issues related to mathematics education. 4331. Introduction to Ordinary Differential Equations (3) (F) (S) P: MATH 2173. Introduction to certain linear and non‑linear differential equations. 4332. The Calculus of Finite Differences (3) P: MATH 2173. Designed to study discrete changes that take place in the values of a function and its dependent variable due to discrete changes in the independent variable. 4501, 4502, 4503. Independent Study (1,2,3) (F) (S) (SS) Number of hours per week will depend on the credit hours and the nature of the work assigned. P: Mathematics major and consent of department chair. Designed to provide advanced mathematics students an opportunity to study topics supplementing the regular curriculum. 4550, 4551. Mathematics Honors (2,1) (F) (S) (SS) Open to students with exceptional mathematical ability who have completed MATH 2173. Acceptance in the program entitles the student to register for MATH 4550 or 4551. P: MATH 2173 or consent of instructor. 5000. Introduction to Sampling Design (3) (F) P: MATH 3308 or 3229 or consent of instructor. Fundamental principles of survey sampling, including data sources and types, questionnaire design, various sampling schemes, sampling and non-sampling errors and statistical analysis. 5002.
Logic for Mathematics and Computer Science (3) (S) Same as CSCI 5002. P: CSCI 3510 or MATH 2427 or 2775 or 3223 or 3256 or PHIL 3580 or equivalent. Introduction to methods of mathematical logic that have important applications in mathematics and computer science. 5021. Theory of Numbers I (3) (S) P: MATH 3263 or consent of instructor. Topics in elementary theory of numbers such as properties of integers, residues, congruences, and certain fundamental theorems. Also, binary quadratic forms, algebraic numbers, and irrationality and transcendence of numbers are studied. 5031. Applied Statistical Analysis (3) (WI) May not count toward mathematics hours required for MA or MAEd in mathematics. P: MATH 2228, 2584; or equivalent; or consent of instructor. Topics include analysis of variance and covariance, experimental design, multiple and partial regression and correlation, nonparametric statistics, and use of a computer statistical package. 5064. Introduction to Modern Algebra II (3) May not receive credit for MATH 5064 after taking MATH 6011. P: MATH 3263 or consent of instructor. Continuation of the development of topics begun in MATH 3263, including normal subgroups, factor groups, homomorphism, rings, ideals, quotient rings, and fields. 5101 (F), 5102 (S). Advanced Calculus I, II (3,3) P for 5101: MATH 2173 or consent of instructor; P for 5102: MATH 5101, 3256; or consent of instructor. Treats the basic properties of the real number system, point sets, theory of limits, ordinary and uniform continuity, the fundamental theorems of calculus, infinite series and regions of convergence, improper integrals. 5110. Elementary Complex Variables (3) (F) P: MATH 2173. Study of complex numbers, analytic functions, mapping by elementary functions, integrals, residues, and poles. 5121. Numerical Analysis in One Variable (3) P: MATH 2173; CSCI 2600 or equivalent knowledge of PASCAL or PL/1.
Numerical analysis of problems with one independent variable, including solution of non‑linear equations in one unknown, interpolation and approximation of functions of one variable, numerical integration, and numerical differentiation and optimization. 5122. Numerical Analysis in Several Variables (3) P: MATH 3256, 4331; CSCI 2600 or equivalent knowledge of PASCAL or PL/1. Numerical analysis of problems with several independent variables, including numerical solution of ordinary differential equations, systems of linear equations, numerical linear algebra and matrix algebra, systems of nonlinear equations, and systems of ordinary differential equations. 5131. Deterministic Methods in Operations Research (3) P: MATH 2173; 3307 or 5801; CSCI 2600 or equivalent knowledge of PASCAL or PL/1. Introduction to deterministic techniques in operations research, including mathematical models; linear programming; the simplex method, with applications to optimization; the duality theorem; project planning and control problems; and elementary game theory. 5132. Probabilistic Methods in Operations Research (3) P: MATH 2173, 3256; 3307 or 5801; CSCI 2600 or an equivalent knowledge of PASCAL or PL/1. Introduction to probabilistic techniques in operations research, including an introduction to stochastic processes; queuing theory with applications to inventory theory and forecasting; Poisson and Markov processes; reliability simulation; decision analysis; integer programming; and non‑linear programming. 5251. Modern Mathematics for Elementary Teachers I (3) Not open to undergraduate or graduate majors or minors in mathematics. A teacher taking this course would receive certificate renewal credit and/or 3 s.h. of graduate elective credit in elementary education. May not count toward MATH or CSCI major or minor. P: MATH 3223 or equivalent or consent of instructor. Numeration systems and the real numbers from an axiomatic approach. Topics in geometry, algebra, probability theory, and number theory.
Emphasis is upon the relationship between these topics and school mathematics. 5263, 5264. Modern Mathematics for Junior High School Teachers I, II (3,3) May not count toward MATH or CSCI major or minor. P for 5263: Consent of instructor; P for 5264: MATH 5263 or consent of instructor. Introduction to set theory, mathematical systems and proofs, number systems, elementary number theory; applications of mathematics in business, science, and other areas; basic concepts of geometry, algebra, probability, and statistics. 5265, 5266. Microcomputers in Secondary Education (3,0) (F) 2 lecture and 2 lab hours per week. May not count toward MATH or CSCI major or minor. P: MATH 1075 or 1085 or 3166; consent of instructor. Introduction to the operation and programming of microcomputers in the secondary school system. 5267, 5268. LOGO: A Computer Language for Educators (3,0) 2 lecture and 2 lab hours per week. May not count toward MATH major or minor. P: MATH 3166 or consent of instructor. Study of the language LOGO and its use with students K‑12. 5270. PASCAL Using the Microcomputer (3) May not count toward MATH or CSCI major or minor. May not be taken by students who have credit for CSCI 2610. P: MATH 1065 or equivalent. Study of the PASCAL language and problem solving using a microcomputer. 5311. Mathematical Physics I (3) Same as PHYS 5311. P: MATH 4331; PHYS 2360; or consent of instructor. Mathematical methods that are important in physics with emphasis on application. Includes integral transforms, integral equations, ordinary and partial differential equations, linear and nonlinear oscillations, orthonormal systems, Hilbert spaces, calculus of variations, and special functions. 5322. Foundations of Mathematics (3) (WI) (F) P: MATH 3233, 3263; or equivalent. Fundamental concepts and structural development of mathematics. Introduction to non‑Euclidean geometries, logic, Boolean Algebra, and set theory. Construction of the complex number systems.
Transfinite cardinal numbers and a study of relations and functions. Throughout the course, the topics in mathematics are developed as postulational systems. 5521. Readings and Lectures in Mathematics (3) (F) (S) (SS) Involves individual work with the student. 5551. The Historical Development of Mathematics (3) P: MATH 3233; C: MATH 2172 or consent of instructor. Introduces the history of mathematics from antiquity to the current time. Emphasis on the study of significant problems which prompted the development of new mathematics. Involves use of computer resources and the library for research of topics and solutions. 5581. Theory of Equations (3) P: MATH 2173 or consent of instructor. Operation with complex numbers, De Moivre's theorem, properties of polynomial functions, roots of the general cubic and quartic equations, methods of determining the roots of equations of higher degree, methods of approximating roots are among the topics treated. 5601. Non‑Euclidean Geometry (3) P: MATH 3233 or consent of instructor. Study of non‑Euclidean geometries, finite geometries, and an analysis of other geometries from the point of view of properties which remain invariant under certain transformations. 5650. Elementary Topology (3) P: MATH 2173 or 3256. Introduction to metric spaces and basic point‑set topology; open set, closed set, connectedness, compactness, and limit points. 5801. Probability Theory (3) (F) P: MATH 2173 or 3307. Axioms of probability, random variables and expectations, discrete and continuous distributions, moment generating functions, functions of random variables, the central limit theorem and applications. 1063. College Algebra (3) 2165, 2166. Advanced Concepts of Modern Mathematics I, II (3,3) 2182, 2183. Integrated Calculus I, II (5,5) 3219, 3220. Teaching of Elementary Mathematics K‑3 (3,0) 3221, 3222. Teaching of Elementary Mathematics 4‑6 (3,0) 3268, 3269. Analysis I, II (2,2) 3275. Numerical Analysis III (3) 5252. 
Modern Mathematics for Elementary Teachers II (3) 5261, 5262. Modern Mathematics for Secondary Teachers I, II (3,3) 5301, 5302. Analytical Mechanics I, II (3,3) 5321. Applied Mathematics I (3) 5331. Introduction to Celestial Mechanics (3) 5610. Applied Analysis (3) ECU Undergraduate Catalog 2000-2001
About arrange in array

Hi everyone

#include <iostream>
using namespace std;

void input(int *a, int n)
{
    for (int i = 0; i < n; i++) {
        cout << "a[" << i << "]= ";
        cin >> a[i];
    }
}

void output(int *a, int n)
{
    for (int i = 0; i < n; i++)
        cout << a[i] << "\t";
}

// exchange two ints by reference
void swap(int &a, int &b)
{
    int tmp = a;
    a = b;
    b = tmp;
}

int main()
{
    int n;
    cout << "Enter the number of elements: ";
    cin >> n;
    int *a = new int[n];  // the array must be allocated before use
    input(a, n);
    output(a, n);
    delete[] a;
    return 0;
}

Thanks in advance and have a nice day.

Hi, welcome to the vast world of programming. Two of the most essential tools you need are Google and Wikipedia. Googling each type of sort - for me - returned a well written Wiki article as the top hit; here are the links:

Also make sure you read IN FULL and act on this one:

If you ask questions as outlined here you can be sure to get quality help.

This is a great article. I'm going to put it in my sig, if I can.
Electronic Journal of Differential Equations: Conference 11, 2004. Proceedings of the 2004-Fez conference on Differential Equations and Mechanics May 24 - 26, 2004. The following papers were submitted by the participants in the conference, then refereed and accepted for publication in a special issue of the Electronic Journal of Differential Equations. Details about the conference are available in a Foreword Message by the editors of the proceedings: A. Benkirane, A. El Baraka, A. A. El Hilali. Table of contents. (In alphabetical order) Note: To preview PDF files your computer needs Adobe Acrobat Reader, which is available from http://www.adobe.com Note: Bibliographical references to articles in this proceedings should be written as follows. Author's name, Title of the article, 2004-Fez conference on Differential Equations and Mechanics, Electron. J. Diff. Eqns., Conf. 11 (2004), pp.##-##. Go to the Electronic Journal of Differential Equations.
Given is a "model". For how many theories may it be a model?

Usually we have an axiomatic theory and then we look for a model for it; this is the textbook picture. Of course, in real mathematics one usually has a "model", that is, a given structure, and looks for a proper axiomatization of it. So here is an interesting question: some structure called a "model" is given by some (countable) number of axioms. For how many other axiomatic theories may it be a model? Are there any theories different (non-isomorphic) from the first one? Are all these theories related?

Some intuition: in this picture a model is not an "example of an axiomatic structure" but a "point of intersection of many axiomatic structures". How big is the space in which these theories may cross?

Tags: model-theory, big-picture, metamathematics

Answer (Joel David Hamkins):

You are using the terms "model" and "theory" in an idiosyncratic way. In model theory, a model is a first-order structure, that is, a set with some functions, relations and perhaps distinguished elements, called constants. A theory, in contrast, is a collection of assertions, a set of sentences in this language. A given theory, which can be thought of as a set of axioms in the sense that you mentioned, can give rise to many models. And indeed, the Löwenheim-Skolem theorem says that if a theory has an infinite model, then it has infinite models of arbitrarily large cardinality. (Thus, except in trivial cases one cannot uniquely specify a model by giving "some (countable) number of axioms" as you said, since the same axioms will have models of many different sizes.)

Suppose that M is a model in a language of size κ, meaning that the language has κ many possible assertions. In this case, since any given assertion is either true in M or its negation is true in M, the complete theory of M, that is, the set Th(M) consisting of all sentences true in M, will also have size κ. Any subset S of Th(M) will also be true in M, of course. Thus, there are 2^κ many theories true in M. For example, if the language has countably many symbols in it, then any given model in this language will satisfy continuum many (2^ω) theories.

But this answer counts theories as different when they are different merely as sets of sentences, even when these theories have the same models. For the purposes of counting theories, it may be more sensible to use another common definition of theory, which is a set of sentences closed under consequence. This amounts to identifying theories that have the same models. With this second understanding of theory, the answer is a little more subtle. In the empty language, for example, every model is just a naked set, with no structure. There are exactly countably many countable models in this language: one of each finite size and one countably infinite model. If φ[n] is the assertion that there are exactly n objects, then for any set A of natural numbers, we may form the theory T[A], which asserts ¬φ[n] for each n in A. These theories are all inequivalent, and all true in any infinite model. If M is any model, then there are continuum many theories T[A] that are true in M. This shows that in fact every model M, in any language, satisfies at least continuum many deductively closed theories.

If the language is larger, with uncountable size κ, then either there are uncountably many relation symbols, uncountably many function symbols or uncountably many constant symbols. In each case, it is a fun exercise to form 2^κ many inequivalent theories T in the language. Given any model M, let σ be any sentence false in M. For any theory T containing σ, we may form the theory T' = { σ implies φ | φ in T }. This theory is true in M, since σ is false in M. Thus, by counting theories in this manner, one can show that there are 2^κ many inequivalent theories true in M.

Comments:

- "This shows that in fact every model M, in any language, satisfies at least continuum many deductively closed theories." So we are completely lost in the space of models. Are there some minimal closed theories which are true in a given model? Or are all of them completely different? – kakaz Feb 8 '10 at 15:09
- The theories T[A] are different, in the sense that there are models distinguishing any two of them, but they are all completely trivial. They just say that the structure has a size that is not in A. The theories T' in my last paragraph, however, are not trivial. These theories assert what the model could have been like, if sigma had been true. Since sigma is not true, these theories are vacuously true in M, but the theories are really talking about what might have been true in another structure. I take this to show that we should investigate all models, rather than theories true in one model. – Joel David Hamkins Feb 8 '10 at 15:26

Answer (François G. Dorais):

(Note that you first need to fix a first-order language L for your structures, otherwise you can't talk about models and theories. It is not necessary for that language to be countable.) Since every sentence of the language L is either true or false in your structure M, there is only one complete theory that M satisfies, namely the theory Th(M) of all sentences of the language which are true in M. However, every subset of Th(M) is also a theory that M satisfies; those are precisely all the theories that M satisfies.

Comments:

- Francois, it seems we are frequently posting at the same time! I'm sorry. – Joel David Hamkins Feb 8 '10 at 13:48
- No problem Joel. We often have slightly different perspectives on things, which is useful for the poster and the community. – François G. Dorais Feb 8 '10 at 13:56
- Indeed, it is usually a pleasure to read your simultaneous answers! – Mariano Suárez-Alvarez Feb 8 '10 at 16:10

Answer:

Infinitely many. Even a model with one element and one binary operation is a model of infinitely many different theories.

Comments:

- Interesting: but is it known whether these theories may be grouped in some way? It is obvious that a given model (even with one element) is not a model for all possible theories, so there are theories for which it is not a model ;-). So assuming we have infinitely many theories, are there some, I don't know, invariants, orders, lattice operations which allow us to show whether there is some structure in such a big space of theories? Maybe from the Category Theory point of view it is somehow interesting? Or is it just another trivial question? – kakaz Feb 8 '10 at 12:57
- And Francois pointed out you don't even need the binary relation! – Joel David Hamkins Feb 8 '10 at 19:27
High frequency asymptotics of antenna/structure interactions

Coats, J. (2002) High frequency asymptotics of antenna/structure interactions. PhD thesis, University of Oxford.

This thesis is motivated by the need to calculate the electromagnetic fields produced by sources radiating in the presence of conductors. We begin by reviewing existing theory concerning sources in the presence of flat structures. Various extensions to the canonical Sommerfeld problem are considered. In particular we investigate the asymptotic solution for a finite source that focusses its energy at a point.

In chapter 5 we review and extend the asymptotic results concerning illumination of a convex perfect conductor by an incident plane wave and outline the procedure for decoupling the electromagnetic surface field into two scalar modes.

In chapter 6 we place a source on a perfect conductor and obtain a complete asymptotic solution for the fields. Special attention is paid to the asymptotic structure that smoothly matches between the leading order lit and shadow regions. We also investigate the degenerate case where one of the curvatures of the perfect conductor is zero. The case where the source is just off the surface is also investigated.

In chapter 8 we use the Euler-Maclaurin summation formula to cheaply calculate the fields due to complicated arrays of point dipoles.

The final chapter combines many earlier results to consider more general sources on the surface of a perfect conductor. In particular we must introduce new asymptotic regions for open sources. This then enables us to consider the focussing of the surface field due to a finite source. The nature of the surface and geometrical optics fields depends on the size of the source in comparison to the curvatures of the surface on which they lie. We discuss this in detail and conclude with the practical example of a spiral antenna.
A++ [Eric Torreborre's Blog]

"The Essence of the Iterator Pattern" (EIP) is the paper I liked the most last year. It gave me a brand new look over something which I had been using for years: the for loop. In this post I'll try to present some ideas of that paper and show how to implement the solution described by the authors using Scalaz-like code. A minimum previous exposure to functional programming, functors and monads will definitely help!

What's in a for loop?

That was really what hooked me. What do you mean, "what's in a for loop"? Is there anything magic in that construct I've been using for years? The introduction of EIP shows an example of a for loop to iterate on elements (not the C-like for used with an index). I'm transmogrifying it here to Scala but the idea remains the same:

    val basket: Basket[Fruit] = Basket(orange, apple)
    var count = 0
    val juices = Basket[Juice]()

    for (fruit <- basket) {
      count = count + 1
      juices.add(fruit.press)
    }

We start from a "container" of fruits: Basket. It could actually be anything: a List, a Tree, a Map... Then the for loop actually does 3 things:

1. it returns a container having the same "shape"; juices is still a Basket
2. it accumulates some kind of measure: here, the number of fruits in the count variable
3. it maps the elements to other elements: pressing the fruits to get some juice

And this for loop is actually not the most complex:

• the count variable could influence the mapping of elements: juices.add(fruit.press(harder = count))
• we could have several variables depending on each other: cumulative = cumulative + count
• the mapping could also influence a "measure" variable: liquid = liquid + fruit.press.quantity

The purpose of EIP is to show that the "essence" of what happens in the for loop above can be abstracted by an Applicative Traversal. And the authors go on showing that, given this Applicative abstraction, we get an incredible modularity for programming.
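For readers who want to run the opening loop, here is a minimal self-contained sketch of mine where a plain List[Int] stands in for Basket[Fruit] and string concatenation stands in for press (Basket, Fruit and press are the post's fictional types):

```scala
object ForLoopDemo extends App {
  val basket = List(10, 20)      // the "fruits"
  var count = 0
  var juices = List[String]()    // the "juices"

  // the loop returns a same-shaped container, accumulates a measure,
  // and maps each element, all at once
  for (fruit <- basket) {
    count = count + 1
    juices = juices :+ ("juice of " + fruit)
  }

  assert(count == 2)
  assert(juices == List("juice of 10", "juice of 20"))
  println("ok")
}
```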
The Applicative typeclass

How can an Applicative traversal be better than a for loop, and what does that even mean?? EIP has a lot of sentences and expressions which can be hard to grasp if you don't have a strong functional programming / Haskell background. Let's try to dissect that slowly and start with the formal definitions anyway.

What is a Functor?

The first thing we need to talk about is the Functor typeclass:

    trait Functor[F[_]] {
      def fmap[A, B](f: A => B): F[A] => F[B]
    }

One way of interpreting a Functor is to describe it as a computation of values of type A. For example, List[A] is a computation returning several values of type A (a non-deterministic computation), Option[A] is for computations that you may or may not have, Future[A] is a computation of a value of type A that you will get later, and so on. Another way of picturing it is as some kind of "container" for values of type A.

Saying that those computations are Functors is essentially showing that we can have a very useful way of combining them with regular functions: we can apply a function to the value that is being computed. Given a value F[A] and a function f, we can apply that function to the value with fmap. For example, fmap is a simple map for a List or an Option.

Pointed Functor

By the way, how do you even create a value of type F[A]? One way to do that is to say that F[_] is Pointed:

    trait Pointed[F[_]] {
      def point[A](a: => A): F[A]
    }

That is, there is a point function taking a value of type A and returning an F[A].
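A quick way to make these two traits concrete is to instantiate them for Option; the instances below are my own illustration, not code from the post:

```scala
trait Functor[F[_]] { def fmap[A, B](f: A => B): F[A] => F[B] }
trait Pointed[F[_]] { def point[A](a: => A): F[A] }

object OptionIsFunctor extends Functor[Option] {
  // applying a function "inside" the container
  def fmap[A, B](f: A => B): Option[A] => Option[B] = _ map f
}
object OptionIsPointed extends Pointed[Option] {
  // putting a value "into" the container
  def point[A](a: => A): Option[A] = Some(a)
}

object FunctorDemo extends App {
  val three = OptionIsPointed.point(3)
  assert(OptionIsFunctor.fmap((i: Int) => i + 1).apply(three) == Some(4))
  println("ok")
}
```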
For example, a regular List is Pointed just by using the constructor for Lists:

    object PointedList extends Pointed[List] {
      def point[A](a: => A) = List(a)
    }

Then combining the 2 capabilities, Pointed and Functor, gives you a PointedFunctor:

    trait PointedFunctor[F[_]] {
      val functor: Functor[F]
      val pointed: Pointed[F]
      def point[A](a: => A): F[A] = pointed.point(a)
      def fmap[A, B](f: A => B): F[A] => F[B] = functor.fmap(f)
    }

The PointedFunctor trait is merely the aggregation of a Pointed and a Functor.

What about Applicative then?

We're getting to it; the last missing piece is Applic. Applic is another way to combine a "container" with a function. Instead of using fmap to apply the function to the computed value, we suppose that the function is itself a computed value inside the container F (F[A => B]) and we provide a method applic to apply that function to a value F[A]:

    trait Applic[F[_]] {
      def applic[A, B](f: F[A => B]): F[A] => F[B]
    }

Let's take an example. Say I have a way to compute the price of a Fruit when the market is open:

    def pricer(market: Market): Option[Fruit => Double]

If the market is closed, pricer returns None, because we don't know what the prices are. Otherwise it returns a pricing function. Now if I have a grow function possibly returning a Fruit:

    def grow: Option[Fruit]

Then, using the Applic instance, you can price the Fruit:

    val price: Option[Double] = applic(pricer(market)).apply(grow)

The price will necessarily be an Option because you may not have a pricer nor a Fruit to price. And a bit of renaming and pimping reveals why we're using the term "Applicative":

    val pricingFunction = pricer(market)
    val fruit = grow

    val price: Option[Double] = pricingFunction ⊛ fruit

In a way we're just doing a normal function application, but we're doing it inside the Applicative container.
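To make the pricing example runnable, here is a sketch where only the Applic trait comes from the post; the Boolean market flag, the String Fruit and the price list are stand-ins of mine:

```scala
trait Applic[F[_]] { def applic[A, B](f: F[A => B]): F[A] => F[B] }

object OptionIsApplic extends Applic[Option] {
  // apply a function living inside an Option to a value living inside an Option
  def applic[A, B](f: Option[A => B]): Option[A] => Option[B] =
    fa => for { func <- f; a <- fa } yield func(a)
}

object PricingDemo extends App {
  type Fruit = String
  // a Map is a Fruit => Double, so it can serve as the pricing function
  def pricer(marketOpen: Boolean): Option[Fruit => Double] =
    if (marketOpen) Some(Map("orange" -> 1.5, "apple" -> 1.0)) else None
  def grow: Option[Fruit] = Some("orange")

  assert(OptionIsApplic.applic(pricer(marketOpen = true)).apply(grow) == Some(1.5))
  assert(OptionIsApplic.applic(pricer(marketOpen = false)).apply(grow) == None)
  println("ok")
}
```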
Now we have all the pieces to build the Applicative functor that EIP is talking about.

Applicative Functor

An Applicative Functor is the aggregation of an Applic and a PointedFunctor:

    trait Applicative[F[_]] {
      val pointedFunctor: PointedFunctor[F]
      val applic: Applic[F]

      def functor: Functor[F] = new Functor[F] {
        def fmap[A, B](f: A => B) = pointedFunctor fmap f
      }
      def pointed: Pointed[F] = new Pointed[F] {
        def point[A](a: => A) = pointedFunctor point a
      }
      def fmap[A, B](f: A => B): F[A] => F[B] = functor.fmap(f)
      def point[A](a: => A): F[A] = pointed.point(a)
      def apply[A, B](f: F[A => B]): F[A] => F[B] = applic.applic(f)
    }

Let's see how that can be implemented for a List. fmap and point are straightforward:

    def fmap[A, B](f: A => B): F[A] => F[B] = (l: List[A]) => l map f
    def point[A](a: => A): F[A] = List(a)

apply turns out to be more interesting because there are 2 ways to implement it, both of them being useful:

1. apply a list of functions to each element and gather the results in a List:

    def apply[A, B](f: F[A => B]): F[A] => F[B] =
      (l: List[A]) => for { a <- l; func <- f } yield func(a)

2. zip the list of functions to the list of elements to apply each function to each element:

    def apply[A, B](f: F[A => B]): F[A] => F[B] =
      (l: List[A]) => (l zip f) map (p => p._2 apply p._1)

There is even a third way to use List as an Applicative, by using the fact that List is a Monoid. But more on that later; for now we still have to see how all of this relates to the for loop...
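The two implementations of apply can be checked side by side with plain collection operations; this is a sketch of mine with no typeclass machinery:

```scala
object ListApplyDemo extends App {
  // 1. every function applied to every element
  def applyCartesian[A, B](fs: List[A => B])(l: List[A]): List[B] =
    for { a <- l; f <- fs } yield f(a)

  // 2. functions and elements paired up position by position
  def applyZip[A, B](fs: List[A => B])(l: List[A]): List[B] =
    (l zip fs) map { case (a, f) => f(a) }

  val fs: List[Int => Int] = List(_ + 1, _ * 10)

  assert(applyCartesian(fs)(List(1, 2)) == List(2, 10, 3, 20))
  assert(applyZip(fs)(List(1, 2)) == List(2, 20))
  println("ok")
}
```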
Traversing the structure

When we do a for loop, we take a "structure" containing some elements and we "traverse" it to return:

• that same structure containing other elements
• a value computed from the structure elements
• some combination of the above

Gibbons & Oliveira argue that any kind of for loop can be represented as the following traverse operation:

    trait Traversable[T[_]] {
      def traverse[F[_] : Applicative, A, B](f: A => F[B]): T[A] => F[T[B]]
    }

That is, if the container/structure of type T has this traverse function using an Applicative F, then we can do whatever we would do with a for loop on it. To get a better feel for this traverse function, we're going to implement the Traversable trait for a binary tree and then we'll see how we can loop on that tree.

A Binary Tree

For all the other examples in this post, we're going to use a very simple binary tree:

    sealed trait BinaryTree[A]
    case class Leaf[A](a: A) extends BinaryTree[A]
    case class Bin[A](left: BinaryTree[A], right: BinaryTree[A]) extends BinaryTree[A]

On the other hand, the first shot at the Traversable implementation is barely readable!

    def BinaryTreeIsTraversable[A]: Traversable[BinaryTree] = new Traversable[BinaryTree] {
      def createLeaf[B] = (n: B) => (Leaf(n): (BinaryTree[B]))
      def createBin[B] = (nl: BinaryTree[B]) => (nr: BinaryTree[B]) => (Bin(nl, nr): BinaryTree[B])

      def traverse[F[_] : Applicative, A, B](f: A => F[B]): BinaryTree[A] => F[BinaryTree[B]] = (t: BinaryTree[A]) => {
        val applicative = implicitly[Applicative[F]]
        t match {
          case Leaf(a)   => applicative.apply(applicative.point(createLeaf[B]))(f(a))
          case Bin(l, r) => applicative.apply(applicative.apply(applicative.point(createBin[B]))(traverse[F, A, B](f).apply(l))).
                            apply(traverse[F, A, B](f).apply(r))
        }
      }
    }

This is a shame because the corresponding Haskell code is so concise:

    instance Traversable Tree where
      traverse f (Leaf x)  = pure Leaf ⊛ f x
      traverse f (Bin t u) = pure Bin ⊛ traverse f t ⊛ traverse f u

A bit of pimping to the rescue and we can improve the situation:

    def traverse[F[_] : Applicative, A, B](f: A => F[B]): BinaryTree[A] => F[BinaryTree[B]] = (t: BinaryTree[A]) => {
      t match {
        case Leaf(a)   => createLeaf[B] ∘ f(a)
        case Bin(l, r) => createBin[B] ∘ (l traverse f) <*> (r traverse f)
      }
    }

Informally, the traverse method applies the function f to each node and "reconstructs" the tree by using the apply method (<*>) of the Applicative functor. That's certainly still some Ancient Chinese to you (as it was for me), so we'd better see the traverse method at work now. But we need to take another detour :-)

Applicative Monoid

One simple thing we might want to do, when iterating on a BinaryTree, is to get the content of that tree in a List. To do that, we're going to use the 3rd way to use List as an Applicative, as mentioned earlier. It turns out indeed that any Monoid (what is it?) gives rise to an Applicative instance, but in a way that's a bit surprising.
    /** Const is a container for values of type M, with a "phantom" type A */
    case class Const[M, +A](value: M)

    implicit def ConstIsPointed[M : Monoid] = new Pointed[({type l[A]=Const[M, A]})#l] {
      def point[A](a: => A) = Const[M, A](implicitly[Monoid[M]].z)
    }
    implicit def ConstIsFunctor[M : Monoid] = new Functor[({type l[A]=Const[M, A]})#l] {
      def fmap[A, B](f: A => B) = (c: Const[M, A]) => Const[M, B](c.value)
    }
    implicit def ConstIsApplic[M : Monoid] = new Applic[({type l[A]=Const[M, A]})#l] {
      def applic[A, B](f: Const[M, A => B]) = (c: Const[M, A]) =>
        Const[M, B](implicitly[Monoid[M]].append(f.value, c.value))
    }
    implicit def ConstIsPointedFunctor[M : Monoid] = new PointedFunctor[({type l[A]=Const[M, A]})#l] {
      val functor = Functor.ConstIsFunctor
      val pointed = Pointed.ConstIsPointed
    }
    implicit def ConstIsApplicative[M : Monoid] = new Applicative[({type l[A]=Const[M, A]})#l] {
      val pointedFunctor = PointedFunctor.ConstIsPointedFunctor
      val applic = Applic.ConstIsApplic
    }

In the code above, Const is the Applicative instance for a given Monoid. Const contains values of type M where M is a Monoid, and we progressively establish what are the properties that Const must satisfy to be Applicative:

• it must first be Pointed. Informally, the point method puts the neutral element of the Monoid in a Const instance
• then it must be a Functor. Here the fmap function doesn't do anything but change the type of Const from Const[M, A] to Const[M, B]
• finally it must be an Applic, where the apply method of Applic uses the append method of the Monoid to "add" 2 values and return the result in a Const instance.

There is unfortunately a lot of type voodoo here:

• the type declaration for Const is Const[M, +A]. It has a type parameter A which is actually not represented by a value in the Const class! It is a phantom type. But it is actually indispensable to match the type declarations of the typeclasses
• the type F that is supposed to be Applicative is... ({type l[A] = Const[T, A]})#l.
Ouch, this deserves some explanation! What we want is not so hard. The type Const[A, B] has 2 type parameters. We need a way to fix A to be T and get the resulting type, which will have only one type parameter. The expression above is the most concise way to get this desired type:

• { type l = SomeType } is an anonymous type with a type member called l. We can access that type l in Scala by using #: { type l = SomeType }#l
• Then, in { type l[A] = SomeType[T, A] }#l, l is a higher-kinded type, having a type variable A (actually SomeType[T, A] where T is fixed)

That was a really long detour for a mere for loop, wasn't it? Now... profit!

Contents of a BinaryTree...

We're going to use the Traversable instance for the BinaryTree and the List Monoid Applicative to get the contents of a BinaryTree:

    import Applicative._

    val f = (i: Int) => List(i)
    val tree = Bin(Leaf(1), Leaf(2))

    (tree.traverse[...](f)).value must_== List(1, 2)

Simple: for each element of the tree, we put it in a List, then we let the List Monoid do its magic and aggregate all the results as we traverse the tree. The only difficulty here is the limits of Scala type inference. The ... stands for type annotations that the compiler requires:

    tree.traverse[Int, ({type l[A]=Const[List[Int], A]})#l](f)

Not pretty :-(

As pointed out by Ittay Dror in the comments, List is not an applicative by itself and we need to put this list into a Const value to make it usable by the traverse method. This is actually done by an implicit conversion method, liftConst, provided by the Applicative object:

    implicit def liftConst[A, B, M : Monoid](f: A => M): A => Const[M, B] =
      (a: A) => Const[M, B](f(a))

Profit time

Not everything is lost! We can encapsulate a bit of the complexity in this case.
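What the Const/monoid traversal computes can be seen with a direct recursive fold. This is my own simplification; the post's point is precisely that you get it for free from traverse instead of writing the recursion by hand:

```scala
sealed trait BinaryTree[A]
case class Leaf[A](a: A) extends BinaryTree[A]
case class Bin[A](left: BinaryTree[A], right: BinaryTree[A]) extends BinaryTree[A]

object ContentsDemo extends App {
  // reduce: map each element to a monoid value and append the results along the walk
  def reduce[A, M](t: BinaryTree[A])(f: A => M)(append: (M, M) => M): M =
    t match {
      case Leaf(a)   => f(a)
      case Bin(l, r) => append(reduce(l)(f)(append), reduce(r)(f)(append))
    }

  val tree: BinaryTree[Int] = Bin(Leaf(1), Leaf(2))
  assert(reduce(tree)(a => List(a))(_ ++ _) == List(1, 2)) // contents: the List monoid
  assert(reduce(tree)(_ => 1)(_ + _) == 2)                 // count: the (Int, +) monoid
  println("ok")
}
```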
We can extract part of the code above and create a contents method which will work on any Traversable instance (assume I'm pimping the following examples so that I can write tree.method instead of method(tree)):

    val tree: BinaryTree[Int] = Bin(Leaf(1), Leaf(2))
    tree.contents must_== List(1, 2)

This is based on the following definition:

    def contents[A]: T[A] => List[A] = {
      val f = (a: A) => Const[List[A], Any](List(a))
      (ta: T[A]) => traverse[({type l[U]=Const[List[A], U]})#l, A, Any](f).apply(ta).value
    }

It also turns out that the contents function is a specialized version of something even more generic, the reduce function, working with any Monoid:

    def contents[A]: T[A] => List[A] = reduce((a: A) => List(a))

    def reduce[A, M : Monoid](reducer: A => M): T[A] => M = {
      val f = (a: A) => Const[M, Any](reducer(a))
      (ta: T[A]) => traverse[({type l[A]=Const[M, A]})#l, A, Any](f).apply(ta).value
    }

The reduce function can traverse any Traversable structure with a function mapping each element to a Monoid element. We've used it to get the contents of the tree, but we can as easily get the number of elements:

    def count[A]: T[A] => Int = reduce((a: A) => 1)

    tree.count must_== 2

Can it get simpler than this :-)? Actually, in that case it can! Since we don't need (a: A) at all, we can use reduceConst:

    def reduceConst[A, M : Monoid](m: M): T[A] => M = reduce((a: A) => m)

    def count[A]: T[A] => Int = reduceConst(1)

It's like the standard Scala reduce on steroids, because you don't need to provide a binary operation; you just need a Monoid instance.

... and shape of a BinaryTree

We've addressed the question of doing some kind of accumulation based on the elements in the tree; now we're going to "map" them.

Monads are Applicatives too!
The following map method can be derived from the traverse method (note that no type annotations are necessary in that case, yes!):

    def map[A, B](mapper: A => B) =
      (ta: T[A]) => traverse((a: A) => Ident(mapper(a))).apply(ta).value

Here we're traversing with an Applicative which is very simple: the Ident class.

    case class Ident[A](value: A)

The Ident class is a simple wrapper around a value, nothing more. That simple class is an Applicative. But how? Easy: Ident is actually a Monad, and we can construct an Applicative instance from every Monad. This comes from the fact that a Monad is both a PointedFunctor and an Applic:

    trait Monad[F[_]] {
      val pointed: Pointed[F]
      val bind: Bind[F]

      def functor: Functor[F] = new Functor[F] {
        def fmap[A, B](f: A => B): F[A] => F[B] =
          (fa: F[A]) => bind.bind((a: A) => pointed.point(f(a))).apply(fa)
      }
      def pointedFunctor: PointedFunctor[F] = new PointedFunctor[F] {
        val functor = Monad.this.functor
        val pointed = Monad.this.pointed
      }
      def applic: Applic[F] = new Applic[F] {
        def applic[A, B](f: F[A => B]) = a => bind.bind[A => B, B](ff => functor.fmap(ff)(a))(f)
      }
      def applicative: Applicative[F] = new Applicative[F] {
        val pointedFunctor = Monad.this.pointedFunctor
        val applic = Monad.this.applic
      }
    }

And the Ident class is trivially a Monad (having a pointed and a bind member):

    implicit def IdentIsMonad = new Monad[Ident] {
      val pointed = new Pointed[Ident] {
        def point[A](a: => A): Ident[A] = Ident(a)
      }
      val bind = new Bind[Ident] {
        def bind[A, B](f: A => Ident[B]): Ident[A] => Ident[B] = (i: Ident[A]) => f(i.value)
      }
    }

We can use our brand new map function now:

    tree.map((i: Int) => i.toString) must_== Bin(Leaf("1"), Leaf("2"))

We can even use it to get the "shape" of our container and discard all the elements:

    tree.shape must_== Bin(Leaf(()), Leaf(()))

The shape method just maps each element to ().

Decompose / Compose

Let's recap.
We implemented a very generic way to iterate over a structure, any kind of structure (as long as it's Traversable), containing elements, any kind of element, with a function which does an "application", any kind of application. Among the possible "applications", we've seen 2 examples: collecting and mapping, which are the essential operations that we usually do in a for loop. Specifically, we were able to get the contents of a tree and its shape. Is there a way to compose those 2 operations into a decompose operation that would get both the content and the shape at once? Our first attempt might be:

    def decompose[A] = (t: T[A]) => (shape(t), contents(t))

    tree.decompose must_== (Bin(Leaf(()), Leaf(())), List(1, 2))

This works, but it is pretty naive because it requires 2 traversals of the tree. Is it possible to do just one?

Applicative products

This is indeed possible by noticing the following: the product of 2 Applicatives is still an Applicative (proof, proof). We define Product as:

    case class Product[F1[_], F2[_], A](first: F1[A], second: F2[A]) {
      def tuple = (first, second)
    }

I spare you the full definition of Product as an Applicative to just focus on the Applic instance:

    implicit def ProductIsApplic[F1[_] : Applic, F2[_] : Applic] =
      new Applic[({type l[A]=Product[F1, F2, A]})#l] {
        val f1 = implicitly[Applic[F1]]
        val f2 = implicitly[Applic[F2]]

        def applic[A, B](f: Product[F1, F2, A => B]) = (c: Product[F1, F2, A]) =>
          Product[F1, F2, B](f1.applic(f.first).apply(c.first),
                             f2.applic(f.second).apply(c.second))
      }

That's not too complicated; you just have to follow the types. What's more troubling is the amount of type annotations which are necessary to implement decompose. Ideally we would like to write:

    def decompose[A] = traverse((t: T[A]) => shape(t) ⊗ contents(t))

where ⊗ is an operation taking 2 Applicatives and returning their product.
Again, the lack of partial type application for Const muddies the whole thing (upvote SI-2712, please!):

    val shape   = (a: A) => Ident(())
    val content = (a: A) => Const[List[A], Unit](List(a))
    val product = (a: A) => (shape(a).⊗[({type l[T] = Const[List[A], T]})#l](content(a)))

    implicit val productApplicative =
      ProductIsApplicative[Ident, ({type l1[U] = Const[List[A], U]})#l1]

    (ta: T[A]) => {
      val (Ident(s), Const(c)) =
        traverse[({type l[V] = Product[Ident, ({type l1[U] = Const[List[A], U]})#l1, V]})#l, A, Unit](product).
          apply(ta).tuple
      (s, c)
    }

We can improve the code slightly by moving the implicit definition for productApplicative inside the Applicative companion object:

    object Applicative {
      implicit def ProductWithListIsApplicative[A[_] : Applicative, B] =
        ProductIsApplicative[A, ({type l1[U] = Const[List[B], U]})#l1]
    }

Then no implicit val productApplicative is necessary and the Applicative imports will be all we need.

Collection and dispersal

There is another way to do things "in parallel" while traversing the structure. The collect method that we're going to build will do 2 things:

• it will accumulate some kind of state, based on the elements that we meet
• it will map each element to another kind of element

So, as we're iterating, we can do a regular mapping while computing some kind of measure. But before that, we need to take a little detour (again?? Yes, again) with the State monad.

The State monad

The State Monad is defined by:

    trait State[S, +A] {
      def apply(s: S): (S, A)
    }

It is basically:

• an object keeping some previous "state", of type S
• a method to extract a meaningful value from this "state", of type A
• this method computes a new "state", of type S

For example, a simple counter for the number of elements in a List[Int] can be implemented by:

    val count = state((n: Int) => (n+1, ()))

It takes the previous "count" number n and returns the new state n+1 and the extracted value (() here, because we don't need to extract anything special). The State type above is a Monad.
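A minimal, self-contained version of this State type (my own reduction; the post's trait stays abstract) shows how flatMap threads the state through the counter above:

```scala
// minimal State monad: a function from a previous state to (new state, result)
case class State[S, +A](run: S => (S, A)) {
  def flatMap[B](f: A => State[S, B]): State[S, B] =
    State { s => val (s1, a) = run(s); f(a).run(s1) }
}

object StateDemo extends App {
  def state[S, A](f: S => (S, A)): State[S, A] = State(f)

  // the counter from above: bump the state, extract nothing useful
  val count: State[Int, Unit] = state((n: Int) => (n + 1, ()))

  // three chained counts, run from the initial state 0
  val three = count.flatMap(_ => count).flatMap(_ => count)
  assert(three.run(0)._1 == 3)
  println("ok")
}
```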
I encourage you to read "Learn You a Haskell" to get a better understanding of the subject. I will just show here that the flatMap (or bind) method of the Monad typeclass is central in putting that State to work:

    val count = (s: String) => state((n: Int) => (n+1, s + n))

    (count("a-") flatMap count flatMap count).apply(0) must_== (3, "a-012")

The count function takes the latest computed string and returns a State where we increment the current "state" by 1 and we have a new String as the result, where the current count is appended. So when we start with the string "a-" and we flatMap count 2 times, we get (3, "a-012"), where 3 is the number of times we've applied the n+1 function and "a-012" is the result of appending to the current string.

By the way, why do we need to apply(0)? When we do all the flatMaps, we actually store "stateful computations". And they are executed only once we provide the initial state: 0!

Collecting elements

Let's now define a collect operation on Traversable which will help us to count:

    def collect[F[_] : Applicative, A, B](f: A => F[Unit], g: A => B) = {
      val applicative = implicitly[Applicative[F]]
      import applicative._

      val application = (a: A) => point((u: Unit) => g(a)) <*> f(a)
      traverse(application)
    }

This collect operation, defined in EIP, is different from the collect operation on Scala collections, which is the equivalent of filter + map. The collect of EIP is using 2 functions:

• f: A => F[Unit], which collects data from each element "effectfully" (that is, possibly keeping state)
• g: A => B, which maps each element to something else

So we could say that the EIP collect is a bit like fold + map.
Knowing this, we can use collect to count elements and do some mapping:

val count = (i: Int) => state((n: Int) => (n+1, ()))
val map = (i: Int) => i.toString
tree.collect[({type l[A]=State[Int, A]})#l, String](count, map).apply(0) must_== (2, Bin(Leaf("1"), Leaf("2")))

Here again the type annotations are obscuring the intent a bit and if type inference were perfect we would just read:

val count = (i: Int) => state((n: Int) => (n+1, ()))
val map = (i: Int) => i.toString
tree.collect(count, map).apply(0) must_== (2, Bin(Leaf("1"), Leaf("2")))

I don't know about you, but I find this a bit magical. With the Applicative and Traversable abstractions, we can assemble our program based on 2 independent functions, possibly developed and tested separately.

Dispersing elements

The next utility function proposed by EIP is the disperse function. Its signature is:

def disperse[F[_] : Applicative, A, B, C](f: F[B], g: A => B => C): T[A] => F[T[C]]

What does it do?

• f is the Applicative context that's going to evolve when we traverse the structure, but regardless of what the elements of type A are
• g is a function which, for each element of type A, says what to do with the current context value, B, and how to map that element back into the structure

Please, please, a concrete example! Say I want to mark each element of a BinaryTree with its "number" in the Traversal (the "label").
Moreover I want to use the element name to be able to qualify this label:

// a BinaryTree of Doubles
val tree: BinaryTree[Double] = Bin(Leaf(1.1), Bin(Leaf(2.2), Leaf(3.3)))

// the "label" state returning integers in sequence
val labelling: State[Int, Int] = state((n: Int) => (n+1, n+1))

// for each element in the tree, and its label,
// produce a String with the name and label
val naming: Double => Int => String = (p1: Double) => (p2: Int) => p1+" node is "+p2

// testing by applying an initial state (label `0`) and
// taking the second element of the pair `(last label, resulting tree)`
tree.disperse[elided for sanity](labelling, naming).apply(0)._2 must_== Bin(Leaf("1.1 node is 1"), Bin(Leaf("2.2 node is 2"), Leaf("3.3 node is 3")))

Note that the naming function above is curried. A more familiar way to write it would be:

val naming: (Double, Int) => String = (p1: Double, p2: Int) => p1+" node is "+p2

But then you would have to curry that function to be able to use it with the disperse function:

tree.disperse[...](labelling, naming.curried)

The implementation of disperse is:

def disperse[F[_] : Applicative, A, B, C](f: F[B], g: A => B => C) = {
  val applicative = implicitly[Applicative[F]]
  import applicative._
  val application = (a: A) => point(g(a)) <*> f
  traverse(application)
}

It is using the two essential capabilities of the applicative functor: the point method and the <*> application.

An overview of traversals

We've seen in the 2 examples above that we get different, specialized, versions of the traverse function by constraining how mapping and Applicative effects occur. Here's a tentative table for classifying other specialized versions of the traverse function:

function    | map element | create state | mapped depends on state | state depends on element
collect     |      X      |      X       |                         |            X
disperse    |      X      |      X       |            X            |
measure     |      X      |      X       |                         |
traverse    |      X      |      X       |            X            |            X
reduce      |             |      X       |                         |            X
reduceConst |             |      X       |                         |
map         |      X      |              |                         |

The only function we haven't shown before is measure. It is mapping and accumulating state but this accumulation does not depend on the current element.
Here's an example:

val crosses = state((s: String) => (s+"x", ()))
val map = (i: Int) => i.toString
tree.measure(crosses, map).apply("") must_== ("xxx", Bin(Leaf("1"), Bin(Leaf("2"), Leaf("3"))))

Other than not looking very useful, the code above is also lying! It is not possible to have a measure function accepting a State monad without having to provide the usual ugly type annotations. So the actual example is:

tree.measureState(crosses, map).apply("") must_== ("xxx", Bin(Leaf("1"), Bin(Leaf("2"), Leaf("3"))))

where measureState is a specialization of the measure method to States.

I think that one take-away of this post is that it might be beneficial to specialize a few generic functions in Scala, like traverse, collect... for Const and State in order to avoid type annotations.

Composing traversals

There's another axis of composition that we haven't exploited yet. In a for loop, without thinking about it, you may write:

for (a <- as) {
  val currentSize = a.size
  total += currentSize
}

In the body of that for loop, you have statements that depend on each other. In an Applicative traversal, this translates to the Sequential composition of Applicatives. From 2 Applicatives, we can create a third one which is their Sequential composition. More precisely, this means that if F1[_] and F2[_] are Applicatives then F1[F2[_]] is an Applicative as well. You want the demonstration? Ok, go.

First, we introduce a utility function on ApplicFunctors:

def liftA2[A, B, C](function: A => B => C): F[A] => F[B] => F[C] =
  fa => applic.applic(functor.fmap(function)(fa))

liftA2 allows us to lift a regular function of 2 arguments to a function working on the Applicative arguments. This is using the fact that an ApplicFunctor is a Functor, so we can apply function: A => B => C to the "a in the box", to get a F[B => C] "in the box".
And then, an ApplicFunctor is an Applic, so we can "apply" F[B] to get a F[C].

Armed with this function, we can write the applic method for F1[F2[_]]:

implicit val f1ApplicFunctor = implicitly[ApplicFunctor[F1]]
implicit val f2ApplicFunctor = implicitly[ApplicFunctor[F2]]

val applic = new Applic[({type l[A]=F1[F2[A]]})#l] {
  def applic[A, B](f: F1[F2[A => B]]) = (c: F1[F2[A]]) => {
    f1ApplicFunctor.liftA2((ff: F2[A => B]) => f2ApplicFunctor.apply(ff))(f).apply(c)
  }
}

It's not so easy to get an intuition for what the code above is doing, except to say that we're using previous definitions to allow a F1[F2[A => B]] to be applied to F1[F2[A]].

In mere mortal terms, this means that if we do an Applicative computation inside a loop and if we reuse that computation in another Applicative computation, we still get an Applicative computation.

The EIP illustration of this principle is a crazy function, the assemble function.

The assemble function

The assemble function takes the shape of a Traversable and a list of elements. If there are enough elements it returns Some[Traversable] filled with all the elements (+ the remainder), otherwise it returns None (and an empty list).

Let's see it in action:

// the "shape" to fill
val shape: BinaryTree[Unit] = Bin(Leaf(()), Leaf(()))

// we assemble the tree with an exact list of elements
shape.assemble(List(1, 2)) must_== (List(), Some(Bin(Leaf(1), Leaf(2))))

// we assemble the tree with more elements
shape.assemble(List(1, 2, 3)) must_== (List(3), Some(Bin(Leaf(1), Leaf(2))))

// we assemble the tree with not enough elements
shape.assemble(List(1)) must_== (List(), None)

What's the implementation of the assemble function?
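Before the Applicative answer, it helps to preview the bookkeeping by hand for the 2-leaf shape (takeHead and fillPair here are my own standalone sketch, not the generic implementation):

```scala
// takeHead as a plain function: pop one element from the list if possible.
def takeHead[B](s: List[B]): (List[B], Option[B]) = s match {
  case Nil     => (Nil, None)
  case x :: xs => (xs, Some(x))
}

// Fill two leaves by threading the remaining-elements "state" by hand.
def fillPair[B](es: List[B]): (List[B], Option[(B, B)]) = {
  val (r1, o1) = takeHead(es)
  val (r2, o2) = takeHead(r1)
  (r2, for { a <- o1; b <- o2 } yield (a, b))
}

assert(fillPair(List(1, 2, 3)) == (List(3), Some((1, 2))))
assert(fillPair(List(1))       == (List(), None))
```

The generic assemble does exactly this threading, but for any Traversable shape, by composing the State and Option effects instead of writing them out by hand.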
The implementation uses 2 Monads (which are also Applicatives as we know now):

• the State[List[Int], _] Monad is going to keep track of what we've already consumed
• the Option[_] Monad is going to provide, or not, an element to put in the structure
• the composition of those 2 monads is State[List[Int], Option[_]] (our F1[F2[_]] in the ApplicFunctor definitions above)

So we just need to traverse the BinaryTree with one function:

def takeHead: State[List[B], Option[B]] = state { s: List[B] =>
  s match {
    case Nil => (Nil, None)
    case x :: xs => (xs, Some(x))
  }
}

The takeHead function is a State instance where each state application removes the first element of the list of elements if possible, and returns it in an Option. This is why the result of the assemble function, once we apply it to a list of elements, is of type (List[Int], Option[BinaryTree[Int]]).

A recursive implementation

Just for the fun of comparison, I'm going to write a recursive version doing the same thing:

def assemble(es: List[Int], s: BinaryTree[Unit]): (List[Int], Option[BinaryTree[Int]]) = {
  (es, s) match {
    case (Nil, _) => (es, None)
    case (e :: rest, Leaf(())) => (rest, Some(Leaf(e)))
    case (_, Bin(left, right)) => {
      assemble(es, left) match {
        case (l, None) => (l, None)
        case (Nil, Some(l)) => (Nil, None)
        case (rest, Some(l)) => assemble(rest, right) match {
          case (r, None) => (r, None)
          case (finalRest, Some(r)) => (finalRest, Some(Bin(l, r)))
        }
      }
    }
  }
}

assemble(List(1, 2, 3), shape) must_== (List(3), Some(Bin(Leaf(1), Leaf(2))))

It works, but it makes my head spin!

A classical for-loop implementation

By the way, what would be the real for loop version of that functionality? That one is not so easy to come up with because AFAIK there's no easy way to iterate on a BinaryTree to get a similar BinaryTree with just a for loop!
So, for the sake of the argument, we're going to do something similar with just a List structure:

def assemble[T](es: List[T], shape: List[Unit]) = {
  var elements = es
  var list: Option[List[T]] = None
  for (u <- shape) {
    if (!elements.isEmpty) {
      list match {
        case None => list = Some(List(elements.head))
        case Some(l) => list = Some(l :+ elements.head)
      }
      elements = elements.drop(1)
    } else {
      list = None
    }
  }
  (elements, list)
}

assemble(List(1, 2, 3), List((), ())) must_== (List(3), Some(List(1, 2)))

Contrast and compare with:

List((), ()).assemble(List(1, 2, 3)) must_== (List(3), Some(List(1, 2)))

where you just define List as a Traversable:

implicit def ListIsTraversable: Traversable[List] = new Traversable[List] {
  def traverse[F[_] : Applicative, A, B](f: A => F[B]): List[A] => F[List[B]] = (l: List[A]) => {
    val applicative = implicitly[Applicative[F]]
    l match {
      case Nil => applicative.point(List[B]())
      case a :: rest => ((_:B) :: (_: List[B])).curried ∘ f(a) <*> (rest traverse f)
    }
  }
}

The Applicative composition is indeed very powerful, but we're going to see that there are other ways to compose functions and use them with Traversables.

Monadic composition

This paragraph explores the fine relationships between applicative composition and monadic composition when doing traversals.

We've seen that Applicative instances can be composed and that Monads can be Applicative. But Monads can also be composed using the so-called Kleisli composition. If we have:

val f: B => M[C]
val g: A => M[B]
val h: A => M[C] = f ∎ g // is also a function from a value to a Monad

If we have 2 "monadic" functions f and g, we can then compose them, in the Kleisli sense, and use the composed version for a traversal. Indeed we can, but does this traversal have "nice properties"? Specifically, do we have:

traverse(f ∎ g) == traverse(f) ∎ traverse(g)

The answer is... it depends.

Monad commutativity

EIP shows that, if the Monad is commutative, then this will always be true.
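Before looking at commutativity, here is what the ∎ (Kleisli) composition above amounts to, specialised to Option (a standalone sketch of mine, not the Scalaz Kleisli):

```scala
// Kleisli composition for Option: run g first, then feed its result to f.
def kleisli[A, B, C](f: B => Option[C], g: A => Option[B]): A => Option[C] =
  a => g(a).flatMap(f)

val parse: String => Option[Int] =
  s => if (s.nonEmpty && s.forall(_.isDigit)) Some(s.toInt) else None
val recip: Int => Option[Double] =
  i => if (i == 0) None else Some(1.0 / i)

val h = kleisli(recip, parse) // h = recip ∎ parse
assert(h("4") == Some(0.25))
assert(h("0") == None) // parse succeeds, recip fails
assert(h("x") == None) // parse fails, recip is never called
```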
What is a commutative Monad you ask? A Monad is commutative if for all mx: M[X] and my: M[Y] we have: val xy = for { x <- mx y <- my } yield (x, y) val yx = for { y <- my x <- mx } yield (x, y) xy == yx This is not the case with the State Monad for example: val mx = state((n: Int) => (n+1, n+1)) val my = state((n: Int) => (n+1, n+1)) xy.apply(0) must_== (2, (1, 2)) yx.apply(0) must_== (2, (2, 1)) Monadic functions commutativity Another slightly different situation is when we have a non-commutative Monad but commutative functions: val plus1 = (a: A) => state((n: Int) => (n+1, a)) val plus2 = (a: A) => state((n: Int) => (n+2, a)) val times2 = (a: A) => state((n: Int) => (n*2, a)) Here plus1 and times2 are not commutative: (0 + 1) * 2 != (0 * 2) + 1 However it is obvious that plus1 and plus2 are commutative. What does that mean when we do a traversal? If we traverse a simple List of elements using monadic composition we get: • List(1, 2, 3).traverse(times2 ∎ plus1) === 22 • List(1, 2, 3).traverse(times2) ∎ List(1, 2, 3).traverse(plus1) === 32 We get different results. However, when f and g commute we get the same result: • List(1, 2, 3).traverse(plus2 ∎ plus1) === 10 • List(1, 2, 3).traverse(plus2) ∎ List(1, 2, 3).traverse(plus1) === 10 Applicative composition vs Monadic composition Another question we can ask ourselves is: if we consider the monadic functions as applicative functions (because each Monad is Applicative), do we get the nice "distribution" property we're after? The answer is yes, even when the functions are not commutative: • List(1, 2, 3).traverse(times2 ⊡ plus1) === 4 • List(1, 2, 3).traverse(times2) ⊡ List(1, 2, 3).traverse(plus1) === 4 Well... more or less. The real situation is a bit more complex. 
List(1, 2, 3).traverse(times2 ⊡ plus1) returns a State[Int, State[Int, List[Int]]] while the second expression returns a State[Int, List[State[Int, Int]]], so what I'm hiding here is some more manipulation to be able to query the final result with some kind of join.

You wouldn't believe it but I've only shown here half of the ideas presented in EIP!

To finish off this post here are 3 take-away points that I've learned while writing it:

• functional programming is also about mastering some of these higher-level control structures like Applicative. Once you master them, your toolbox expands considerably in power (just consider the assemble example)
• Scalaz is an incredible library but somewhat obscure to the beginner. For this post I've rewritten all the typeclasses I needed to have, and lots of examples (using specs2 of course). That gave me a much better understanding of the Scalaz functionality. You may consider doing the same to learn Scalaz (my code is available on github)
• Scala is lagging behind Haskell in terms of type inference and it's a real pain for higher-order, generic programming. This can sometimes be encapsulated away by specializing generic functions to very common types (like traverseState instead of traverse). Again, please upvote SI-2712!

Finally, I want to mention that there are many other Haskell functional pearls waiting to be transliterated to Scala. I mean, it's a shame that we don't yet have any equivalent of "Learn You a Haskell" or "Typeclassopedia" in the Scala world. I hope that my post, like this other one by Debasish Ghosh, will also contribute to bridging the gap.

31 comments:

Holy smokes, this is a great write-up mate. You've obviously put a lot of time into making it comprehensive and covering the important points in detail. I didn't even spot any trivial errors!

Wow, coming from you, that makes me both happy and proud!

great stuff

Brilliant. Thank you so very much.

Hi Eric, Great post, very thorough and accessible.
One minor problem though. (I hope this inspires you to another post on type class laws, ScalaCheck, and laziness!) Your second Applicative definition for List (zipping the elements and functions) doesn't satisfy the Applicative law. See why here: https://gist.github.com/1047343 The fix is to define pure (aka point) differently in this case. In the example, we needed the pure function g replicated three times in a list, to zip with all elements of `x`. But how can we generalize this, so we don't need prior knowledge of the length of `x`? Well, just pick a very large number: ∞! This is allowed with Haskell Lists, which are closer to Scala Streams. I just checked Scalaz; we actually make the same mistake. Our `newtype` for the zippy Applicative Functor, ZipStream, is correctly based on Stream. But the definition of Pure is incorrect. Here are the Haskell definitions of ZipList and its type class instances.

Hi Jason, I indeed had bad vibes when writing this definition and forgot to check that. A follow-up might be a good idea some day. I would like in particular to show why thinking about laws and properties is a good thing, for very practical purposes and not esoteric mathematical leisure. For example "fusion" laws are important for optimization reasons.

Hi Eric, Any chance you could provide some further detail or reference material for the "({type l[A]=Product[F1, F2, A]})#l" type statements? It's clear what they do, but going on to create similar constructs still feels like a black art. Thanks for a great post.

I was hoping that my description was clear, but let me try again from another angle, which is how to create one when you need it. Let's say you have a higher-kinded type: T[A, B, C] and a function which only accepts a higher-kinded type M[_]. Given your type T[A, B, C] you know that if you fix A to be Int and B to be String (for example) then T[Int, String, _] is the M[_] you're looking for.
For example, for a State Monad, S[A, B], you know it is a Monad when you fix A to something, say Int, and just use B as the type parameter: ({type l[B]=State[Int, B]})#l. Then, this type 'l' can have a corresponding Monad instance. This operation of fixing some type parameters to some known types and letting one other type vary is a partial type application. Unfortunately, there is no direct way to do that with Scala. The indirect way is to create a type alias and refer to that type. So if we take the example of having T[A, B, C] and fixing A to Int, B to String, that gives: ({type l[C]=T[Int, String, C]})#l where 'l' is an alias name for the higher-kinded type T[Int, String, C] with just one type parameter, C.

To be honest, I too find that there's a lot of syntax that needs to be remembered in this formula. The way I did it was by really understanding each part:

- '{}' to create an anonymous class
- 'type' to declare a type member inside an object. It is also a type alias, because it is '=' to another type
- '#' to access the type member of a class

I hope that I'm not too off-base with the exact terms that you can find in the Scala specification and that you'll be able to create such a type by yourself next time. Actually you can grab the code on github and try to reimplement by yourself the Traversable.reduce method in terms of the Traversable.traverse method for example.
I think it would also be good to mention liftConst since this is what causes f: Int => List[Int] to be used as a Const in the first traverse example, and it is crucial since it puts the value of List (i) into the const (the methods that make Const into an applicative use either Monoid.z or Monoid.append and it is not clear where other values come from).

I updated the post with a paragraph showing the liftConst method (actually inspired by the WordCount example in the Scalaz library, where Jason Zaugg implemented the last EIP example).

Erik, thanks for the nice overview of the EIP in Scala! The article looks very nice. I added a link to this post on my web page. By the way, you made the typical mistake when spelling my last name :): Oliviera --> Oliveira. You may also want to make it clear at the beginning that by "for loop" what is meant is something like "foreach" in C# (iteration over collections) and not "for" in C.

Both fixes are done. Thanks again for this wonderful article which was an eye-opener for me!

This is epic! I wish there were more explanations like this for other important papers related to functional programming. Real eye opener.

This is clearly a great post. But I hardly understand the definition of the Pointed Functor (maybe I should do something else rather than trying to learn functional programming :)): Why do you write: def point[A](a: => A): F[A] instead of: def point[A](a: A): F[A], as far as point is a "function taking a value of type A and returning a F[A]"? Thank you

Just missed "call-by-name parameter"

@Benoit, yes it's a call-by-name parameter and it's particularly important if the Functor is a Future for example. Because in that case you want the value to be evaluated in another thread and not right away.

Thank you Eric, I have caught the idea.

Really great stuff

it is posting like these that give us hope!!
I understand that Scalaz is a bit of a pre-requisite here, but this article would be much easier to follow if you had presented Monoid like you did the other basic concepts. It seems to be the only relevant concept whose interface/definition was not presented. Also... could you please redefine Const as "case class Const[M, +A](value: M)"? I have a terrible time dissociating the "A" used in the original definition from the "A" type parameters used in the following definitions.

Thanks for your comments. There was a link to Learn-Yourself-a-Haskell under the Monoid word but it was not visible, so I added a more visible link. I also changed the type in the Const definition to make it more consistent with the rest indeed.

Shouldn't the return type of the disperse function be T[A] => F[T[C]] instead of F[A] => F[T[C]]?

I fixed the type signature, thanks!

I know I'm a bit late, but a good read remains a good read for years. I think I spotted a small error in the first example for collect. It looks like there's one type parameter missing from the call:

tree.collect[({type l[A]=State[Int, A]})#l, Int, String](count, map).apply(0) must_== (2, Bin(Leaf("1"), Leaf("2")))

Here I added Int as the second type parameter, missing from your code. Really nicely done article! good luck

Hi @pagoda_5b, I'm glad that you like this post, even months later! There is no type error on the collect operation, it indeed has only 2 type parameters, one for the applicative (an Int counter using State) and one for the result of the "map" operation. The Int type parameter you are tempted to add is in a sense already there because it is in the type of the binary tree that owns the collect operation. Not that I trust myself that much btw, that's just what the compiler tells me :-)
Apples Giveaway

Jenny got 15 apples from her mother. She gave 6 apples to her brother Harry. Then, her friend Rene also wanted to have apples, so Jenny gave some apples to her too. Finally when Jenny counted the remaining apples, there were only 5 apples left. How many apples did Jenny give to Rene?

Kamila from United States
School: Home
Your answer: 4
Explanation: When Jenny gave her apples to Harry, she gave him six. That meant that she would only have nine left. She was very generous, for she gave her friend Rene apples. It was certain that she gave her four.
About Me: I am a Vegan eight year old, I am homeschooled, and I skipped a grade. My mother writes, so I have that gift.

car car from United States
School: lionvill
Your answer: 15
Explanation: the anwser is 15
About Me: i like wirner dogs

vickie from United States
School: brier creek
Your answer: 4 apples
Explanation: 6+5=11 15-11=4
About Me: im cool smart artist

smart smart car from United States
School: lionvill
Your answer: 15
Explanation: it is 15
About Me: im 2

jab from New Zealand
School: pointview school
Your answer: 4
Explanation: 15-6 9+?=5 9-5=4

Himo from United States
School: East elmnt
Your answer: 4
Explanation: 15_6=9. 9_5=4
About Me: I 'm Boy
Paper No 1956
7th International Conference on Multiphase Flow
ICMF 2010, Tampa, FL USA, May 30-June 4, 2010

Modeling fluidization in biomass gasification processes

Emmanuela Gavi*, Theodore J. Heindel†, Rodney O. Fox*
*Department of Chemical and Biological Engineering, Iowa State University, Ames, IA, USA 50011
†Department of Mechanical Engineering, Iowa State University, Ames, IA, USA 50011
egavi@iastate.edu, theindel@iastate.edu, rofox@iastate.edu

Keywords: fluidization, biomass gasification, segregation, binary particles, CFD

Abstract

Extensive validation of computational fluid dynamics (CFD) models is required when modeling biomass fluidization, because several required model inputs are not known or not easily measured experimentally for biomass. In the present work, CFD fluidization modeling of a biomass bed is validated by comparison with X-ray computed tomography experimental data. A parametric study was carried out by employing ground walnut shell or ground corncob as model biomass bed materials, and fluidization was performed at a gas velocity twice the minimum fluidization velocity (Ug = 2Umf). An important result is the use of an "effective density" for biomass in the CFD model, the use of which is necessary because the biomass particles can be characterized by an irregular shape and some degree of porosity, whereas CFD models assume the biomass particles to be solid spheres. If the mid-value of the density range provided by the manufacturer is employed, the bulk density of the solid phase in the bed is overestimated. It was observed that the bed height was well predicted with a coefficient of restitution (COR) equal to 0.9 for a sphericity range of φ = 0.8-1. For smaller φ, the predicted bed height was higher, consistent with the results of Deza et al. (2008). The results also suggest that the value of COR has a negligible effect on the predictions for biomass systems.

Introduction
The number of studies on biomass fluidization has increased in the last decade because of the interest in biomass thermal conversion to energy through gasification processes. However, the majority of these studies are experimental in nature, and only a few modeling works are available in the literature. Among the few works present in the literature that explicitly deal with biomass modeling, only one study by one of the authors of the present work (Deza et al., 2008) is aimed at validating fluidization models with experimental data. This validation step is of great importance for the subsequent mixing and reaction modeling, since the fluidization models available in the literature have been developed and validated with only standard particles, such as monodispersed dry particles shaped as spheres, cylinders, discs and spheroids. In Deza et al. (2008), a modeling study of the fluid dynamics of biomass was conducted on a two-dimensional bubbling fluidized bed with the MFIX solver. The Gidaspow drag model was used and the two coefficients that describe the deviation of the biomass particle's behavior from standard particles, namely the coefficient of restitution (COR) and the sphericity coefficient, were varied. Deza et al. (2008) found that the COR does not greatly affect the bed fluid dynamics; however, the sphericity coefficient plays an important role. Finally, for a bed composed of ground walnut shell, they suggested a large COR (≈0.85) and a low sphericity (≈0.6) is needed. The aim of this study is to validate computational fluid dynamics (CFD) simulations carried out with ANSYS Fluent in a biomass-filled fluidized bed. A previous work assessed the validity of the ANSYS Fluent solver and the implemented drag models in a fluidized bed of glass beads (Min et al., 2010).
The present work takes a step forward and studies the fluid dynamic behavior of a single component biomass fluidized bed constituted by ground walnut shell (GWS) or ground corncob (CCB), both Geldart B particles. Experimental data for validation are obtained by comparison with X-ray computed tomography The paper is organized as follows. First a short review on fluidization theory is presented. Then operating conditions and numerical details are reported. Finally, results are summarized and some conclusions are drawn. D Diameter of bed (m) e,, Coefficient of restitution F g Drag coefficient h Bed height (m) I Interaction term M Mass (kg) P Pressure (N m2) S Stress tensor u Velocity (m s-') V Volume (m3) Greek letters 0 Sphericity coefficient e Void fraction p Density (kg m 3) Paper No 1956 bed Bed corr Correction eff Effective g Gas phase IN Initial sa Solid phase alpha Fluidization Theory In the Eulerian-Eulerian multi-fluid model approach, the gas and solid phases are treated as interpenetrating continue as explained in Syamlal et al. (1993). The sum of their volume fractions must sum to one: 6g +YeG =1 The subscript g stands for the gas phase, whereas the subscript sl stands for the 2-th solid phase. The continuity equation for the gas phase is S(,gpg)+ V (ggug) 0, (2) whereas for each solid phase it is a 0 v,(^ o,)+V(E (3) in which pg and ps, are the densities, and ug and us, are the velocities of the gas and a-th solid phases, respectively. The momentum balance equation for gas and a-th solid phase are the following: a ( cgpgug)+V. (puu g)u = V-S Z Ig + g, pg, S(EpsaPsau ) + V (apsausa sa) V Sa +Iga I ZI +SsaPsag, where Sg is the gas stress tensor, IgX is the term representing the interaction between the gas and the 2-th solid phase, Ssa is the solids stress tensor, Ig is the term representing the interaction of the a-th solid phase with the gas phase, and finally I, represents the interaction of the a-th solid phase with the 2-th solid phase. 
In this work, only one solid phase will be treated, therefore the term Ia is not further defined here. Table 1 summarizes the constitutive equations for the gas and solid stress tensor for a single solid phase as they are defined in ANSYS Fluent 6.3 (Fluent Inc., 2006). A simple Newtonian closure is used for the gas stress tensor while the kinetic theory of granular flows is employed to calculate the solid stress tensor. Note that MFIX describes the shearing granular flows by combining the viscous and 7th International Conference on Multiphase Flow ICMF 2010, Tampa, FL USA, May 30-June 4, 2010 plastic flow regimes by introducing a switch at a critical packing Sg* (Syamlal et al., 1993), whereas ANSYS Fluent adopts the approach by Johnson and Jackson (1987), that combines the two theories by adding the two formulas. The interaction between gas and solid phase Ig, is defined ga = -CVP Pg -F (us- ug) (5) in which the first term on the right-hand side is the buoyancy force and the second term is the drag force, with the drag coefficient Fg,. In the present work, the drag model derived by Gidaspow (1994) is employed, in which for eg 0.8 the Ergun equation is used, and for g > 0.8 the Wen and Yu (1) equation is used: ss-(1 )/g sa glPg U--Usa 150 s 2 +1.75 d2 d a p pa if <0.8 F =a 3D sagPgg Ug sa l 265 4D d P if s > 0.8 in which dpa is the size of the solid particle and CD is a parameter defined as follows: 24(1+ 0.15Re0687) C, = --(7) D Re dpug -us g with Reg = - The fluidization model requires two pieces of information about the particle fluid dynamics that are not known nor easily measurable experimentally for biomass particles: the COR (ea,) and the sphericity coefficient 4. The COR indicates the degree of elasticity of the collisions between solid particles, so that e,, = 1 indicates perfectly elastic collisions, and e,, = 0 indicates perfectly inelastic collisions. The sphericity coefficient was introduced to approximate biomass particle shape relative to a sphere. 
It is defined as the ratio between the surface area of a sphere with the same volume of the biomass particle, and the surface area of the biomass particle: Sphericity in the range 0.8 < q <1 indicates a biomass particle that is approximately isometrical. Sphericities <<0.8 and q <0.5 indicate flat particles and extremely flat particles, respectively (Cui and Grace, Paper No 1956 Numerical Scheme CFD simulations of single component biomass bed fluidization were completed with the commercial software ANSYS Fluent 6.3.26 on a Linux platform. Two-dimensional (2D) and three-dimensional (3D) simulations were run. The hexahedral cells size is 4 mm and this leads to a grid size equal to 2888 for the 2D grid and 59400 for the 3D grid. Time-dependent simulations are carried out with a time step equal to 10-4 s. The simulations are solved for 10 s to allow for start-up transients to die down, and then the subsequent 60 s are used for time averaging, by sampling every 10 time steps. The computational time required to run a 70 s flow time simulation is approximately 10 days for a 2D simulation on 8 processors and 48 days for a 3D simulation on 8 processors on the high performance computing (HPC) machine at Iowa State University. Ground walnut shell (GWS) or ground corncob (CCB) provided easily obtainable model biomass systems and was fluidized with a fluidization gas velocity Ug 2Umfand no side gas injection. The details of the simulation set up, in terms of chosen fluidization models, numerical scheme details and case set up details are reported in Tables 2, 3 and 4, respectively. This work focuses on developing an approach for the biomass fluidization simulation. From this point on the developed approach will be indicated as NEW. The approach that is the current standard in fluidized bed modeling assumes biomass particles to be spherical and non-porous and will be identified in this work as STD. 
The simulations that were performed are grouped into three sets and summarized in Table 5.

Results and Discussion

Simulations were run in ANSYS Fluent with the standard approach on the 3D grid for GWS and CCB beds with different COR and sphericity (set 1). The results confirm those found by Deza et al. (2008): the COR does not strongly influence the fluidization results, whereas the sphericity coefficient plays an important role in the fluidization. The results are reported for GWS, COR = 0.9 and sphericity 0.6 and 0.9 in Fig. 1. Note that the experimental data in Fig. 1 and subsequent figures were obtained using X-ray computed tomography imaging of similar cold-flow fluidized beds. Details of these experiments can be found in Min et al. (2010), Franka and Heindel (2009), and Drake and Heindel (2009). In the STD approach, the density of the particles is set equal to the nominal density of the biomass material given by the manufacturer, and the solid packing limit is set to 0.63. The initialization is performed with the experimentally observed packing (or bulk density) on the theoretical bed height. It can be observed in Fig. 1 that even though the bed height is well predicted with a sphericity of 0.6, the void fraction is underestimated. Because the drag models available in the literature were derived for regular and solid particles, whereas biomass particles can be irregularly shaped and porous, some additional parameters need to be considered in the drag model. The sphericity is not sufficient, as it only regulates the transition of the particle shape from isometric to flat, while the biomass particles considered here, though not regularly shaped, are isometric rather than flat. The high drag and the low packing observed experimentally can be caused by the porosity of the GWS and CCB biomass particles employed.
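The switched Gidaspow drag law of Eqs. (6)-(7) can be sketched as a small Python function. This is an illustration, not code from the paper; the function name is arbitrary, and folding the sphericity into an effective diameter d_eff = φ d_p is an assumption on my part (a common convention, but not stated in the text):

```python
def gidaspow_drag(eps_g, rho_g, mu_g, d_p, slip, phi=1.0):
    """Interphase drag coefficient F_ga after Gidaspow (1994), Eqs. (6)-(7).

    Ergun correlation for dense regions (eps_g <= 0.8), Wen-Yu otherwise.
    `slip` is |u_g - u_s|; `phi` multiplies d_p (an assumed convention).
    """
    eps_s = 1.0 - eps_g
    d_eff = phi * d_p
    if eps_g <= 0.8:  # Ergun (dense) branch
        return (150.0 * eps_s * (1.0 - eps_g) * mu_g / (eps_g * d_eff ** 2)
                + 1.75 * eps_s * rho_g * slip / d_eff)
    re = eps_g * rho_g * d_eff * slip / mu_g
    cd = 24.0 / re * (1.0 + 0.15 * re ** 0.687) if re > 0 else 0.0
    return 0.75 * cd * eps_s * eps_g * rho_g * slip / d_eff * eps_g ** -2.65
```

With air-like properties and a 550 μm particle, the dense (Ergun) branch returns a drag coefficient orders of magnitude larger than the dilute (Wen-Yu) branch, which is the behavior the switch is designed to capture.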
For these reasons, the use of a correction and an effective density is suggested here. The effective density is computed for each material from the experimental bed mass and volume, assuming the packing limit of biomass particles to be equal to the experimental packing limit of glass beads (ε_s = 0.58):

    ρ_sα = ρ_sα,eff = m_bed / (ε_s V_bed)    (9)

The initialization of the simulations is performed by employing a bulk density computed from the effective density and a solid packing that is slightly lower than the solid packing limit (ε_sα = 0.55) to facilitate the onset of fluidization. The bed height is therefore raised in order to introduce the correct amount of mass (h_bed = 0.165 m). With the introduction of the effective density the correct drag force is obtained; however, the simulations still consider the biomass particles as non-porous spheres. In order to compare simulation results with experimental data, it is necessary to apply a further correction, because the X-ray CT experiments count the gas phase present in the particle pores and at the interstices between non-spherical particles as bed void fraction. Therefore the corrected simulated bed void fraction is calculated as

    ε_g,corr = 1 − (ρ_sα,eff / ρ_sα,nom) ε_sα    (10)

The second set of simulations (set 2) is performed with the NEW approach in 2D and is aimed at finding the best set of parameters for GWS and CCB in terms of COR and sphericity. The results are reported in Figs. 2 and 3. In the graphs relative to GWS, for which two CORs were employed, it can be observed that this parameter is not determining for the simulation results. The sphericity instead largely affects the results, both for GWS and CCB. A good agreement with experiments in the bulk of the bed is given by a sphericity of one. The predicted bed expansion appears to be too large; however, it should be kept in mind that these are 2D simulations, and therefore they have one fewer degree of freedom.
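The effective-density and void-fraction corrections of Eqs. (9)-(10) amount to two one-line formulas, sketched below in Python (illustrative only; Eq. (10) is used here in the porosity-correction form reconstructed above, and the sample numbers are the GWS entries from Table 4):

```python
def effective_density(m_bed, v_bed, eps_s=0.58):
    """Eq. (9): rho_eff = m_bed / (eps_s * V_bed), using the glass-bead packing limit."""
    return m_bed / (eps_s * v_bed)

def corrected_void_fraction(eps_s_sim, rho_eff, rho_nom):
    """Eq. (10): fold intra-particle porosity back into the simulated void fraction."""
    return 1.0 - (rho_eff / rho_nom) * eps_s_sim

# GWS: nominal 1300 kg/m3, effective 985.61 kg/m3 (Table 4), initial solid packing 0.55
eps_corr = corrected_void_fraction(0.55, 985.61, 1300.0)
# eps_corr ~ 0.583, larger than the purely geometric void 1 - 0.55 = 0.45
```

Because ρ_eff < ρ_nom for a porous particle, the corrected void fraction is always larger than the geometric value, which is what a porosity-aware X-ray CT measurement sees.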
This issue is resolved, as expected, in the 3D simulations (set 3), the results of which are shown in Figs. 4 and 5, where the comparison with the STD approach is also reported. With the NEW approach a considerable improvement is obtained, as experiments and simulation results are in reasonable agreement, both in terms of bed height and void fraction. It can also be observed that small changes in the sphericity coefficient do not strongly modify the results. Therefore it can be concluded that the best way to model biomass fluidization is to employ the NEW approach with COR = 0.9 and sphericity 1.0. Additional confirmation of these findings can be found in the following validation of simulation results, carried out along diameter lines located at different heights in the reactor (h/D = 0.25, 0.50, 0.75 and 1). These results are shown in Figs. 6 and 7, where the STD approach results are also reported. The NEW approach with coefficient of restitution 0.9 and sphericity 1.0 gives results that are in better agreement with experimental data, especially at intermediate heights, where the differences from the STD approach are more evident. A final comparison is reported in Figs. 8 and 9, in terms of void fraction contour plots. The experimental measurements, the results with the NEW approach with coefficient of restitution 0.9 and sphericity 1.0, and the results with the STD approach with coefficient of restitution 0.9 and sphericity 0.6 and 1.0 are reported on the two xz and xy planes. The bed appears defluidized with the STD approach, even though with sphericity 0.6 the correct bed height is predicted. The best overall agreement is given by the NEW approach, which is able to predict a void fraction in reasonable agreement with experiments at most bed heights and radial positions.
Conclusions

The modeling of the fluidization dynamics of a bed of biomass material was treated by employing the commercial software ANSYS Fluent and a fluidization model available in the literature. Experimental data were used to validate the simulation results, and it was found that in order to predict the correct bed height and void fraction in the bed, a correction was needed to account for the irregularity and porosity of biomass particles (in this work, ground walnut shell and ground corncob). Therefore, an effective density was used instead of the nominal density of the biomass material, the initialization of the calculation was performed so as to introduce the correct amount of mass into the bed, and finally a correction term was used to allow the simulation data to be compared with experiments. With these adjustments to the model, a parametric study was completed to identify the most appropriate coefficient of restitution and sphericity, and it was found that good agreement with experimental data is obtained with a sphericity of 1.0 and COR = 0.9.

Acknowledgments

This project was funded by the ConocoPhillips Company.

Figure 1: Time- and plane-averaged void fraction as a function of bed height for GWS (2D simulations with STD approach). Symbols: experiments. Lines: simulations (green φ = 1.0, e_αα = 0.9; cyan φ = 0.9, e_αα = 0.9).

Figure 2: Time- and plane-averaged void fraction as a function of bed height for GWS (2D simulations with NEW approach): symbols experiments; green φ = 0.6, e_αα = 0.9; cyan φ = 0.6, e_αα = 0.7; red φ = 1.0, e_αα = 0.9; purple φ = 1.0, e_αα = 0.7.

Figure 3: Time- and plane-averaged void fraction as a function of bed height for CCB (2D simulations with NEW approach): symbols experiments; green φ = 0.6, e_αα = 0.9; red φ = 1.0, e_αα = 0.9.
Figure 4: Time- and plane-averaged void fraction as a function of bed height for GWS (3D simulations). Experiments compared with simulations (COR = 0.9): symbols experiments; green STD approach φ = 1.0; cyan STD approach φ = 0.6; red NEW approach φ = 1.0; blue NEW approach φ = 0.9.

Figure 5: Time- and plane-averaged void fraction as a function of bed height for CCB (3D simulations). Experiments compared with simulations (COR = 0.9): symbols experiments; green STD approach φ = 1.0; cyan STD approach φ = 0.6; red NEW approach φ = 1.0; blue NEW approach φ = 0.9.

Figure 6: Average void fraction along diameter lines at four different bed heights for GWS (3D simulations): symbols experiments; red NEW approach φ = 1.0, e_αα = 0.9; cyan STD approach φ = 0.6, e_αα = 0.9; blue STD approach φ = 1.0, e_αα = 0.9.

Figure 7: Average void fraction along diameter lines at four different bed heights for CCB (3D simulations): symbols experiments; red NEW approach φ = 1.0, e_αα = 0.9; cyan STD approach φ = 0.6, e_αα = 0.9; blue STD approach φ = 1.0, e_αα = 0.9.

Figure 8: Contour plots of void fraction for GWS in the xz (top row) and yz (bottom row) planes (3D simulations). From left to right column: experiments; NEW approach φ = 1.0, e_αα = 0.9; STD approach φ = 0.6, e_αα = 0.9; STD approach φ = 1.0, e_αα = 0.9.

Figure 9: Contour plots of void fraction for CCB in the xz (top row) and yz (bottom row) planes (3D simulations).
From left to right column: experiments; NEW approach φ = 1.0, e_αα = 0.9; STD approach φ = 0.6, e_αα = 0.9; STD approach φ = 1.0, e_αα = 0.9.

References

Cui, H. & Grace, J.R. Fluidization of biomass particles: a review of experimental multiphase flow aspects. Chemical Engineering Science, 62, 45-55 (2007)

Deza, M., Battaglia, F. & Heindel, T.J. A validation study for the hydrodynamics of biomass in a fluidized bed. FEDSM2008, 2008 ASME Fluids Engineering Division Summer Conference, Jacksonville, FL: ASME Press, Paper FEDSM2008-55158 (2008)

Drake, J.B. & Heindel, T.J. Repeatability of gas holdup in a fluidized bed using X-ray computed tomography. FEDSM2009, 2009 ASME Fluids Engineering Division Summer Meeting, Vail, CO: ASME Press, Paper FEDSM2009-78041 (2009)

Fluent Inc. Fluent 6.3 User's Guide (2006)

Franka, N.P. & Heindel, T.J. Local time-averaged gas holdup in a fluidized bed with side air injection using X-ray computed tomography. Powder Technology, 193, 69-78 (2009)

Gidaspow, D. Multiphase Flow and Fluidization: Continuum and Kinetic Theory Descriptions. Academic Press, New York (1994)

Johnson, P.C. & Jackson, R. Frictional-collisional constitutive relations for granular materials with application to plane shearing. Journal of Fluid Mechanics, 176, 67-93 (1987)

Joseph, G.G., Laboreiro, J., Hrenya, C.M. & Stevens, A.R. Experimental segregation profiles in bubbling gas-fluidized beds. AIChE Journal, 53, 2804-2813 (2007)

Min, J., Drake, J.B., Heindel, T.J. & Fox, R.O. Experimental validation of CFD simulations of a lab-scale fluidized-bed reactor with and without side-gas injection. AIChE Journal, to appear (2010)

Syamlal, M., Rogers, W.A. & O'Brien, T.J.
MFIX Documentation: Theory and Guide. DOE/MC/21353-2373, NTIS/DE87006500 (1993)

Table 1: Constitutive equations for gas and solid stress tensor, for a single solid phase α, as defined in ANSYS Fluent 6.3 (Fluent Inc., 2006).

Gas stress tensor: S_g = −P_g I + μ_g [∇u_g + (∇u_g)^T] − (2/3) μ_g (∇·u_g) I
Solid stress tensor: S_sα = −P_sα I + τ_sα
Solid pressure: P_sα = ε_sα ρ_sα Θ_α + 2 ρ_sα (1 + e_αα) ε_sα² g_0,αα Θ_α
Radial distribution: g_0,αα = [1 − (ε_sα / ε_sα,max)^(1/3)]^(−1)
Solid shear stresses: τ_sα = ε_sα μ_sα [∇u_sα + (∇u_sα)^T] + ε_sα (λ_sα − (2/3) μ_sα) (∇·u_sα) I
Solid shear viscosity: μ_sα = μ_sα,col + μ_sα,kin + μ_sα,fr
Collisional viscosity: μ_sα,col = (4/5) ε_sα ρ_sα d_pα g_0,αα (1 + e_αα) (Θ_α/π)^(1/2)
Kinetic viscosity: μ_sα,kin = [ε_sα ρ_sα d_pα (Θ_α π)^(1/2) / (6(3 − e_αα))] [1 + (2/5)(1 + e_αα)(3e_αα − 1) ε_sα g_0,αα]
Frictional viscosity: μ_sα,fr = P_s sin φ_i / (2 √I_2D)
Solid bulk viscosity: λ_sα = (4/3) ε_sα ρ_sα d_pα g_0,αα (1 + e_αα) (Θ_α/π)^(1/2)
Granular temperature (algebraic formulation): 0 = (−P_sα I + τ_sα) : ∇u_sα − γ_Θα + φ_gα, with collisional dissipation γ_Θα = [12(1 − e_αα²) g_0,αα / (d_pα √π)] ρ_sα ε_sα² Θ_α^(3/2) and gas-solid transfer φ_gα = −3 K_gα Θ_α

Table 2: Numerical details in CFD simulations.

Solver: pressure-based
Unsteady formulation: second order implicit
Time step: 10^-4 s (specified)
Maximum number of iterations: 100
Data sampling for time statistics: every 10 time steps
Operating pressure: 1 atm
Gas density and viscosity: 2.417 kg/m³, 1.8×10^-5 Pa·s
Inlet boundary conditions: 0.362 m/s (GWS), 0.328 m/s (CCB); Ug = 2Umf
Outlet boundary conditions: outflow (fully developed flow)
Wall boundary for gas phase: no slip (specified)
Wall boundary for solid phase: 0 Pa (specified)
Convergence criteria: 10^-6 (specified)
Pressure-velocity coupling: SIMPLE (phase-coupled)
Momentum discretization: second-order upwind
Volume fraction discretization: QUICK

Table 3: Chosen models for the solid phase.

Granular viscosity: Syamlal-O'Brien
Granular bulk viscosity: Lun et al.
Frictional viscosity: Schaeffer
Angle of internal friction: constant = 30°
Frictional pressure: based-ktgf
Frictional modulus: derived
Granular temperature: algebraic
Solids pressure: Lun et al.
Radial distribution: Syamlal-O'Brien
Elasticity modulus: derived

Table 4: Bed material properties.

Particle diameter, μm: 550 (GWS), 550 (CCB)
Particle nominal density, kg/m³: 1300 (GWS), 1000 (CCB)
Particle effective density, kg/m³: 985.61 (GWS), 687.8 (CCB)
Initial bed height (STD), mm: 152 (GWS), 152 (CCB)
Initial solid packing (STD): 0.44 (GWS), 0.39 (CCB)
Solid packing limit (STD): 0.63 (GWS), 0.63 (CCB)
Initial bed height (NEW), mm: 165 (GWS), 165 (CCB)
Initial solid packing (NEW): 0.55 (GWS), 0.55 (CCB)
Solid packing limit (NEW): 0.58 (GWS), 0.58 (CCB)

Table 5: Description of the three sets of simulations that were performed, with the chosen parameter values.

1. Fluent 3D parametric study with STD approach — GWS: φ = 0.6, 0.8, 1.0; e_αα = 0.7, 0.8, 0.9. CCB: φ = 0.6, 0.8, 1.0; e_αα = 0.9.
2. Fluent 2D parametric study with NEW approach — GWS: φ = 0.6, 1.0; e_αα = 0.7, 0.9. CCB: φ = 0.6, 1.0; e_αα = 0.9.
3. Fluent 3D final with NEW approach — GWS: φ = 0.9, 1.0; e_αα = 0.9. CCB: φ = 0.9, 1.0; e_αα = 0.9.
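Two of the closures in Table 1 are simple enough to evaluate directly: the radial distribution function, which diverges as the packing limit is approached, and the Lun et al. solids pressure. The following Python helpers are illustrative (not from the paper), using the NEW-approach packing limit 0.58 as the default:

```python
def radial_distribution(eps_s, eps_s_max=0.58):
    """g0 = [1 - (eps_s/eps_s_max)^(1/3)]^-1, the ANSYS Fluent form in Table 1."""
    return 1.0 / (1.0 - (eps_s / eps_s_max) ** (1.0 / 3.0))

def solids_pressure(eps_s, rho_s, theta, e_aa=0.9, eps_s_max=0.58):
    """Lun et al. solids pressure: kinetic term plus collisional term (Table 1).

    P_s = eps_s * rho_s * Theta * (1 + 2 * (1 + e_aa) * eps_s * g0)
    """
    g0 = radial_distribution(eps_s, eps_s_max)
    return eps_s * rho_s * theta * (1.0 + 2.0 * (1.0 + e_aa) * eps_s * g0)
```

In the dilute limit g0 → 1 and the pressure reduces to the ideal-gas-like kinetic term ε_s ρ_s Θ, while near ε_s,max the collisional term dominates and stiffens the bed.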
Dynamic Walking MATLAB Simulation Guide

Chapter 3 - Simulation of the Simplest Walker

Simplest Walker MATLAB File

To ensure that our equations of motion for the Cornell Ranger are correct, we will now reduce the Cornell Ranger down to a simpler model. The simplest walker is a two-dimensional bipedal passive walker that has point masses at the feet and hip and massless legs. By changing some of the parameters, we will be able to reduce the Cornell Ranger to the simplest walker and demonstrate that our dynamics are correct.

Figure 13. Simplest Walker Model

Reduction of the Cornell Ranger Model to the Simplest Walker

To reduce the Cornell Ranger to the simplest walker, some of the parameters must be changed. The parameters that will be used are listed below. In addition, since the foot has a radius of zero, we can set two of the angles to be constant and lock them at fixed values. Lastly, we will allow the swing leg to pass through the ramp for near-vertical stance leg angles. This eliminates the foot-scuffing problem that occurs in passive walkers. In physical walkers this problem can be overcome by adding knees, using stepping stones, or allowing the swing foot to lift up.

Fixed Points and Stability

Before we continue on to create our simulation, we must first discuss the ideas of fixed points and stability. A step can be thought of as a stride function or Poincare map, S. This function takes a vector of the angles and angle rates at a particular point in the motion and returns the angles and rates at the next occurrence of that point. In our case, we will examine the point immediately after heelstrike. The result of the stride function can be found by integrating the equations of motion over one step.
A period-one gait cycle exists if, for a given set of initial conditions z, the stride function returns the same conditions. That is, we are interested in the zeroes of the function

    g(z) = S(z) - z

A period-two gait cycle exists if the stride function returns the initial conditions after two steps. The initial conditions for which a gait cycle exists are called fixed points, and the gait cycles are periodic walking solutions. We will use MATLAB to find fixed points and create periodic walking motions.

Additionally, we must check the stability of the cycles. We will do this by finding the eigenvalues of the Jacobian of the stride function, J = dS/dz. If the eigenvalues of the Jacobian are within the unit circle, then sufficiently small perturbations will decay to zero and the system will return to the gait cycle. If any eigenvalues are outside the unit circle, perturbations along the corresponding eigenvector will grow and drive the gait cycle unstable. If any eigenvalues lie on the unit circle, the cycle is neutrally stable for small perturbations along the corresponding eigenvector, and these perturbations will remain constant. We will use MATLAB to find the Jacobian and its eigenvalues.

Simplest Walker MATLAB Simulation

We will now set up the simulation. Since we have reduced the Cornell Ranger to the simplest walker, we can re-use the equations of motion code that was derived for the Cornell Ranger.
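The fixed-point-plus-eigenvalue recipe above is language-agnostic, so here is a minimal stand-alone Python sketch of the same idea (not part of the guide). A toy linear map plays the role of the stride function S; the fixed point of g(z) = S(z) − z is found by damped iteration, and stability is judged from the eigenvalue moduli of the Jacobian of S:

```python
import math

def stride(z):
    """A toy linear 'stride function' standing in for one walking step."""
    a, b = z
    return (0.6 * a + 0.1 * b + 0.2, -0.2 * a + 0.5 * b + 0.1)

def jacobian(f, z, h=1e-6):
    """Central-difference Jacobian of a map R^n -> R^n."""
    n = len(z)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        zp, zm = list(z), list(z)
        zp[j] += h
        zm[j] -= h
        fp, fm = f(zp), f(zm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

# Find the fixed point of g(z) = S(z) - z by damped iteration.
z = [0.0, 0.0]
for _ in range(300):
    s = stride(z)
    z = [zi + 0.8 * (si - zi) for zi, si in zip(z, s)]

# Stability: eigenvalue moduli of the 2x2 Jacobian of S at the fixed point.
J = jacobian(stride, z)
tr = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
disc = tr * tr - 4.0 * det
if disc >= 0:
    moduli = [abs((tr + math.sqrt(disc)) / 2), abs((tr - math.sqrt(disc)) / 2)]
else:
    moduli = [math.sqrt(det)] * 2  # complex conjugate pair: |lambda| = sqrt(det)
stable = max(moduli) < 1.0
```

For this toy map the fixed point is (0.5, 0) and the eigenvalue moduli are below one, so small perturbations decay, exactly the period-one stability criterion described above.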
A collision detection function must also be created. This function takes in the current time, positions, and parameters and determines if a collision has occurred. It will be used with the integrator to stop ode113 from integrating when heelstrike occurs. The collision detection condition that was derived in the symbolic derivation is used, with an exception to ignore foot scuffing.

    function [gstop, isterminal, direction] = collision(t, z, GL_DIM)
    M = GL_DIM(1); m = GL_DIM(2); c = GL_DIM(3); I = GL_DIM(4); g = GL_DIM(5);
    l = GL_DIM(6); w = GL_DIM(7); r = GL_DIM(8); d = GL_DIM(9); gam = GL_DIM(10);
    q1 = z(1); q2 = z(3); q3 = z(5); q4 = z(7);
    gstop = -l*cos(q1+q2) + d*cos(q1) + l*cos(-q3+q1+q2) - d*cos(q1+q2-q3-q4);
    if (q3 > -0.05)          % no collision detection for foot scuffing
        isterminal = 0;      % 0: keep integrating to the final time
    else
        isterminal = 1;      % 1: tell the ode solver to terminate at the event
    end
    direction = -1;          % only trigger when gstop is decreasing through zero

Next, the main driver needs to be set up. First, the initial conditions vector and parameters are defined; the initial conditions vector only needs the four states of the reduced model (the stance and swing angles and their rates).

    %%%% Root finding, Period one gait %%%%
    options = optimset('TolFun',1e-13,'TolX',1e-13,'Display','off');
    [zstar,fval,exitflag,output,jacob] = fsolve(@fixedpt,z0,options,GL_DIM);
    if exitflag == 1
        disp('Fixed points are');
        disp(zstar);
    else
        error('Root finder not converged, change guess or change system parameters')
    end

The fixedpt function calls the onestep function, which integrates the equations of motion and applies the heelstrike equations over the specified number of steps; fixedpt calls onestep for one step and returns the change in state:

    function zdiff = fixedpt(z0,GL_DIM)
    zdiff = onestep(z0,GL_DIM) - z0;

Next, we will find the stability of the found fixed points.

    %%%% Stability, using linearised eigenvalue %%%%
    J = partialder(@onestep,zstar,GL_DIM);
    disp('EigenValues for linearized map are');
    eig(J)
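The job the collision function does for ode113 — stopping the integration where an event function crosses zero in a given direction — can be illustrated with a small hand-rolled Python integrator (a sketch, not the guide's code; the helper names are mine). A ball dropped from height 1 m stands in for the walker, with gstop = height and direction = −1:

```python
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def integrate_until_event(f, gstop, t, y, h, t_end):
    """Step forward; when gstop crosses zero going down, refine by bisection."""
    while t < t_end:
        y_new = rk4_step(f, t, y, h)
        if gstop(t + h, y_new) <= 0 < gstop(t, y):  # downward crossing in [t, t+h]
            a, b, ya = t, t + h, y
            for _ in range(60):                     # bisection on the event time
                m = 0.5 * (a + b)
                ym = rk4_step(f, a, ya, m - a)
                if gstop(m, ym) > 0:
                    a, ya = m, ym
                else:
                    b = m
            return a, ya
        t, y = t + h, y_new
    return t, y

# falling particle: stop when the height crosses zero
f = lambda t, y: [y[1], -9.81]
t_hit, y_hit = integrate_until_event(f, lambda t, y: y[0], 0.0, [1.0, 0.0], 1e-3, 5.0)
```

The returned time matches the analytic impact time sqrt(2h/g); MATLAB's event machinery does the same localization, just with a higher-order root finder on the dense output.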
We will define a function partialder, which calculates the Jacobian of the Poincare map. The Jacobian is estimated using the central difference method of approximating derivatives. Central difference is accurate to the perturbation size squared, as opposed to the perturbation size for forward difference, but requires a little less than twice the number of evaluations. We will use a small perturbation, pert, on each component of the state.

    function J = partialder(FUN,z,GL_DIM)
    pert = 1e-5;   % perturbation size (a typical small value)
    %%% Using central difference, accuracy quadratic %%%
    for i = 1:length(z)
        ztemp1 = z; ztemp2 = z;
        ztemp1(i) = ztemp1(i) + pert;
        ztemp2(i) = ztemp2(i) - pert;
        J(:,i) = (feval(FUN,ztemp1,GL_DIM) - feval(FUN,ztemp2,GL_DIM)) / (2*pert);
    end

We now set up the integration of the equations of motion. To do this we create a function called onestep, which integrates the equations of motion over the specified number of steps. The function takes in the initial conditions, the parameters, and the number of steps.

    function [z,t] = onestep(z0,GL_DIM,steps)
    M = GL_DIM(1); m = GL_DIM(2); c = GL_DIM(3); I = GL_DIM(4); g = GL_DIM(5);
    l = GL_DIM(6); w = GL_DIM(7); r = GL_DIM(8); d = GL_DIM(9); gam = GL_DIM(10);

If no number of steps is specified, we assume that we are trying to find the fixed points; in this case, we only want to return the final state of the robot.

    flag = 1;
    if nargin < 2
        error('need more inputs to onestep');
    elseif nargin < 3
        flag = 0;   % send only the last state
        steps = 1;
    end

Now the motion can be integrated. In addition to setting the tolerances, an event is set in the integrator options; this event calls our collision function to detect collisions while the equations of motion are being integrated. When a collision is detected, the integrator is stopped. The heelstrike equations are then applied to the final conditions from the single-stance integration, and the resulting state vector and time are set as the initial conditions for the next step. At the end of the steps, if flag is one, all of the positions and times are returned; if not, only the last state is returned.
    options = odeset('RelTol',1e-9,'AbsTol',1e-9,'Events',@collision);  % tolerances and the collision event
    t0 = 0; dt = 5;
    t_ode = t0; z_ode = z0;
    for i = 1:steps
        tspan = linspace(t0,t0+dt,1000);
        [t_temp, z_temp, tfinal] = ode113(@ranger_ss_simplest,tspan,z0,options,GL_DIM);
        zplus = heelstrike(t_temp(end), z_temp(end,:), GL_DIM);  % apply the heelstrike map (defined elsewhere)
        z0 = zplus;
        t0 = t_temp(end);
        %%% don't include the first point
        t_ode = [t_ode; t_temp(2:end); t0];
        z_ode = [z_ode; z_temp(2:end,:); z0];
    end
    z = [zplus(1:2) zplus(5:6)];
    if flag == 1
        z = z_ode;   % return all states and times
        t = t_ode;
    end

Lastly, we animate the robot over the steps and create plots of the stance leg and swing leg angles.

Comparison to Results Using Simplest Walker Equations

We can now check our simulation of the simplest walker, based on the equations of motion of the Cornell Ranger, by simulating the simplest walker using the equations of motion found in The Simplest Walking Model paper. We will use the reduced equations of motion. Since we already have a walking simulation set up, we can simply replace the equations of motion in the appropriate sections. In the single-stance equations of motion function, we replace the mass matrix and right-hand side with

    MM = [1 0; 1 -1];
    RHS = [sin(q1-gam); cos(q1-gam)*sin(q3) - (u1^2)*sin(q3)];

Additionally, in the heelstrike function, we replace the matrix calculation of the velocities with the following code.

    u1 = cos(2*q1)*v1;
    u2 = 0;
    u3 = cos(2*q1)*(1-cos(2*q1))*v1;
    u4 = 0;

Running the code reveals that the same fixed points are found and the resulting motion is the same. This result verifies our Ranger simulation.

Figure 14. Plot of the Stance and Swing Angles for the Simplest Walker Paper Equations of Motion
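The single-stance dynamics implied by the MM and RHS snippet above can be unpacked by hand and integrated in plain Python. This is my reading of those two lines, not code from the guide: row one of MM*[q1''; q3''] = RHS gives q1'' directly, row two gives q1'' − q3''; the ramp slope value is an assumption here (0.009 is the slope used by Garcia et al. for the simplest walker):

```python
import math

GAMMA = 0.009  # ramp slope (assumed; the Garcia et al. value for the simplest walker)

def rhs(z):
    """Single-stance accelerations read off MM = [1 0; 1 -1] and the RHS above.

    State z = [q1, u1, q3, u3] (stance angle/rate, swing angle/rate)."""
    q1, u1, q3, u3 = z
    a1 = math.sin(q1 - GAMMA)                                   # row 1: 1*a1 + 0*a3 = RHS(1)
    a3 = a1 - (math.cos(q1 - GAMMA) - u1 ** 2) * math.sin(q3)   # row 2: a1 - a3 = RHS(2)
    return [u1, a1, u3, a3]

def rk4(z, h):
    k1 = rhs(z)
    k2 = rhs([zi + h / 2 * ki for zi, ki in zip(z, k1)])
    k3 = rhs([zi + h / 2 * ki for zi, ki in zip(z, k2)])
    k4 = rhs([zi + h * ki for zi, ki in zip(z, k3)])
    return [zi + h / 6 * (a + 2 * b + 2 * c + d)
            for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]

z = [0.2, -0.2, 0.4, -0.3]   # an arbitrary (not periodic) initial state
for _ in range(1000):        # integrate one time unit of single stance
    z = rk4(z, 0.001)
```

Note the structure the equations expose: the stance leg is an inverted pendulum driven only by gravity, while the swing-leg acceleration differs from it by a term that vanishes when the swing angle q3 is zero.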
Laplace-Beltrami equation

From Encyclopedia of Mathematics

Beltrami equation

A generalization of the Laplace equation for functions in a plane to the case of functions f on an arbitrary two-dimensional Riemannian manifold R. For a surface with first fundamental form

    ds² = E dx² + 2F dx dy + G dy²,

the Laplace-Beltrami equation has the form

    ∂/∂x [ (G ∂f/∂x − F ∂f/∂y) / √(EG − F²) ] + ∂/∂y [ (E ∂f/∂y − F ∂f/∂x) / √(EG − F²) ] = 0.   (*)

For isothermal coordinates on R (E = G, F = 0), equation (*) reduces to the Laplace equation. The left-hand side of equation (*) divided by √(EG − F²) is called the Laplace-Beltrami operator.

Regular solutions f of the Laplace-Beltrami equation are generalizations of harmonic functions and are usually called harmonic functions on the surface R (cf. Harmonic function). These solutions are interpreted physically like the usual harmonic functions, e.g. as the velocity potential of the flow of an incompressible liquid flowing over the surface R. The Dirichlet principle is valid for them: Among all functions with the same boundary values, a harmonic function minimizes the Dirichlet integral

    D[f] = ∬ ∇₁f √(EG − F²) dx dy,

where

    ∇₁f = (E f_y² − 2F f_x f_y + G f_x²) / (EG − F²)

is the first Beltrami differential parameter, which is a generalization of the square of the gradient.

For generalizations of the Laplace-Beltrami equation to Riemannian manifolds of higher dimensions see Laplace operator.

[1] E. Beltrami, "Ricerche di analisi applicata alla geometria", Opere Mat., 1, Milano (1902) pp. 107-198
[2] M. Schiffer, D.C. Spencer, "Functionals of finite Riemann surfaces", Princeton Univ. Press (1954)

How to Cite This Entry: Laplace-Beltrami equation. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Laplace%E2%80%93Beltrami_equation&oldid=22707
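The isothermal-coordinate reduction stated above can be checked numerically: with E = G = λ(x, y) and F = 0, the divergence-form left-hand side of (*) collapses to f_xx + f_yy. The following sketch (illustrative code, not part of the entry) implements (*) with nested central differences and evaluates it for the planar harmonic function f = x² − y² under the metric λ = e^x:

```python
import math

def beltrami(f, E, F, G, x, y, h=1e-4):
    """Divergence-form left-hand side of (*) for a metric E, F, G, by finite differences."""
    W = lambda x, y: math.sqrt(E(x, y) * G(x, y) - F(x, y) ** 2)
    fx = lambda x, y: (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = lambda x, y: (f(x, y + h) - f(x, y - h)) / (2 * h)
    g1 = lambda x, y: (G(x, y) * fx(x, y) - F(x, y) * fy(x, y)) / W(x, y)
    g2 = lambda x, y: (E(x, y) * fy(x, y) - F(x, y) * fx(x, y)) / W(x, y)
    return ((g1(x + h, y) - g1(x - h, y)) + (g2(x, y + h) - g2(x, y - h))) / (2 * h)

lam = lambda x, y: math.exp(x)           # isothermal metric: E = G = e^x, F = 0
val = beltrami(lambda x, y: x * x - y * y, lam, lambda x, y: 0.0, lam, 0.3, -0.2)
# val is numerically zero: x^2 - y^2 is harmonic, and (*) reduces to f_xx + f_yy
```

The metric factor cancels exactly in the isothermal case, which is precisely why harmonic functions on a surface can be studied through conformal (isothermal) coordinates.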
Concave minimization: Theory, Applications and Algorithms, in Handbook of Global Optimization

Results 1 - 10 of 13

- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007 — Cited by 68 (13 self)

Abstract—In this paper, based on ideas from lossy data coding and compression, we present a simple but effective technique for segmenting multivariate mixed data that are drawn from a mixture of Gaussian distributions, which are allowed to be almost degenerate. The goal is to find the optimal segmentation that minimizes the overall coding length of the segmented data, subject to a given distortion. By analyzing the coding length/rate of mixed data, we formally establish some strong connections of data segmentation to many fundamental concepts in lossy data compression and rate-distortion theory. We show that a deterministic segmentation is approximately the (asymptotically) optimal solution for compressing mixed data. We propose a very simple and effective algorithm that depends on a single parameter, the allowable distortion. At any given distortion, the algorithm automatically determines the corresponding number and dimension of the groups and does not involve any parameter estimation. Simulation results reveal intriguing phase-transition-like behaviors of the number of segments when changing the level of distortion or the amount of outliers. Finally, we demonstrate how this technique can be readily applied to segment real imagery and bioinformatic data.

Index Terms—Multivariate mixed data, data segmentation, data clustering, rate distortion, lossy coding, lossy compression, image segmentation, microarray data clustering.
1 - Journal of Global Optimization "... Abstract. This paper presents, within a unified framework, a potentially powerful canonical dual transformation method and associated generalized duality theory in nonsmooth global optimization. It is shown that by the use of this method, many nonsmooth/nonconvex constrained primal problems in R n c ..." Cited by 17 (9 self) Add to MetaCart Abstract. This paper presents, within a unified framework, a potentially powerful canonical dual transformation method and associated generalized duality theory in nonsmooth global optimization. It is shown that by the use of this method, many nonsmooth/nonconvex constrained primal problems in R n can be reformulated into certain smooth/convex unconstrained dual problems in R m with m � n and without duality gap, and some NP-hard concave minimization problems can be transformed into unconstrained convex minimization dual problems. The extended Lagrange duality principles proposed recently in finite deformation theory are generalized suitable for solving a large class of nonconvex and nonsmooth problems. The very interesting generalized triality theory can be used to establish nice theoretical results and to develop efficient alternative algorithms for robust computations. "... Dedicated to Hoang Tuy on the occasion of his seventieth birthday Abstract. In this paper, we present some general as well as explicit characterizations of the convex envelope of multilinear functions defined over a unit hypercube. A new approach is used to derive this characterization via a related ..." Cited by 15 (1 self) Add to MetaCart Dedicated to Hoang Tuy on the occasion of his seventieth birthday Abstract. In this paper, we present some general as well as explicit characterizations of the convex envelope of multilinear functions defined over a unit hypercube. 
A new approach is used to derive this characterization via a related convex hull representation obtained by applying the Reformulation-Linearization Technique (RLT) of Sherali and Adams (1990, 1994). For the special cases of multilinear functions having coefficients that are either all +1 or all −1, we develop explicit formulae for the corresponding convex envelopes. Extensions of these results are given for the case when the multilinear function is defined over discrete sets, including explicit formulae for the foregoing special cases when this discrete set is represented by generalized upper bounding (GUB) constraints in binary variables. For more general cases of multilinear functions, we also discuss how this construct can be used to generate suitable relaxations for solving nonconvex optimization problems that include such structures. 1. - J. of Comp. Bio , 2004 "... A major obstacle in applying various hypothesis testing procedures to datasets in bioinformatics is the computation of ensuing p-values. In this paper, we define a generic branchand-bound approach to efficient exact p-value computation and enumerate the required conditions for successful application ..." Cited by 9 (1 self) Add to MetaCart A major obstacle in applying various hypothesis testing procedures to datasets in bioinformatics is the computation of ensuing p-values. In this paper, we define a generic branchand-bound approach to efficient exact p-value computation and enumerate the required conditions for successful application. Explicit procedures are developed for the entire Cressie–Read family of statistics, which includes the widely used Pearson and likelihood ratio statistics in a one-way frequency table goodness-of-fit test. This new formulation constitutes a first practical exact improvement over the exhaustive enumeration performed by existing statistical software. 
The general techniques we develop to exploit the convexity of many statistics are also shown to carry over to contingency table tests, suggesting that they are readily extendible to other tests and test statistics of interest. Our empirical results demonstrate a speed-up of orders of magnitude over the exhaustive computation, significantly extending the practical range for performing exact tests. We also show that the relative speed-up gain increases as the null hypothesis becomes sparser, that computation precision increases with increase in speed-up, and that computation time is very moderately affected by the magnitude of the computed p-value. These qualities make our algorithm especially appealing in the regimes of small samples, sparse null distributions, and rare events, compared to the alternative asymptotic approximations and Monte Carlo samplers. We discuss several established bioinformatics applications, where small sample size, small expected counts in one or more categories (sparseness), and very small p-values do occur. Our computational framework could be applied in these, and similar cases, to improve performance. Key words: p-value, exact tests, branch and bound, real extension, categorical data. - 4OR , 2004 "... Many engineering optimization problems can be formulated as nonconvex nonlinear programming problems (NLPs) involving a nonlinear objective function subject to nonlinear constraints. Such problems may exhibit more than one locally optimal point. However, one is often solely or primarily interested i ..." Cited by 9 (7 self) Add to MetaCart Many engineering optimization problems can be formulated as nonconvex nonlinear programming problems (NLPs) involving a nonlinear objective function subject to nonlinear constraints. Such problems may exhibit more than one locally optimal point. However, one is often solely or primarily interested in determining the globally optimal point. 
This thesis is concerned with techniques for establishing such global optima using spatial Branch-and-Bound (sBB) algorithms.

Cited by 8 (7 self)

This paper presents a brief review and some new developments on the canonical duality theory with applications to a class of variational problems in nonconvex mechanics and global optimization. These nonconvex problems are directly related to a large class of semi-linear partial differential equations in mathematical physics including phase transitions, post-buckling of large deformed beam models, chaotic dynamics, nonlinear field theory, and superconductivity. Numerical discretizations of these equations lead to a class of very difficult global minimization problems in finite-dimensional space. It is shown that by the use of the canonical dual transformation, these nonconvex constrained primal problems can be converted into certain very simple canonical dual problems. The criticality condition leads to dual algebraic equations which can be solved completely. Therefore, a complete set of solutions to these very difficult primal problems can be obtained. The extremality of these solutions is controlled by the so-called triality theory. Several examples are illustrated, including nonconvex constrained quadratic programming. Results show that these very difficult primal problems can be converted into certain simple canonical (either convex or concave) dual problems, which can be solved completely. Also some very interesting new phenomena, i.e. trio-chaos and meta-chaos, are discovered in post-buckling of nonconvex systems.
The author believes that these important phenomena exist in many nonconvex dynamical systems and deserve a detailed study.

- J. Industrial and Management Optimization. Cited by 5 (3 self)

Abstract. This paper presents a duality theory for solving the concave minimization problem and the nonconvex quadratic programming problem subject to nonlinear inequality constraints. By use of the canonical dual transformation developed recently, two canonical dual problems are formulated, respectively. These two dual problems are perfectly dual to the primal problems with zero duality gap. It is proved that the sufficient conditions for global minimizers and local extrema (both minima and maxima) are controlled by the triality theory discovered recently [5]. This triality theory can be used to develop certain useful primal-dual methods for solving difficult nonconvex minimization problems. Results show that the difficult quadratic minimization problem with a quadratic constraint can be converted into a one-dimensional dual problem, which can be solved completely to obtain all KKT points and the global minimizer. 1. Concave Minimization Problem and Parametrization. The concave minimization problem to be discussed in this paper is denoted as the primal problem ((P) in short).

- European Journal of Operational Research
Cited by 2 (0 self)

This article is concerned with two global optimization problems (P1) and (P2). Each of these problems is a fractional programming problem involving the maximization of a ratio of a convex function to a convex function, where at least one of the convex functions is a quadratic form. First, the article presents and validates a number of theoretical properties of these problems. Included among these properties is the result that, under a mild assumption, any globally optimal solution for problem (P1) must belong to the boundary of its feasible region. Also among these properties is a result that shows that problem (P2) can be reformulated as a convex maximization problem. Second, the article presents for the first time an algorithm for globally solving problem (P2). The algorithm is a branch and bound algorithm in which the main computational effort involves solving a sequence of convex programming problems. Convergence properties of the algorithm are presented, and computational issues that arise in implementing the algorithm are discussed. Preliminary indications are that the algorithm can be expected to provide a practical approach for solving problem (P2), provided that the number of variables is not too large.

- 1997. Cited by 2 (1 self)

Since the work of Zwart, it is known that cycling may occur in the cone splitting algorithm proposed by Tuy in 1964 to minimize a concave function over a polytope. In this paper, we show that despite this fact, Tuy's algorithm is convergent in the sense that it always finds an optimal solution.
This is also true for a variant of Tuy's algorithm proposed by Gallo, in which a cone is split into a smaller set of subcones (in terms of inclusion). We show on an example that this variant may also cycle. The transformation of both algorithms into finite ones is discussed.

- 2000. Cited by 2 (0 self)

The problem of initial probability assignment consistent with the available information about a probabilistic system is called a direct problem. Jaynes' maximum entropy principle (MaxEnt) provides a method for solving direct problems when the available information is in the form of moment constraints. On the other hand, given a probability distribution, the problem of finding a set of constraints which makes the given distribution a maximum entropy distribution is called an inverse problem. A method based on the MinMax measure to solve the above inverse problem is presented here. The MinMax measure of information, defined by Kapur, Baciu and Kesavan [1], is a quantitative measure of the information contained in a given set of moment constraints. It is based on both maximum and minimum entropy. Computational issues in the determination of the MinMax measure arising from the complexity in arriving at minimum entropy probability distributions (MinEPD) are discussed. The method to solve i...
Re: st: ttest or xtmelogit?
From: Steven Samuels <sjhsamuels@earthlink.net>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: ttest or xtmelogit?
Date: Tue, 11 Mar 2008 15:10:12 -0400

Not mentioned in -transint- is the variance-stabilizing property of the angular transformation: it has asymptotic variance 1/(4n), which is not a function of p (Anscombe, 1948). If the observed proportion is r/n, Anscombe showed that the arcsine of [(r + 3/8)/(n + 3/4)]^.5 is even better at stabilizing the variance for moderate sample sizes. The second version has variance 1/(4n + 2). The arcsine transformation used to be recommended because transformed proportions could be analyzed via standard ANOVA programs. I once found it useful in a variance components analysis. The 'error' variance was a mixture of a between-sample and a within-sample (binomial) variance. With the arcsine transformation, I could subtract out the part attributable to binomial variation.

FJ Anscombe 1948. The transformation of Poisson, binomial, and negative-binomial data. Biometrika 35:246-254.

On Mar 10, 2008, at 6:02 PM, Nick Cox wrote:

By arcsin I guess you mean the angular transformation (arcsine of square root). Its use seems to have faded dramatically in recent years. Tukey showed that this is very close to p^0.41 - (1 - p)^0.41. That makes it weaker than the logit. My guess is that it would be an unusual dataset in which the angular was much better than leaving data as is and also much better than the logit. It could happen, but it seems to be rare. The Tukey reference is given in -transint- from SSC.

David Airey wrote:

Maybe I should not have said it was pilot data! I won't disagree, but when the cluster number is too small (< 20) to invoke xtgee or xtmelogit on the observed yes/no data, or glm on the summary statistics with binomial family and logit link, what do you do?
It seems to me there is a sample size between 10 and 30 clusters of yes/no data that may be better suited to some of the older approaches like arcsine-transformed proportions and then ttest or ANOVA/regress. I guess that was my question.

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
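Anscombe's variance-stabilization claim discussed in the thread is easy to check by simulation: the variance of the raw proportion p(1-p)/n moves with p, while the variance of arcsin(sqrt(r/n)) stays near 1/(4n) regardless of p. A minimal sketch in plain Python (sample sizes and replication counts are illustrative choices of my own):

```python
import math
import random

random.seed(1)

def simulated_variances(p, n, reps=20000):
    """Variance of the raw proportion r/n versus the angular
    transform arcsin(sqrt(r/n)), over `reps` binomial draws."""
    raw, ang = [], []
    for _ in range(reps):
        r = sum(random.random() < p for _ in range(n))
        raw.append(r / n)
        ang.append(math.asin(math.sqrt(r / n)))
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return var(raw), var(ang)

n = 50
# raw variance p(1-p)/n changes with p; angular variance stays near 1/(4n)
v_raw_half, v_ang_half = simulated_variances(0.5, n)
v_raw_tenth, v_ang_tenth = simulated_variances(0.1, n)
```

At n = 50 the raw variances differ by nearly a factor of three between p = 0.5 and p = 0.1, while both angular variances sit close to 1/(4n) = 0.005, which is exactly the property that once made the transform convenient for standard ANOVA.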
Geometry/Circles/Tangents and Secants
From Wikibooks, open books for an open world

A tangent is a line in the same plane as a given circle that meets that circle in exactly one point. That point is called the point of tangency. A tangent cannot pass through a circle; a line that does meets the circle in two points and is a secant instead. A secant is a line containing a chord. A common tangent is a line tangent to two circles in the same plane. If the tangent does not intersect the segment connecting the centers of the circles, it is an external tangent. If it does, it is an internal tangent. Two circles are tangent to one another if, in a plane, they intersect the same tangent line at the same point.
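In coordinates these definitions reduce to a single comparison: a line is tangent when the perpendicular distance from the circle's center equals the radius, a secant when that distance is smaller, and misses the circle when it is larger. A short illustrative sketch (the function name is my own):

```python
import math

def line_circle_relation(a, b, c, cx, cy, r):
    """Classify the line a*x + b*y + c = 0 against the circle with
    center (cx, cy) and radius r, via the perpendicular distance
    from the center to the line."""
    d = abs(a * cx + b * cy + c) / math.hypot(a, b)
    if math.isclose(d, r):
        return "tangent"      # exactly one common point
    return "secant" if d < r else "no intersection"

# unit circle at the origin
t = line_circle_relation(0, 1, -1, 0, 0, 1)   # y = 1 touches at (0, 1)
s = line_circle_relation(0, 1, 0, 0, 0, 1)    # y = 0 cuts a chord
m = line_circle_relation(0, 1, -2, 0, 0, 1)   # y = 2 misses
```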
The Locker Problem

February 15th 2007, 01:29 PM

A new school is being opened. The school has exactly 1000 lockers and 1000 students. On the first day of school, the students meet outside the building and agree on the following plan: The first student will enter the school and open all of the lockers. The second student will enter and close every locker with an even number. The third student will then "reverse" every third locker; that is, if the locker is closed, he will open it; if it's open, he will close it. The fourth student will then "reverse" every fourth locker; and so on until all 1000 students in turn have entered the building and "reversed" the proper lockers. Which lockers will finally remain open?

February 15th 2007, 02:16 PM

I thought up what seemed to be the answer fairly quickly, but it was a little longer before I came up with a decent reason why it was true. 1) Any number can be written uniquely as a product of prime numbers. 2) Let's ignore the first person who opens every door, and we might as well start with them all open. We can look at the problem in cases:

Case 1 - The door has no duplicated factors. Let's take a locker number that is made of 2 primes, e.g. 6 = 2x3. This will have persons 2, 3, and 2x3 changing it, so it will be closed. Now if we add another factor onto it, e.g. 30 = 2x3x5, we will have all the same people changing this door (+3), the product of the new factor with the people from before (+3), and the new factor itself (+1). It is a little confusing, but you can see in this way there still has to be an odd number of people, and so this will also be closed.

Case 2 - The door has a duplicated factor but another as well, e.g. 12 = 2²x3. This will have persons 2, 3, 2x2, 2x3, 2x2x3 changing it, so it will be closed. Now add another factor again, e.g. 60 = 2²x3x5. Again this will have all the people from before (+5), the product of the new factor and the people from before (+5), and the new factor (+1).
Again an odd number, so closed.

Case 3 - The door only has one prime factor, e.g. 3² = 9. This will only have persons 3 and 9 changing it, so it will be open. Adding another factor, e.g. 27 = 3³, will only give you one extra person (person 27), so now the door is closed. This pretty much shows that the only doors left open are square numbers.

February 15th 2007, 07:06 PM

It has been discussed before.
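The conclusion argued above, that only the perfect squares stay open because only they have an odd number of divisors, is easy to confirm with a short brute-force simulation (a sketch of my own, not from the thread):

```python
def open_lockers(n=1000):
    """Simulate the procedure: student k reverses every k-th locker.
    Returns the lockers left open at the end."""
    is_open = [False] * (n + 1)          # index 0 unused; all start closed
    for student in range(1, n + 1):
        for locker in range(student, n + 1, student):
            is_open[locker] = not is_open[locker]
    return [k for k in range(1, n + 1) if is_open[k]]

remaining = open_lockers(1000)
```

For 1000 lockers this yields exactly the 31 perfect squares 1, 4, 9, ..., 961.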
continuous function

Suppose the countable subspace $D$ is dense in the separable Tychonoff space $X$ and $f$ is a continuous function from $D$ to the closed unit interval. What are some conditions on $X$ or $D$ which make $f$ continuously extendable over $X$?

Tags: gn.general-topology, soft-question

Are you aware of Tietze's Extension Theorem? It's related, but with a $T_4$ space $X$ rather than $T_{3.5}$. See: en.wikipedia.org/wiki/Tietze_extension_theorem – David White Oct 20 '11 at 4:12

But here $D$ is dense, so Tietze applies only if $D=X$... – Pietro Majer Oct 20 '11 at 8:30

4 Answers

A relevant paper is Taĭmanov, A. D., On extension of continuous mappings of topological spaces. (Russian) Mat. Sbornik N.S. 31(73), (1952), 459–463. The MR review of this paper is:

Let $S$ be a $T_1$-space, $A$ a dense subspace of $S$, and $R$ a compact Hausdorff space. Let $f$ be a continuous mapping of $A$ into $R$. Then $f$ admits a continuous extension over $S$ if and only if for all disjoint closed subsets $A_1,A_2$ of $R$, the relation $(f^{-1}(A_1))^-\cap(f^{-1}(A_2))^-=0$ obtains (closure in $S$). From this result, a theorem of Yu. M. Smirnov [Uspehi Matem. Nauk 6, no. 4(44), 204--206 (1951)] is easily proved, as well as a theorem of Vulih [Mat. Sbornik N.S. 30(72), 167--170 (1952); MR0048790 (14,70c)]. A final corollary is a special case of a theorem widely known and recently published by Katětov [Fund. Math. 38, 85--91 (1951); MR0050264 (14,304a)].

I remembered this because as a graduate student I used it to give a (I think new at the time) proof that every compact Hausdorff space is a continuous image of a totally disconnected compact Hausdorff space (which, in turn, I use these days to reduce proving the Riesz representation theorem for $C(K)$ to the case where the compact space $K$ is totally disconnected).

Oh, thanks! It is very beautiful! – Paul Oct 21 '11 at 0:43

The criterion for "EVERY continuous map from $D$ to $[0, 1]$ has a continuous extension to $X$" is that any two disjoint zerosets in $D$ have disjoint closures in $X$. You can find this in Chapter 6 of Gillman and Jerison's classic "Rings of Continuous Functions". They also consider the "local problem" of continuously extending a single map at length in some of the exercises; e.g., given $f:D\rightarrow Y$ (not necessarily $Y=[0, 1]$), Exercise 6G characterizes the largest subspace of $X$ to which $f$ can be continuously extended in terms of $z$-filters.

Thank you very much. But I can't find the book "Rings of Continuous Functions". Could you tell me where I can find this book? – Paul Oct 21 '11 at 0:29

You'll probably have to go through some sort of interlibrary loan, although Amazon lists some used copies available for purchase. – Todd Eisworth Oct 21 '11 at 19:43

The situation is analogous to the particular case of $X$ a metric space, for any Tychonoff space $X$ is uniformisable, and a real-valued function $f$ on a dense subset $D$ of a uniform space $X$ is certainly continuously extendable to $X$ provided it is uniformly continuous. This is also a necessary condition if $X$ is compact, for any continuous function on a compact uniform space is always uniformly continuous.

Does "Any Tychonoff space X is uniformisable" mean that every Tychonoff space is a uniform space? – Paul Oct 20 '11 at 9:34

Pietro provided a link to Wikipedia in his answer. But the answer is essentially yes; the topology on $X$ will be compatible with a construction that makes $X$ into a uniform space. – Christopher A. Wong Oct 20 '11 at 9:47

Yes, it admits a uniform structure (not unique in general). So the complete answer is: $f$ is uniformly continuous on $D$ wrto one such uniform structures. – Pietro Majer Oct 20 '11

To be specific, $f$ is extendable if and only if it is uniformly continuous wrt (the restriction to $D$ of) the fine uniformity on $X$. – Emil Jeřábek Oct 20 '11 at 13:35

With respect to – Richard Rast Oct 21 '11 at 1:38

At least in the case of a metric space $X$, such a function $f$ extends from $D$ to all of $X$ if and only if $f$ maps Cauchy sequences to Cauchy sequences (note that this is a weaker condition than uniform continuity). As mentioned by Pietro, for your general Tychonoff space $X$, you make it a uniform space, so I think you can generalize my statement above to the following: $f$ extends if and only if $f$ maps Cauchy nets to Cauchy nets.

Also note that a function maps Cauchy nets to Cauchy nets if and only if it is uniformly continuous. – Pietro Majer Oct 20 '11 at 10:32

@Pietro: Are you sure? For example, every continuous function from $\mathbb R$ to any uniform space also maps Cauchy nets to Cauchy nets. (Let $\{x_a\}_{a\in D}$ be a Cauchy net. There is $a_0$ such that $|x_a-x_{a_0}|\le1$ for every $a\ge a_0$, hence all such $x_a$ are confined to the compact interval $I=[x_{a_0}-1,x_{a_0}+1]$. Then $f$ is uniformly continuous on $I$, which implies that $\{f(x_a)\}_{a\in D}$ is Cauchy.) – Emil Jeřábek Oct 20 '11 at 13:24

OTOH, Christopher's condition is clearly not necessary, even in the metric case. For instance, if $D=X$, then every continuous $f$ trivially extends, but in general does not map Cauchy sequences to Cauchy sequences (e.g., take $X=\mathbb Q$, $f(x)=0$ for $x<\pi$, $f(x)=1$ for $x>\pi$). – Emil Jeřábek Oct 20 '11 at 13:31

@Emil: oh yes, my distraction! – Pietro Majer Oct 20 '11 at 14:46

@Emil: In the metric case, I suppose I was thinking of the process of completion as uniquely determined by Cauchy sequences. So perhaps what I really want is that $f$ maps Cauchy sequences to Cauchy sequences, whenever these Cauchy sequences converge to a point $p \notin D$, $p \in X$. – Christopher A. Wong Oct 20 '11 at 16:27
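Emil's counterexample from the comment thread above can be made concrete numerically: on $X=\mathbb Q$ the function below is continuous (every rational is bounded away from $\pi$), yet it sends a Cauchy sequence of rationals converging to $\pi$ to the non-Cauchy sequence $0,1,0,1,\dots$, so no continuous extension to $\mathbb R$ exists. A quick illustrative sketch (the particular sequence construction is my own):

```python
import math
from fractions import Fraction

def f(q):
    """Continuous on the rationals (pi is irrational, so no rational
    sits on the cut point), but not continuously extendable to R."""
    return 0 if q < math.pi else 1

# rational Cauchy sequence converging to pi, alternating sides of pi:
# the offset (-1)**k forces term k below pi for odd k, above for even k
seq = [Fraction(round(math.pi * 10 ** k + (-1) ** k), 10 ** k)
       for k in range(1, 12)]
images = [f(q) for q in seq]
```

The `seq` terms converge (term k is within 1.5/10^k of pi) while `images` keeps oscillating, which is precisely the failure of the Cauchy-sequence criterion discussed in the answers.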
San Ramon Algebra 1 Tutor Find a San Ramon Algebra 1 Tutor ...Please get in touch with me – I will be very happy to help you succeed.I tutor AP Statistics and college level introductory statistics courses. Many students, who are new to statistics, think of it as “pure math” type of a subject; however there is a lot of real world application in statistics, ... 14 Subjects: including algebra 1, calculus, statistics, geometry ...I work well with students who want and are dedicated to learning. I don't portray myself as the mean teacher but more as a friend or peer who is there to help. I think that I build a much better relationship with students when I don't force authority over them. 8 Subjects: including algebra 1, chemistry, reading, elementary (k-6th) ...I had students who needed help in pre-algebra, algebra, trigonometry, geometry, precalculus, calculus, differential equations and linear algebra. I also have extensive tutoring experience in high school physics and below. Moreover, I got qualified over the last two years of my higher education to be grading papers from different lower division astronomy classes. 10 Subjects: including algebra 1, physics, geometry, algebra 2 ...Being a math tutor for more than 20 years, I am committed to taking the important responsibility of providing my students with extra practice, guidance, and personal encouragement. I also nurture my students' self-confidence. With support, my student(s) will be kept interested in learning Geometry concepts and excel in this class. 17 Subjects: including algebra 1, calculus, statistics, geometry ...While I do have a 24 hour cancellation period, I will work with parents and students to provide makeup sessions so as to maintain academic progress. I look forward to working with you and making a difference to help you achieve your goals!Along with my degree in Education, I also have an ESL minor. I have experience teaching English abroad as well as online. 
16 Subjects: including algebra 1, reading, geometry, ESL/ESOL
Affinely invariant matching methods with ellipsoidal distributions. (English) Zbl 0761.62065

Consider treated and control populations of sizes $N_t$ and $N_c$ respectively, and a set of matching variables $X$ recorded on all units. Due to cost considerations, outcomes and additional covariates are recorded only for matched subsamples of sizes ${n}_{t}\le {N}_{t}$ and ${n}_{c}\le {N}_{c}$, chosen such that the distributions of $X$ among the matched units are more similar than for random subsamples. The standard matched-sample estimator of the treatment's effect on an outcome is the difference of means based on the matched units. The bias of this estimator is smaller than that of the corresponding difference based on random subsamples. Consider ellipsoidal distributions, for which there exists a linear transformation of the variables that results in a spherically symmetric distribution for the transformed variables. Matching methods based on population or sample inner products, such as discriminant matching, Mahalanobis metric matching, or methods using propensity scores based on logistic regression estimators, are called affinely invariant and are used with ellipsoidal distributions. Furthermore, canonical forms for conditionally ellipsoidal distributions using conditionally affinely invariant matching methods are considered.

62H05 Characterization and structure theory (Multivariate analysis)
62D05 Statistical sampling theory, sample surveys
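Mahalanobis metric matching, one of the affinely invariant methods named in the abstract, pairs each treated unit with the nearest control under the Mahalanobis metric. A minimal 2-d sketch of the idea (greedy nearest-neighbour pairing; the function names and the hand-supplied inverse covariance are illustrative assumptions, not the paper's procedure):

```python
def mahalanobis_sq(u, v, s_inv):
    """Squared Mahalanobis distance between 2-d points u and v, given
    the inverse covariance matrix S^-1 = [[a, b], [b, c]] as (a, b, c)."""
    dx, dy = u[0] - v[0], u[1] - v[1]
    a, b, c = s_inv
    return a * dx * dx + 2 * b * dx * dy + c * dy * dy

def greedy_match(treated, controls, s_inv):
    """For each treated unit, take the closest remaining control."""
    pool = list(controls)
    pairs = []
    for t in treated:
        best = min(pool, key=lambda u: mahalanobis_sq(t, u, s_inv))
        pool.remove(best)
        pairs.append((t, best))
    return pairs

# with the identity covariance this is plain Euclidean matching
pairs = greedy_match([(0.0, 0.0)], [(3.0, 0.0), (1.0, 1.0)], (1.0, 0.0, 1.0))
```

Because the metric is built from the (inverse) covariance, any affine transformation of the data that is absorbed into the covariance leaves the matches unchanged, which is the invariance the paper exploits.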
Understanding Noise Terms In Electronic Circuits

This article is about intrinsic noises—that is, noises that arise within an electronic circuit itself, making the response of the circuit to external inputs less than ideal. It is intended for readers who know, in general terms, what an amplifier and an analog-to-digital converter (ADC) are intended to do. The terms discussed include white noise, pink noise, popcorn noise, shot noise, avalanche noise, and thermal noise, as well as noise figure and noise floor.

Noise Sources

All ICs contain inherent noise sources. In amplifiers, they can be modeled as zero-impedance voltage generators in series with the input (e[n]) and infinite-impedance current sources in parallel with the input (i[n]). The noise from these intrinsic sources has different characteristics, depending on how it arises. Other characteristics can be derived from noise. For example, an amplifier's noise figure (expressed in dB) is the amount by which the amplifier's noise exceeds the noise of a perfect amplifier in the same environment. It's generally only used in communications work.

White Noise, Pink Noise, And Noise Floor

A system's noise floor is the base level of its intrinsic noise. Anything below the noise floor is "buried in the noise." It largely comprises white or "broadband" noise. Observed in the frequency domain, it is the flat part of the circuit's intrinsic noise spectrum. Distinguished from white noise, pink noise (also called flicker, or 1/f, noise) occurs below a certain value called the corner frequency. In that lower region, it increases inversely with frequency at 3 dB/octave (see the figure).

On a spectrum analyzer, "white" noise is the flat part of a circuit's intrinsic noise spectrum. "Pink" noise is more intense at lower operating frequencies, rising out of the white noise approximately at the "corner frequency" and increasing at 3 dB/octave at lower and lower operating frequencies.

(Actually, there is no hard corner. The transition occurs gradually. You can determine the corner frequency by extending the straight-line portions of white and pink noise and noting where they cross.) Pink noise only occurs under conditions where current is flowing. It's a manifestation of charge carriers being captured and released randomly. In bipolar transistors, that's due to contamination and imperfect surface conditions at the base-emitter junction. In CMOS devices, it's primarily associated with extra electron energy states at the boundary between silicon and silicon dioxide.

In expressing white noise, it's necessary to specify bandwidth. If F is frequency, the RMS noise from F[1] to F[2] is:

E[n] = e[n] × √(F[2] − F[1])

or more simply:

E[n] = e[n] × √ΔF

If F[1] is much lower—say, 10 times lower—than F[2], then it can even be approximated as:

E[n] ≈ e[n] × √F[2]

That is, it can be approximated as simply e[n] times the square root of the upper frequency limit.

In general, the voltage or current noise spectral density in the 1/f region is:

e[n](F) = k × √(F[C]/F)

where k is the level of the "white" current or voltage noise, and F[C] is the 1/f corner frequency. Note that the corner frequency for voltage noise need not be the same as the corner frequency for current noise. Voltage noise is expressed in nV/√Hz, and current noise may be expressed in terms of μA/√Hz.

A good low-frequency, low-noise amplifier will have a corner frequency below 10 Hz. JFET devices and general-purpose op amps have values up to 100 Hz. Very fast amplifiers may achieve their high speed at the cost of a high F[C], but that doesn't matter as much in a wideband application.

One characteristic of 1/f noise is that the power content in each decade is constant. Another thing to keep in mind is that white noise has equal energy per frequency; its RMS value is set by F[2]. Pink noise has equal energy per octave; its RMS value is set by the ratio of F[2] to F[1].

To obtain a value for RMS noise, the noise spectral density can be integrated over the bandwidth of interest. In the pink noise region, the RMS noise from F[1] to F[C] would be:

E[n] = e[n] × √(F[C] × ln(F[C]/F[1]))

where e[n] is the voltage noise spectral density of the white noise, F[1] is the lowest frequency of interest in the pink noise region, and F[C] is the corner frequency.

In the white noise area above F[C], the RMS noise is given by:

E[n] = e[n] × √(F[2] − F[C])

Combining the last two equations, the total RMS noise from F[1] to F[2] would be:

E[n] = e[n] × √(F[C] × ln(F[C]/F[1]) + (F[2] − F[C]))

At higher frequencies, the term in the above equation containing the natural logarithm becomes insignificant, and the expression reduces to:

E[n] ≈ e[n] × √(F[2] − F[C])

Shot Noise

Shot (Schottky) noise is a component of white noise. It occurs whenever a current passes through PN junctions. Barrier crossings are random events, and the total current is the sum of those random elementary current pulses. The expression for shot noise is:

I[n] = √(2 × q × I[b] × ΔF)

where q is the charge on an electron (1.6 × 10^-19 C), I[b] is the bias current, and ΔF is the bandwidth in Hz. If I[b] is expressed in pA, that simplifies to:

I[n] ≈ 5.7 × 10^-4 × √(I[b] × ΔF) pA

Thermal (Johnson) Noise

Then, of course, there is thermal (or Johnson) noise, from the thermal agitation of electrons in the gain-setting resistors:

E[n] = √(4 × k × T × R × ΔF)

where k is Boltzmann's constant (1.38 × 10^-23 J/K), T is the Kelvin temperature, R is resistance in ohms, and ΔF is bandwidth in hertz. For convenience, at room temperature 4kT = 1.65 × 10^-20 W/Hz. The lower the resistance, the less the thermal noise. Halving the resistance decreases the noise by 3 dB because R is under the radical sign.

Popcorn Noise

Popcorn or "burst" noise is rarely encountered these days because parts are screened for it in the fab. It represents step-function voltage changes at the output of an amplifier caused by random current-gain transitions in bipolar transistors, which then cause variations in input offset. Since it happens, if at all, at low frequencies, it's part of 1/f noise.

Avalanche Noise

Avalanche noise is also rare. It's encountered in PN junctions operated in reverse breakdown mode. It occurs when electrons acquire enough kinetic energy under the influence of the strong electric field to create additional electron-hole pairs by colliding with the atoms in the crystal lattice. If that spills over into an avalanche effect, random noise spikes may be observed.

Combining Noises

It's rare to encounter only one source of intrinsic noise. If those sources are uncorrelated, they can be combined as the square root of the sum of the squares:

E[total] = √(e[1]² + e[2]² + ... + e[n]²)

Thus, the total effect of two noise sources that have the same energy is a 3-dB increase in total noise energy. More importantly, any noise voltage more than three or five times greater than any of the others will dominate, and the others may be neglected.

The key components of amplifier noise are the white noise, which is flat above the corner frequency, and the pink noise below the corner frequency, which increases inversely with frequency at 3 dB/octave.

Reference:
1. Analog Devices' Op Amp Applications Handbook (2006), edited by Walt Jung.

Discuss this Article

A nice summary, but to say that lower resistance equates to lower thermal noise is misleading. The noise power of a resistor is independent of the resistance. The noise can be modeled as a voltage source in series with that given resistance, and equivalently as a current source in parallel with it. So, yes, lower resistance has lower voltage noise but higher current noise. And if you have the available current to drive an op-amp gain-setting divider, indeed lower is better. But if you are converting a current into a voltage via a feedback resistor, lower is worse --- the equivalent current noise at the input is higher. I had someone tell me that he was going to substitute a smaller-valued resistor as the input termination for a magnetic transducer and thus reduce the noise. I pointed out that the attenuation of the signal, besides the alteration of the system response, would make this strategy inadvisable. – Brad Wood
{"url":"http://electronicdesign.com/analog/understanding-noise-terms-electronic-circuits","timestamp":"2014-04-17T10:04:09Z","content_type":null,"content_length":"123634","record_id":"<urn:uuid:f0628a81-0d13-445d-8a99-79c539318802>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Aisumi — Total # Posts: 19 I have no idea what does this have to do with physics but hope you can explain... A passenger on a bus moving east sees a man standing on a curb. From the passenger's perspective, the man appears to be... a)stand still. b) move west at a speed that is less than the bus'... Physics kinetic energy Okay the equation for kinetic energy is = 1/2mv^2 so how would i figure this out?? If a semi-truck decreases its speed from 10 m/s to 5 m/s, its new Kinetic Energy is: a)4 times its old Kinetic Energy b)2 times its old Kinetic Energy c)1/2 its old Kinetic Energy d)1/4 its old ... Physics acceleration Whats the equation?? I haven't done this is a while. Don't show me the answer i want to figure it out cus thats just how i roll thanks! A car accelerates uniformly from 10 m/s to 20 m/s in 4.2 seconds. What is the magnitude of its acceleration? Do i times everything to... Which of the following is not a unit of power? a)hp b)J c)W d)J/s I answered d. J/s because its measuring speed and not power. I don't know if its correct or not. so its 10*9.8*25? It that right? thanks yu:)))))))))))))))))))))))))) Help how to find work If a worker pulls a 10kg bucket up a 25 m well, how much work has he or she done? What is the GCF of -26x^5 + 4x^3 + 2x^2? how do i figure this out step by step please:) Algebra help i don't know how?(:< Algebra help Solve by completing the square x^2 - 2x - 5 = 0. Help please a)1 plus or minus√6 b)-1 plus or minus√6 c)-3 plus or minus 2√6 d)3 plus or minus 2√6 Linear equations i think If y = 4x + 14, what is the value of y ÷ -4? choices a) x+7/2 b) -x - 14 c) -x + 14 d) none of these Algebra story problem I hate story problems they r a nightmare. One poll reported that 48% of city residents were against building a new stadium. The polling service stated that the poll was accurate to within 3%. What is the minimum percent of city residents that oppose the stadium? Here are the c...
Physics question bout falling Objects that are falling toward Earth in free fall move a)faster and faster. b)slower and slower. c)at a constant velocity. d)slower then faster. Physics 1 d = v^2/(2a) d = 9.6^2/(2(9.81)) d = 4.7 m haha i answered it myself Physics 1 A rock is thrown straight upward with an initial velocity of 9.6 m/s in a location where the acceleration due to gravity has a magnitude of 9.81 m/s2. To what height does it rise? Help meh please. Physics 1 Thank you:) Physics 1 There are six books in a stack, and each book weighs 5 N. The coefficient of static friction between the books is 0.2. With what horizontal force must one push to start sliding the top five books off the bottom one? Help how do i do this????? a)1 N b)5 N c)3 N d)7 N high school How does a scientist reduce the frequency of human error and minimize a lack of accuracy? a.Take repeated measurements. b.Use the same method of measurement. c.Maintain instruments in good working order. d.all of the above
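Two of the numeric answers in these posts can be verified with a few lines of arithmetic (a quick sketch; the numbers come straight from the posts):

```python
# Rock thrown straight up at 9.6 m/s: height h = v^2 / (2 g)
v, g = 9.6, 9.81
h = v**2 / (2 * g)
print(round(h, 1))  # 4.7 m, matching the thread

# Semi-truck slowing from 10 m/s to 5 m/s: KE = 1/2 m v^2 scales with v^2,
# so halving the speed leaves 1/4 of the kinetic energy (choice d).
ke_ratio = (5 / 10) ** 2
print(ke_ratio)  # 0.25
```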
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Aisumi","timestamp":"2014-04-16T13:36:31Z","content_type":null,"content_length":"9877","record_id":"<urn:uuid:67560fde-42a2-4a1c-a9a7-13f8fa8aeddc>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
Explanation to a logical problem needed! July 6th 2008, 06:35 AM #1 Jul 2008 Explanation to a logical problem needed! There are 3 ants at 3 corners of a triangle, they randomly start moving towards another corner. What is the probability that they don't collide? All three should move in the same direction - clockwise or anticlockwise. Probability is 1/4. Can someone please explain the answer? Label the triangle $\Delta ABC$. Any bit-triple will denote the direction the ant at the vertices goes. Example: $\left( {0,1,1} \right)$ means that ant at A goes counter-clockwise while ants at B & C go clockwise. In that case ant A will collide with the ant at B. There are eight such triples. In how many will there be no collisions? I really liked this explanation, thanks! If they are not all going in the same direction then a pair must be walking towards one another and so will collide, so to avoid collision they must all go in the same direction. Each ant has two choices of direction so the probability that they all go clockwise is (1/2)(1/2)(1/2)=1/8. Similarly the probability that they all go anti-clockwise is 1/8. Hence the probability that they all go in the same direction is the probability that they all go clockwise plus the probability that they all go anti-clockwise = 1/8 + 1/8 = 1/4. July 6th 2008, 08:28 AM #2 July 6th 2008, 09:17 AM #3 July 6th 2008, 09:20 AM #4 July 17th 2008, 01:02 AM #5 July 17th 2008, 06:11 AM #6 Grand Panjandrum Nov 2005
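The bit-triple argument above can also be checked by brute force — enumerate all 2³ direction choices and count the collision-free ones (a quick sketch):

```python
from itertools import product

# 0 = counter-clockwise, 1 = clockwise; one choice per ant.
outcomes = list(product([0, 1], repeat=3))

# No collision exactly when all three ants pick the same direction.
safe = [o for o in outcomes if len(set(o)) == 1]

probability = len(safe) / len(outcomes)
print(probability)  # 0.25
```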
{"url":"http://mathhelpforum.com/statistics/43123-explanation-logical-problem-needed.html","timestamp":"2014-04-19T18:53:08Z","content_type":null,"content_length":"50326","record_id":"<urn:uuid:cecb3698-9c4a-45fd-a24b-dd78ac3cce0c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of Microstate (statistical mechanics) In statistical mechanics, a microstate describes a specific, detailed microscopic configuration of a system that the system visits in the course of its thermal fluctuations. In contrast, the macrostate of a system refers to its macroscopic properties such as its temperature and pressure. In statistical mechanics, a macrostate is characterized by a probability distribution on a certain ensemble of microstates. This distribution describes the probability of finding the system in a certain microstate as it is subject to thermal fluctuations. Let us now turn to the case of large systems: even if those systems are theoretically able to fluctuate between very different microstates, observing such a fluctuation becomes less and less likely as the size of the system increases. This is the basis of the thermodynamic limit. In this limit, the microstates visited by a system during its fluctuations all have the same bulk (or macroscopic) properties. Microscopic definitions of thermodynamic concepts The definitions of this section link the thermodynamic properties of a system to its distribution on its ensemble (or set) of microstates. Note that all definitions and expressions of this section are valid even far away from thermodynamic equilibrium. In this article we will consider a system which is distributed on an ensemble of N microstates. $p_i$ is the probability associated to the microstate i, and $E_i$ is its energy. Here microstates form a discrete set, which means we are working in quantum statistical mechanics, and $E_i$ is an energy level of the system. Internal energy The internal energy is the mean of the system's energy $U = \langle E \rangle = \sum_{i=1}^N p_i E_i$ This definition is the microscopic counterpart of the first law of thermodynamics. Entropy The absolute entropy exclusively depends on the probabilities of the microstates. 
Its definition is the following: $S = -k_B \sum_i p_i \ln p_i$, where $k_B$ is Boltzmann's constant. Entropy evolves according to the second law of thermodynamics. The third law of thermodynamics is consistent with this definition, since an absolute entropy of 0 means that the macrostate of the system reduces to a single microstate. Heat and work Work is the energy transfer associated with the effect of an ordered, macroscopic action on the system. If this action acts very slowly then the Adiabatic theorem implies that this will not cause a jump in the energy level of the system. The internal energy of the system can only change due to a change of the energies of the system's energy levels. On the other hand heat is the energy transfer associated with a disordered, microscopic action on the system, associated with jumps in energy levels of the system. The microscopic definitions of heat and work are the following: $\delta W = \sum_{i=1}^N p_i\,dE_i$ $\delta Q = \sum_{i=1}^N E_i\,dp_i$ So that $dU = \delta W + \delta Q$ Warning: the two above definitions of heat and work are among the few expressions of statistical mechanics where the sum corresponding to the quantum case cannot be converted into an integral in the classical limit of a microstate continuum. The reason is that classical microstates are usually not defined in relation to a precise associated quantum microstate, which means that when work changes the energy associated to the energy levels of the system, the energy of classical microstates doesn't follow this change.
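The definitions above are straightforward to evaluate for a small discrete system. The sketch below checks two standard consequences; the two-level probabilities and energies are illustrative assumptions, and Boltzmann's constant is set to 1:

```python
import math

k_B = 1.0  # work in units where Boltzmann's constant is 1

def internal_energy(p, E):
    """U = sum_i p_i E_i"""
    return sum(pi * Ei for pi, Ei in zip(p, E))

def entropy(p):
    """S = -k_B sum_i p_i ln p_i (terms with p_i = 0 contribute nothing)."""
    return -k_B * sum(pi * math.log(pi) for pi in p if pi > 0)

# A uniform distribution over N microstates gives S = k_B ln N ...
N = 8
assert abs(entropy([1 / N] * N) - k_B * math.log(N)) < 1e-12

# ... and a macrostate that reduces to a single microstate has S = 0,
# consistent with the third law as stated above.
assert entropy([1.0, 0.0]) == 0.0

U = internal_energy([0.25, 0.75], [0.0, 2.0])  # = 1.5
```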
{"url":"http://www.reference.com/browse/Microstate+(statistical+mechanics)","timestamp":"2014-04-21T05:42:14Z","content_type":null,"content_length":"79336","record_id":"<urn:uuid:cd21e678-77b2-44d9-b183-0a8ca8edbe44>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
Orbital radius and speed of a satellite? November 22nd 2008, 12:43 PM #1 Orbital radius and speed of a satellite? Radio and t.v. signals are sent from continent to continent by "bouncing" them from geosynchronous satellites. These satellites circle the Earth once every 24 hours, and so if the satellite circles eastward above the equator, it always stays over the same spot on the earth because the earth is rotating at the same rate. Weather satellites are also designed to hover in this way. What is the orbital radius for such a satellite? What is its orbital speed? I know that the period T = 86,400 seconds/1 revolution and I know a few centripetal acceleration equations...but I still don't know what to do here... I tried solving 2 pi r/ T = sq. root ((G × mass of earth)/r) but it didn't work out... $m$ = satellite mass in kg $M$ = earth mass in kg $G$ = universal gravitational constant $\omega$ = angular speed of the satellite in rad/sec $T$ = orbital period in seconds $r$ = orbital radius from the earth's center $F_c = mr\omega^2$ $\frac{GMm}{r^2} = mr\omega^2$ $\frac{GM}{\omega^2} = r^3$ since $\omega = \frac{2\pi}{T}$ ... $\frac{GMT^2}{4\pi^2} = r^3$ $\sqrt[3]{\frac{GMT^2}{4\pi^2}} = r$ So I substituted values into the last equation for r with the cube root I got 4397375 m, but the answer my teacher has is 4.22 x 10^7 m... And also...there is no specific weight for the satellite... I get $4.23 \times 10^7$ meters what values are you using? btw ... mass of the satellite doesn't matter I used G = 6.67 x 10^-11 N m^2/kg^2 M = 5.98 x 10^24 kg T = 86400s/1 rev Am I wrong with T? Those numbers should work. 
$r = \sqrt[3]{\frac{(6.67 \times 10^{-11} \ \text{m}^3 \text{kg}^{-1} \text{s}^{-2}) (5.98 \times 10^{24} \ \text{kg})(86400 s)^2}{4\pi^2}} \approx 4.23 \times 10^{7} \ \text{m}$ oh i see my problem now, i didnt square the period T November 22nd 2008, 12:59 PM #2 November 23rd 2008, 11:19 AM #3 November 23rd 2008, 11:25 AM #4 November 23rd 2008, 12:21 PM #5 November 23rd 2008, 12:33 PM #6 November 23rd 2008, 06:47 PM #7
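The thread never gets back to the second half of the question — the orbital speed. Both answers drop out of the final formula; a quick check using the same constants as the thread:

```python
import math

G = 6.67e-11  # m^3 kg^-1 s^-2
M = 5.98e24   # kg, mass of the Earth
T = 86400.0   # s, the 24-hour period used in the thread

# r^3 = G M T^2 / (4 pi^2)
r = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"r = {r:.3e} m")   # ≈ 4.22e7 m, matching the teacher's answer

# Orbital speed is just circumference over period: v = 2 pi r / T
v = 2 * math.pi * r / T
print(f"v = {v:.0f} m/s")  # ≈ 3.07 km/s
```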
{"url":"http://mathhelpforum.com/math-topics/61007-orbital-radius-speed-satellite.html","timestamp":"2014-04-16T16:08:48Z","content_type":null,"content_length":"51505","record_id":"<urn:uuid:09ed6093-c4f2-4d46-89ec-e1edbe970087>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Online Etymology Dictionary 1550s, from Middle French cube (13c.) and directly from Latin cubus, from Greek kybos "a cube, a six-sided die, vertebra," perhaps from PIE root *keu(b)- "to bend, turn." Mathematical sense is from 1550s in English (it also was in the ancient Greek word: the Greeks threw with three dice; the highest possible roll was three sixes). 1580s in the mathematical sense; 1947 with meaning "cut in cubes," from cube (n.). The Greek verbal derivatives from the noun all referred to dice-throwing and gambling. Related: Cubed; cubing.
{"url":"http://etymonline.com/index.php?term=cube&allowed_in_frame=0","timestamp":"2014-04-20T10:50:25Z","content_type":null,"content_length":"8371","record_id":"<urn:uuid:15674509-e726-4ee1-af70-68dcb9ec7f51>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
interpolation sort 07-12-2002 #1 Registered User Join Date Jul 2002 construct a list of n pseudorandom numbers between 0 and 1. Suitable values for n are 10 (for debugging) and 500 (for comparing the results with other methods). Write a program to sort these numbers into an array via the following interpolation sort. First, clear the array (to all 0). for each number from the old list, multiply it by n, take the integer part, and look in that position of the table. if that position is 0, put the number there. if not, move left or right (according to the size of the current number), moving the entries in the table over if necessary to make room (as in the fashion of insertion sort). Show that your algorithm will really sort the numbers correctly. must be in C using Microsoft Visual C++ compiler. turn in: and any .h files must be contiguous If any one have this file, please feel free to email it to saigonara@yahoo.com Give me a break. Do your own damn homework. No one is going to write those files for you, you lazy bastard. Hope is the first step on the road to disappointment. Being able to design an algorithm is important and proof that your algorithm is correct is a basic element of programming. It is not always easy, but you should learn it. BTW, the algorithm is already given, so you only need to implement it. Originally posted by quzah Give me a break. Do your own damn homework. No one is going to write those files for you, you lazy .... Hey, Quzah... stop mincing your words, and just say what you mean... When all else fails, read the instructions. If you're posting code, use code tags: [code] /* insert code here */ [/code] run this #include <stdio.h> int main() int poylg[] = {80,105,115,115,32,79,102,102,32,89,111,117, int i, x; for(i=0; i<22;i++) rewind (stdin); return 0; } >and just say what you mean I think that kind of language would cause the survivors to break out the smelling salts. My best code is written with the delete key.
07-12-2002 #2 07-13-2002 #3 Join Date Aug 2001 Groningen (NL) 07-13-2002 #4 07-13-2002 #5 07-14-2002 #6
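The assignment asks for C, but the algorithm itself — and the argument that it really sorts — is easier to see in a short sketch. The key invariant: the occupied entries of the table stay globally sorted at all times, because each colliding item is inserted into its (sorted) run of occupied neighbours, which then grows by one slot toward an empty position.

```python
import random
from bisect import insort

def interpolation_sort(nums):
    """Sort n values in [0, 1) into an n-slot table, per the assignment."""
    n = len(nums)
    table = [None] * n
    for x in nums:
        j = min(int(x * n), n - 1)  # interpolated "home" slot
        if table[j] is None:
            table[j] = x
            continue
        # Slot taken: find the maximal occupied run containing slot j.
        lo, hi = j, j
        while lo > 0 and table[lo - 1] is not None:
            lo -= 1
        while hi < n - 1 and table[hi + 1] is not None:
            hi += 1
        run = table[lo:hi + 1]
        insort(run, x)               # run is sorted by the invariant
        if hi < n - 1:               # empty slot on the right: shift right
            table[lo:hi + 2] = run
        else:                        # run touches the end: shift left
            table[lo - 1:hi + 1] = run
    return table

random.seed(1)
data = [random.random() for _ in range(500)]
assert interpolation_sort(data) == sorted(data)
```

Sketch of the correctness argument the assignment asks for: an item whose home slot is j only ever displaces neighbours within its own run, adjacent runs are separated in value (every element of a run has a home slot inside that run's span, so its value lies below (span end + 1)/n and at or above span start/n), and when the run touches the right end of the table a free slot on the left is guaranteed, since fewer than n items have been placed.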
{"url":"http://cboard.cprogramming.com/c-programming/21535-interpolation-sort.html","timestamp":"2014-04-24T20:48:49Z","content_type":null,"content_length":"58277","record_id":"<urn:uuid:f804967d-e42a-4997-98c8-4aae149ee9d6>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - Given a wave function at t=0, how do you find the wave function at time t? Demon117 Oct24-10 05:14 PM Given a wave function at t=0, how do you find the wave function at time t? I am given the following: A spherically propagating shell contains N neutrons, which are all in the state at t = 0. How do we find ψ(r,t)? My attempt: I have a few thoughts; could you apply the time-independent Schrodinger equation to find the energy of the state? If that is the case then you would simply tack on the factor of e^{-iωt}. Then you would know that ħω = E. . . . right? Re: Given a wave function at t=0, how do you find the wave function at time t? I think that should do it. With the TISE, and the TDSE factor, I think you can get it. arkajad Oct25-10 09:04 AM Re: Given a wave function at t=0, how do you find the wave function at time t? This will do if your state is an energy eigenstate. If it is a linear combination of energy eigenstates, then you will have to multiply each term by the appropriate phase factor. In this case summation of the new series to get a closed formula may not be easy.
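The recipe discussed in this thread — expand in energy eigenstates and attach a phase e^{-iE_n t/ħ} to each term — can be illustrated numerically. The two-level amplitudes and energies below are illustrative assumptions, in units with ħ = 1:

```python
import cmath

hbar = 1.0
c = [0.6, 0.8]  # expansion coefficients at t = 0 (normalized: 0.36 + 0.64 = 1)
E = [1.0, 2.5]  # corresponding energy eigenvalues

def coeffs_at(t):
    """Each eigenstate just picks up the phase exp(-i E_n t / hbar)."""
    return [cn * cmath.exp(-1j * En * t / hbar) for cn, En in zip(c, E)]

ct = coeffs_at(3.7)
norm = sum(abs(z) ** 2 for z in ct)
assert abs(norm - 1.0) < 1e-12  # time evolution preserves the norm

# The occupation probabilities |c_n|^2 are constants of the motion:
assert all(abs(abs(z) ** 2 - cn ** 2) < 1e-12 for z, cn in zip(ct, c))
```

For a single eigenstate the only time dependence is that overall phase, which is why |ψ|² is stationary in that case, as the last reply points out.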
{"url":"http://www.physicsforums.com/printthread.php?t=441268","timestamp":"2014-04-20T14:19:48Z","content_type":null,"content_length":"5647","record_id":"<urn:uuid:167b1bd6-372f-43ee-97d2-0fe47be3dafc>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Excel Search Function I'm working with a HUGE set of data. Column A is years from 1990 - 2010. There are multiple instances of each year; for example, there are roughly 1500 1990's in column A, each of which has data in other columns that corresponds to that specific 1990. I need a formula that will give me the cell reference of the first 1990 in column A and a cell reference for the last 1990 in column A so that I can perform calculations based on that range. I would need to be able to alter this formula so that I can then get cell references for the first and last occurrence of each year so that I can make calculations based on the ranges of each year. Here is a sample screenshot of my data for reference.
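If the data are stored as numbers and grouped by year (as a 1990–2010 block usually is), Excel's built-ins can return the first and last row directly: `=MATCH(1990,A:A,0)` gives the first match, and `=MATCH(1990,A:A,0)+COUNTIF(A:A,1990)-1` gives the last. The same logic in a quick sketch (the sample column data are made up):

```python
# Hypothetical stand-in for column A: sorted years with repeats.
col_a = [1990] * 3 + [1991] * 2 + [1992] * 4

def year_range(years, target):
    """Return 1-based (first_row, last_row) of `target`, MATCH/COUNTIF style."""
    first = years.index(target) + 1         # like MATCH(target, A:A, 0)
    last = first + years.count(target) - 1  # ... + COUNTIF(A:A, target) - 1
    return first, last

print(year_range(col_a, 1991))  # (4, 5)
```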
{"url":"http://www.dreamincode.net/forums/topic/321730-excel-search-function/page__pid__1855185__st__0","timestamp":"2014-04-19T06:15:14Z","content_type":null,"content_length":"101211","record_id":"<urn:uuid:9dde2187-744a-431b-80c9-250b31ba9961>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Higher-order, Cartesian grid based finite difference schemes for elliptic equations on irregular domains. (English) Zbl 1087.65099 Summary: Second and fourth order Cartesian grid based finite difference methods are proposed for elliptic and parabolic partial differential equations, and associated eigenvalue problems on irregular domains with general boundary conditions. Our methods are based on the continuation of a solution idea using multivariable Taylor’s expansion of the solution about selected boundary points, and the core ideas of the immersed interface method. The methods offer systematic treatment of the general boundary conditions in two- and three-dimensional domains and are directly applied to semi-discretize heat equations on irregular domains. Convergence analysis and numerical examples are presented. The validity and effectiveness of the proposed methods are demonstrated through our numerical results including computations of the eigenvalues of the associated eigenvalue problem. 65N06 Finite difference methods (BVP of PDE) 35J25 Second order elliptic equations, boundary value problems 65M06 Finite difference methods (IVP of PDE) 35K05 Heat equation 65N12 Stability and convergence of numerical methods (BVP of PDE) 65M12 Stability and convergence of numerical methods (IVP of PDE) 65N25 Numerical methods for eigenvalue problems (BVP of PDE) 35P15 Estimation of eigenvalues and upper and lower bounds for PD operators
{"url":"http://zbmath.org/?q=an:1087.65099","timestamp":"2014-04-19T06:52:21Z","content_type":null,"content_length":"22728","record_id":"<urn:uuid:82dc9fae-39ff-43ef-bbe3-3bb4d6dca5ff>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
SaaS Metrics | Viral Growth Trumps SaaS Churn Everybody wants their Internet startup to go viral. But, just what does going viral mean? In his book, The Tipping Point, Malcolm Gladwell spells out the mechanics of how ideas spread virally by modeling the roles of key individuals that he calls connectors, mavens and salesmen (highly recommended on the Chaotic Flow blogroll Worthy Reads). When it comes to the Internet, Josh Kopelman eruditely points out that you can’t go viral by bolting it on as a last minute marketing program. You must apply SaaS Top Ten Do #5 and build viral growth into the product. The aspiration of this post is not to add to the complexity of these theories of viral growth, but to uncover the simplicity of viral growth through a little mathematics. This is the second post in a series on SaaS metrics that explores the impact of viral growth in SaaS using a simple heuristic model with the goal of extending the list of SaaS Metrics Rules of Thumb started in the first post in the series regarding SaaS churn. The mechanics of viral growth vary greatly by product, customer, market, and even culture. But, the mathematics pretty much boil down to the singular idea expressed so well in the kitschy Faberge shampoo commercial from the 70’s and 80’s. Viral growth is customer growth that is proportionate to the number of customers. C[n + 1] = ( 1 + g ) x C[n] All the connectors, mavens, salesmen and friends are rolled into the little “g” (for growth rate) in the formula above, which states that the number of customers in the time period n + 1 is equal to the number of customers in the time period n, plus a multiple of those same customers. You can think of g as the percentage of friends who actually told two friends who actually then went out and bought some shampoo divided by the amount of time it took them all to complete this circuit. Like churn, viral growth scales with the number of customers. 
When the viral growth rate exceeds the churn rate, growth explodes through the churn limit. If you read the previous SaaS metrics post on SaaS churn, you might recognize this formula, because it is identical to the churn formula only the negative churn rate -a has been replaced with the positive viral growth rate g. Thankfully, we can skip over the algebra this time and jump to the solution, simply by replacing -a with (g-a). This quick sleight-of-hand gives us the formula for the number of customers in the time period n, C[n], that incorporates viral growth as well as churn. C[n] = b⁄(g-a) x ( ( 1 + g -a )^n -1) In this formula, "b" is the baseline constant customer acquisition rate prior to either viral growth or SaaS churn kicking in. The mirror-like relationship between viral growth and SaaS churn in the formula above leads us to our next SaaS metrics rule of thumb. SaaS Metrics Rule-of-Thumb #3 – Viral Growth Trumps SaaS Churn The previous SaaS Metrics Rule-of-Thumb #2 claimed that in order to break through the churn limit, new customer acquisition growth must outpace churn. Because churn increases in direct proportion to the number of customers, the surest approach is to drive growth at a higher rate that also increases in proportion to the number of customers, i.e., virally. Moreover, investors generally expect companies to increase revenue on a percentage basis year over year. Holding products and prices constant, this again requires viral growth of your customer base. Viral growth can come from many sources, but I like to classify it into the following three distinct stages. Stage 1 Viral Growth – Brute Force Sales and Marketing (small g) In any given industry, most companies will spend a rather fixed percentage of revenue on sales and marketing, regardless of the size of the company.
When the effectiveness of these efforts scales in proportion to the level of spending (which is clearly not always the case), they will drive growth at a rate proportionate to revenue. In a SaaS business, this will also be proportionate to the number of customers. Hence, it is possible to drive growth in proportion to the number of customers simply through nuts and bolts sales and marketing. This stage is arguably not viral growth in the usual marketing sense of the words, but it does meet the strict mathematical definition where customer growth is proportionate to the number of customers. This in fact is the beautiful irony of the Faberge shampoo ad, which was a very effective nuts and bolts marketing campaign, but was probably less successful at getting even one friend to tell one friend about the product, let alone two. Stage 2 Viral Growth – Customer Advocacy (modest g) When you are successful at driving word-of-mouth and getting your customers to recommend purchasing your product to new prospects you reach stage 2, the most commonly understood variation of viral growth made famous by the Faberge ad. SaaS customers hold great potential to be advocates, because they confirm their commitment day after day as they continue to use your product and month after month after month as they send in their renewal payments. Driving viral growth through active customer engagement that turns customers into advocates and advocates into evangelists should be high on the agenda of every SaaS marketing plan. Stage 3 Viral Growth – Ecosystem Buzz (BIG g) The ecosystem for your product extends beyond your customer base to include all potentially interested parties such as press, analysts, bloggers, social media hounds, partners, vendors, investors, employees, and perhaps even the general public. Most SaaS companies engage in public relations, tracking down and pitching Mr.
Gladwell’s mavens and connectors in the hopes that they will pass the word on to their respective audiences and networks. But, true and lasting stage 3 ecosystem buzz is invariably built upon strong stage 2 customer advocacy. Twitter is a great current example. If it weren’t for the dedicated and comparatively small group of Silicon Valley Web 2.0 junkies that were the early adopters of Twitter, you would not be able to follow CNN tweets today. As previously mentioned, this is the second post in this series on SaaS metrics. In the next post, I’ll start adding revenue and costs to the model to explore the impact of viral growth and SaaS churn on SaaS profitability over time. SaaS Metric Math Notes The discrete model presented above can also be treated as a continuous model represented by the linear first order differential equation: C’(t) = b – a C(t) + gC(t) with the solution: C(t) = b⁄(g-a) ( e^((g-a)t) – 1 ). The graph above is plotted using this continuous solution. In the discrete model, the factors for viral growth and SaaS churn represent change over a specified constant period of time, e.g., a month or year, whereas in the continuous approach they represent instantaneous change. For a rapidly growing SaaS business, where year over year growth hides the change from quarter to quarter or even month to month, the continuous model is better suited for estimating actual SaaS metrics. From a practical point of view, you just have to be careful not to put overly averaged growth or churn percentages into the formulas, e.g., for the SaaS churn limit b/a. The distinction really only matters for rates of 30% or more, in which case the continuous rate is given by the formula g[cont] = ln(1 + g[avg]). Going forward, I’ll be dropping the discrete model entirely and will present only graphical solutions to the continuous model, because the continuous metric model is easier to manage and produces more elegant solutions.
But, I will not be presenting the calculus required, so as not to put my loyal readers to sleep. Happy to share it with anyone who asks for it by email. Check out the rest of the SaaS Metrics Rules-of-Thumb
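The continuous solution C(t) = b⁄(g-a) ( e^((g-a)t) – 1 ) is easy to play with numerically; the sketch below checks the churn limit and the viral breakout described above. The per-month rates are illustrative assumptions:

```python
import math

def customers(t, b, a, g):
    """C(t) = b/(g - a) * (e^((g - a) t) - 1); reduces to b*t when g = a."""
    if abs(g - a) < 1e-12:
        return b * t
    return b / (g - a) * (math.exp((g - a) * t) - 1)

b, a = 100.0, 0.03  # 100 new customers/month, 3% monthly churn

# With no viral growth (g = 0), C(t) saturates at the churn limit b/a:
churn_limit = b / a
assert abs(customers(1000, b, a, 0.0) - churn_limit) < 1e-6

# With g > a, growth blows through that limit, as the graph shows:
assert customers(60, b, a, 0.05) > churn_limit
```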
{"url":"http://chaotic-flow.com/saas-metrics-viral-growth-trumps-saas-churn/","timestamp":"2014-04-19T01:47:37Z","content_type":null,"content_length":"44509","record_id":"<urn:uuid:e03a7310-f82e-43e1-b470-01bc8f87cb4c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
search results Results 1 - 1 of 1 1. CJM 2006 (vol 58 pp. 1144) Partial $*$-Automorphisms, Normalizers, and Submodules in Monotone Complete $C^*$-Algebras For monotone complete $C^*$-algebras $A\subset B$ with $A$ contained in $B$ as a monotone closed $C^*$-subalgebra, the relation $X = AsA$ gives a bijection between the set of all monotone closed linear subspaces $X$ of $B$ such that $AX + XA \subset X$ and $XX^* + X^*X \subset A$ and a set of certain partial isometries $s$ in the "normalizer" of $A$ in $B$, and similarly for the map $s \mapsto \Ad s$ between the latter set and a set of certain "partial $*$-automorphisms" of $A$. We introduce natural inverse semigroup structures in the set of such $X$'s and the set of partial $*$-automorphisms of $A$, modulo a certain relation, so that the composition of these maps induces an inverse semigroup homomorphism between them. For a large enough $B$ the homomorphism becomes surjective and all the partial $*$-automorphisms of $A$ are realized via partial isometries in $B$. In particular, the inverse semigroup associated with a type ${\rm II}_1$ von Neumann factor, modulo the outer automorphism group, can be viewed as the fundamental group of the factor. We also consider the $C^*$-algebra version of these results. Categories: 46L05, 46L08, 46L40, 20M18
{"url":"http://cms.math.ca/cjm/msc/20M18?fromjnl=cjm&jnl=CJM","timestamp":"2014-04-20T03:13:46Z","content_type":null,"content_length":"26696","record_id":"<urn:uuid:933194d8-c94b-41c4-9ef8-dea3ec00a178>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
Students fill in missing differences for illustrated basic subtraction facts number sentences. Students fill in missing differences for illustrated basic subtraction facts number sentences. Students fill in missing differences for illustrated basic subtraction facts number sentences. Students fill in missing differences for illustrated basic subtraction facts number sentences. • Four pumpkins to a page, with numbers 1-12. Includes minus, add, and equal signs. Room to draw pumpkin faces. Solve ten story problems. Add and subtract numbers in the thousands and estimate the answer. Includes answer key. Solve six each difficult addition and subtraction problems, answer sheet included. Solve six each medium difficulty addition and subtraction problems, answer sheet included. Solve four subtraction problems using base 10 blocks. Includes answer key. Students use the rule to fill in the blank boxes. Common Core 3.OA.9 • Fill in the appropriate operators (addition or subtraction). Fill in the appropriate operators, addition or subtraction symbols. Fill in the appropriate operator symbols, addition or subtraction. Addition and subtraction practice and assessment. Includes: practice worksheets, in and out boxes, and matching game. This penguin theme unit is a great way to practice counting and adding to 20. This 21 page unit includes: tracing numbers, cut and paste, finding patterns, ten frame activity, in and out boxes and much more! CC: Math: K.CC.B.4
{"url":"http://www.abcteach.com/directory/subjects-math-subtraction-651-5-4","timestamp":"2014-04-18T10:41:07Z","content_type":null,"content_length":"81892","record_id":"<urn:uuid:a04fd53d-ea9b-4a8b-ac77-9e2ba23f8831>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: i keep asking for help but no one is answering...can someone please help me with just one question
do you need me to post my question again or do you see it?
hold on lemme look
okay i will post it now
A political strategy team collected some preliminary data from voters in a certain district. Using the data, they created a confidence interval for the proportion of voters in the district for their candidate, (39.30% to 53.15%). If they collected data from 344 voters altogether, compute the level of confidence they used when creating their interval. Show work.
can you see it now
what math are you supposed to use here
im in intro to statistics
is this a histogram?
im not sure that's what I was confused about....that was the question nothing else with it
sorry... i don't know
please do you know anyone else who can help me its due soon and I really need help with it
can you look at another problem i have?
well let's look up some unknown words... 
confidence intervals : http://www.stat.yale.edu/Courses/1997-98/101/confint.htm i don't know wtf that is but maybe you'll know then... hmmm... Best Response You've already chosen the best response. Write a meaningful paragraph that includes the following six terms: hypothesis, P-value, reject H0, Type I error, statistical significance, practical significance. A “meaningful paragraph” is a coherent piece of writing in an appropriate context that uses all of the listed words. The paragraph should show that you understand the meaning of the terms and their relationship to one another. A sequence of sentences that just define the terms is not a meaningful paragraph. When choosing a context, think carefully about the terms you need to use. Choosing a good context will make writing a meaningful paragraph easier. Don’t blow this off, you’ll be asked to do this again sometime soon. Best Response You've already chosen the best response. it's statistics...i tried to ask someone else but no one answered me thank god you did thanks for that Best Response You've already chosen the best response. if you dont know it's fine at least you said something... Best Response You've already chosen the best response. sorry i have no idea :( Best Response You've already chosen the best response. thats okay do you know someone else who knows statistics and could help me out Best Response You've already chosen the best response. Best Response You've already chosen the best response. i know hypothesis is an educated guess (if, then) lol Best Response You've already chosen the best response. okay thanks Best Response You've already chosen the best response. wow everyone is to busy arguing this is insane Best Response You've already chosen the best response. how can message someone? can i even do that? Best Response You've already chosen the best response. 
okay well ima ask again in the question box and see what happens...but thanks alot...at least you was kind enough to help me i really appreicate it Best Response You've already chosen the best response. thanks :) sorry i don't know Best Response You've already chosen the best response. thats okay Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
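For what it's worth, the unanswered question above can be worked backwards. Assuming a standard one-proportion z-interval of the form p-hat ± z·sqrt(p-hat(1 − p-hat)/n) (the thread never states which interval was used, so that form is an assumption): the midpoint of the interval recovers p-hat, half its width is the margin of error, and dividing that by the standard error recovers the critical value z, from which the confidence level follows. A minimal sketch:

```python
import math

# Assumption: a one-proportion z-interval, p_hat +/- z * sqrt(p_hat*(1-p_hat)/n).
lo, hi, n = 0.3930, 0.5315, 344

p_hat = (lo + hi) / 2                       # midpoint of the interval
moe = (hi - lo) / 2                         # margin of error (half the width)
se = math.sqrt(p_hat * (1 - p_hat) / n)     # standard error of p_hat

z = moe / se                                # critical value that was used
# Confidence level = area between -z and +z under the standard normal,
# using Phi(z) = (1 + erf(z / sqrt(2))) / 2:
phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
confidence = 2 * phi - 1

print(round(z, 3), round(confidence, 3))
```

With these numbers, z comes out near 2.576, the standard critical value for a 99% confidence level.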
{"url":"http://openstudy.com/updates/4ee155f9e4b0a50f5c55c65d","timestamp":"2014-04-17T01:14:44Z","content_type":null,"content_length":"104811","record_id":"<urn:uuid:dcb2e7a2-6654-4776-867c-fc0473830a9d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
Applied Math: Beam Deflection using Separation of Variables

May 11th 2011, 06:17 AM #1
May 2011

A beam's dynamic deflection is governed by the following PDE:

$EI\frac{\partial^4 w}{\partial x^4} + \rho A\frac{\partial^2 w}{\partial t^2} = f(x,t)$

where E = the modulus of elasticity, I = the moment of inertia, $\rho$ = mass density, A = cross-section area, w(x,t) = deflection, 0 ≤ x ≤ L = axial coordinate, L = length, t = time. The load is given as the following function:

$f(x,t) = \sin(\pi x/L)\sin(\omega t)$

Use separation of variables and the normal-mode approach to find w(x,t) if the boundary conditions are w = 0 at x = 0 and x = L, and the initial conditions read w = 0, ∂w/∂t = 0 at t = 0.

Last edited by sublim25; May 11th 2011 at 09:56 AM.

May 29th 2011, 10:49 PM #2
Senior Member
May 2010
Los Angeles, California

Write $w(x,t)=F(x)T(t)$ and substitute into the PDE.
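A sketch of where the hint leads (my own working, not from the thread): because the load already matches the first mode shape, the normal-mode ansatz w(x,t) = sin(πx/L) q(t) satisfies both boundary conditions and reduces the PDE to the modal ODE ρA q'' + EI(π/L)⁴ q = sin(ωt), i.e. q'' + ωₙ² q = sin(ωt)/(ρA) with ωₙ² = EI(π/L)⁴/(ρA). For ω ≠ ωₙ and the given rest initial conditions, the solution is q(t) = [sin(ωt) − (ω/ωₙ) sin(ωₙt)] / [ρA(ωₙ² − ω²)]. The snippet below checks this closed form against a direct RK4 integration of the modal ODE; the parameter values are arbitrary normalized choices, not from the problem:

```python
import math

# Assumed normalized values: EI = rhoA = L = 1, forcing frequency omega = 5.
EI, rhoA, L, omega = 1.0, 1.0, 1.0, 5.0
wn2 = EI * (math.pi / L) ** 4 / rhoA      # first-mode natural frequency squared
wn = math.sqrt(wn2)

def q_exact(t):
    # Zero-initial-condition solution of q'' + wn^2 q = sin(omega t)/rhoA (omega != wn)
    return (math.sin(omega * t) - (omega / wn) * math.sin(wn * t)) / (rhoA * (wn2 - omega ** 2))

def deriv(t, y):
    q, qd = y
    return (qd, math.sin(omega * t) / rhoA - wn2 * q)

# Classical RK4 integration from the rest initial conditions q(0) = q'(0) = 0.
t, y, dt = 0.0, (0.0, 0.0), 1e-4
for _ in range(10000):                    # integrate out to t = 1
    k1 = deriv(t, y)
    k2 = deriv(t + dt/2, (y[0] + dt/2 * k1[0], y[1] + dt/2 * k1[1]))
    k3 = deriv(t + dt/2, (y[0] + dt/2 * k2[0], y[1] + dt/2 * k2[1]))
    k4 = deriv(t + dt, (y[0] + dt * k3[0], y[1] + dt * k3[1]))
    y = (y[0] + dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
         y[1] + dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    t += dt

print(abs(y[0] - q_exact(1.0)))           # should be tiny
```

If the two values agree to high precision, then w(x,t) = sin(πx/L) q(t) with this q solves the problem under the stated assumptions; the resonant case ω = ωₙ needs a separate particular solution.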
{"url":"http://mathhelpforum.com/advanced-applied-math/180212-applied-math-beam-deflection-using-separation-variables.html","timestamp":"2014-04-17T01:40:10Z","content_type":null,"content_length":"33660","record_id":"<urn:uuid:2399724d-df97-484b-8e10-d56d5227e82f>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Read a Math Textbook

MAX Center
Kagin Commons, First Floor
M-F, 9 a.m.-4:30 p.m.; S-Th, 7 p.m.-10 p.m. (Day & Evening)

NOTE: This passage was written with math textbooks in mind. It is equally appropriate for a chemistry or physics textbook, so please substitute science for math, depending on your class.

Reading a mathematics text is very different from reading ordinary English. Trying to read math the same way as a novel or a history text is certain to cause you trouble. Math text typically alternates passages of explanation in English with pieces of mathematics. Even its English, however, is of a special and stylized kind.

When reading any explanatory material in a math text, the main principle is simple: read every word, one word at a time. There is no "catching the drift" and then filling in from later paragraphs. The author will say his or her piece only once. If the author does seem to repeat, it is probably to make a slightly different point. Every word counts, even a two-letter word! Forget about speed-reading; one small paragraph may take ten minutes!

Understand each part of each sentence as you go along. Keep in mind that familiar words may have mathematical meanings that you need to learn. Of course, you need to know the mathematical meanings of unfamiliar mathematical words, too. If you cannot understand a passage, then stop to think and go over the mathematical definitions of the words you are reading until you figure out the meaning. Often it is necessary to start reading from the beginning of the passage several times. Take your time. There is simply no way to rush the process. It is a slow and delicate business... and "slow is fast!"

With actual mathematics (equations and such), the trick is simply to see how each line follows from the line before. If a step is especially obscure, the author may provide some written explanation.
But be sure that you understand where each line comes from before you go on to the next line. If you skip even one step, the rest of the steps will make little or no sense to you. It is best to read mathematics with pencil and paper at hand, and to reproduce it yourself as you go along. But do not merely write down what you see in the book. Instead, try to work out each line for yourself, step by step, with the author.

The most important mathematical passages are the problems that the author has worked out in detail. Successful students rely on these very heavily. One widely used series of review texts in mathematics consists entirely of solved problems.

How to work a solved problem in the textbook: Start with a solved problem, working it through to the end, one step at a time. Then close the book, and try to work it through again on your own. Do the whole problem as many times as necessary, until you can reproduce the whole solution with the book closed. But try not to memorize the solution. Instead, keep track of the operations: "what to do" to move from each line to the next. Your version of the solution will probably have more lines than the author's. That's good. It may sometimes take you two or three steps to accomplish what the author does in one.

Working solved problems does take a lot of time. It is not unusual to spend an hour or two on a single page. Try to be patient. After you can work through the solved problems on your own, the homework exercises will give you little trouble, for they are usually very similar. Exam questions, too, will mostly follow the same pattern. In short: time spent on problems the author has solved for you will pay off in high grades.

Some students are discouraged to see how easily the author sails through a tough problem. "I never would have thought to do that." "How did s/he know that adding x^2 to both sides would make it come out?" "If we're supposed to do this on an exam, I'm finished." These are common reactions.
But what you see in the book is only the author's final product, carefully cleaned up for publication. He or she produced wastebaskets full of scratch paper to find that clean solution. And teachers do the same when preparing for a lecture. We math people make math look easy because we work hard at it when you aren't looking. Remember that you will not be expected to invent a new problem-solving technique on the exam. Your task is to apply the techniques already shown to you in class and in the book.

Some math texts use pictorial illustrations, and some do not. Like the words, a picture needs slow and careful study. A quick glance will not do, as it might in a biology text. Every line and every symbol is there for a specific reason, and you should take the time to understand the picture thoroughly, in detail. This is especially true of graphs and charts, which often contain a great deal of information in a small space.

As you can see, you do not merely read a math text – you work through it! The information has to be dug out, not just skimmed from the surface. It is a slow business, and that is the only good way to understand what the math text is trying to tell you. SO BE PATIENT. REMEMBER THAT "SLOW IS FAST," AND ENJOY MATH READING!
{"url":"http://www.macalester.edu/max/math/howtoread/","timestamp":"2014-04-21T15:47:36Z","content_type":null,"content_length":"20920","record_id":"<urn:uuid:2a1dba7a-93ef-4a12-ab52-a1b675c32802>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00004-ip-10-147-4-33.ec2.internal.warc.gz"}