content
stringlengths
86
994k
meta
stringlengths
288
619
Determining if a point occurs within a sphere Does anyone know of an algorithm or code sample that can determine if a point in 3d (x,y,z axis) space occurs within a 3d sphere who’s center point and radius is known? bool Sphere::inside(Vector3 point) { if(distance_from(this.center, point) <= this.radius) { return true; return false; Computing the distance is expensive as it requires a square root. However, both sides of the inequality can be squared so you lose the square root and end up with basically this: (x - x0)² + (y - y0)² + (z - z0)² <= r² Computing the distance is expensive as it requires a square root. However, both sides of the inequality can be squared so you lose the square root and end up with basically this: (x - x0)² + (y - y0)² + (z - z0)² <= r² Ahh I see. you are essentially applying a pythagorean theorem approach twice. once applied to a triangle who’s two sides are along the x and y planes, and once using either the x or y plane and z Very nice. Thanx! Yep, that’s right. That’s the standard way to measure distance between two points in 3D space. Since you seem to be new to 3D math, you should check out vectors and matrices. Yep, that’s right. That’s the standard way to measure distance between two points in 3D space. Since you seem to be new to 3D math, you should check out vectors and matrices. I am very new to 3d math in concept. I am not new to programming though. Most of the math I’ve done in code has involved business logic. I’ve taken an interest in 3d graphics only recently. I’ve been considering attempting to learn how to code with a 3d engine, but before doing that, I thought it’d be best to learn the basics first. That link to vectors and matrices is pretty much where I am at on the learning curve thus far.
{"url":"http://devmaster.net/posts/18105/determining-if-a-point-occurs-within-a-sphere","timestamp":"2014-04-21T04:50:13Z","content_type":null,"content_length":"19320","record_id":"<urn:uuid:fefdf184-7404-4a96-ac69-67b65f80c7dd>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] odeint Ryan Gutenkunst rng7 at cornell.edu Tue Oct 4 11:00:49 CDT 2005 LOPEZ GARCIA DE LOMANA, ADRIAN wrote: > Hi people, > I'm using odeint from integrate to solve a system of ODEs. Although the output seems correct > a strange error pop up at the terminal: > [alopez at thymus scipy]$ python single_global.integ.py > Traceback (most recent call last): > File "single_global.integ.py", line 23, in func > xdot[7] = + (v_NACT * x[5]**m_NACT) / (x[5]**m_NACT + k_NACT**m_NACT) - k_deg_h * x[7] > ValueError: negative number cannot be raised to a fractional power > odepack.error: Error occured while calling the Python function named func > And the output file, looks OK, there is no single negative number. > Can anyone reproduce the error? Can anyone tell me what's going on? I get the same error running your code, although it stops the integration in my case. I ran into the same error running some of our own biochemical simulations. My hunch (someone correct me if this is nonsense) is that this error comes from the extrapolating steps the integrator takes. odeint is a variable step size integrator, so the routine tries a step, checks tolerances, adjusts step as necessary, and so on. If the value of your variable is getting close to zero, the extrapolated step may take it to a negative value. When odeint evaluates xdot at that point, you see those errors. The solution we came up with was to integrate in terms of the logarithm of the variable values. This ensures that they are always positive, and for our systems the speed penalty is pretty small. It's pretty easy to do since: d log(x)/dt = dx/dt * 1/x So you might want to try something like: def func_log(log_x, t, *args): x = exp(log_x) dx_dt = func(x, t, *args) return dx_dt/x Of course, your initial condition must be log(x_IC). And a zero IC will kill you, but maybe picking something like 1e-16 is reasonable for your > Thanks in advance, > Adrián. Ryan Gutenkunst | Cornell LASSP | "It is not the mountain | we conquer but ourselves." Clark 535 / (607)227-7914 | -- Sir Edmund Hillary AIM: JepettoRNG | More information about the SciPy-user mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2005-October/005368.html","timestamp":"2014-04-21T04:45:08Z","content_type":null,"content_length":"4896","record_id":"<urn:uuid:c3593b4d-15d0-42e4-bbe1-81a4fa88a971>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
LSA Grad Course Catalog Courses in LSA Mathematics MATH 404. Intermediate Differential Equations and Dynamics MATH 216, 256, 286 or 316. (3). (BS). May not be repeated for credit. Linear systems, qualitative theory of ordinary differential equations for planar and higher-dimensional systems, chaos, stability, non-linear oscillations, periodic orbits, Floquet theory, MATH 412. Introduction to Modern Algebra MATH 215, 255 or 285; and 217; only 1 credit after MATH 312. (3). (BS). May not be repeated for credit. No credit granted to those who have completed or are enrolled in MATH 493. One credit granted to those who have completed MATH 312. The initial topics include ones common to every branch of mathematics: sets, functions (mappings), relations, and the common number systems (integers, rational, real and complex numbers). These are then applied to the study of particular types of mathematical structures: groups, rings, and fields). MATH 416. Theory of Algorithms [MATH 312, 412 or EECS 280] and MATH 465. (3). (BS). May not be repeated for credit. Many common problems from mathematics and computer science may be solved by applying one or more algorithms, well-defined procedures that accept input data specifying a particular instance of the problem and produce a solution. Students in this course typically have encountered some of these problems and their algorithmic solutions in a programming course. The goal here is to develop the mathematical tools necessary to analyze such algorithms with respect to their efficiency (running time) and correctness. Each term offers varying degrees of emphasis on mathematical proofs and computer implementation of these ideas. WN 2014 | WN 2012 MATH 417. Matrix Algebra I Three courses beyond MATH 110. (3). (BS). May not be repeated for credit. No credit granted to those who have completed or are enrolled in MATH 214, 217, 419, or 420. MATH 417 and 419 not be used as electives in the Statistics concentration. F, W, Sp, Su. Matrix operations, vector spaces, Gaussian and Gauss-Jordan algorithms for linear equations, subspaces of vector spaces, linear transformations, determinants, orthogonality, characteristic polynomials, eigenvalue problems and similarity theory. Applications include linear networks, least squares method, discrete Markov processes, linear programming. MATH 419. Linear Spaces and Matrix Theory Four courses beyond MATH 110. (3). (BS). May not be repeated for credit. 2 credits granted to those who have completed MATH 214, 217, or 417. No credit for those who have completed or are enrolled in 420. MATH 417 and 419 not be used as electives in the Statistics concentration. F, W, Su. Finite dimensional linear spaces and matrix representations of linear transformations. Bases, subspaces, determinants, eigenvectors and canonical forms. Structure of solutions of systems of linear equations. Applications to differential and difference equations. Provides more depth and content than MATH 417. MATH 420 is the proper election for students contemplating research in mathematics. MATH 420. Advanced Linear Algebra Linear algebra course (MATH 214, 217, 417, or 419) and one of MATH 296, 412, or 451. (3). (BS). May not be repeated for credit. This is an introduction to the formal theory of abstract vector spaces and linear transformations. The emphasis is on concepts and proofs with some calculations to illustrate the theory. Students should have some mathematical maturity and in particular should expect to work with and be tested on formal proofs. MATH 422 / BE 440. Risk Management and Insurance MATH 115, junior standing, and permission of instructor. (3). (BS). May not be repeated for credit. Exploration of insurance as a means of replacing uncertainty with certainty; use of mathematical models to explain theory of interest, risk theory, credibility theory and ruin theory; how mathematics underlies important individual and societal decisions. WN 2013 | WN 2012 MATH 423. Mathematics of Finance MATH 217 and 425; EECS 183 or equivalent. (3). (BS). May not be repeated for credit. An introduction to mathematical models used in finance and economics with particular emphasis on models for pricing derivative instruments such as options and futures. Topics include risk and return theory, portfolio theory, capital asset pricing model, random walk model, stochastic processes, Black-Scholes Analysis, numerical methods and interest rate models. MATH 424. Compound Interest and Life Insurance MATH 215, 255, or 285 or permission of instructor. (3). (BS). May not be repeated for credit. MATH 425 / STATS 425. Introduction to Probability MATH 215. (3). (BS). May not be repeated for credit. F, W, Sp, Su. MATH 427. Retirement Plans and Other Employee Benefit Plans Junior standing. (3). (BS). May not be repeated for credit. The development of employee benefit plans, both public and private. Particular emphasis is laid on modern pension plans and their relationships to current tax laws and regulations, benefits under the federal social security system, and group insurance. MATH 431. Topics in Geometry for Teachers One of MATH 215, 255, or 285 (completed with a minimum grade of C- or better), and MATH 217 (completed with a minimum grade of C- or better). (Prerequisites enforced at registration.) (3). (BS). May not be repeated for credit. F. This course is an axiomatic treatment of Euclidean plane geometry and serves as an introduction to the process of doing mathematics rigorously. Intended for prospective geometry teachers, it will place the development of geometry within its historical context with an additional emphasis on expositing proofs. MATH 433. Introduction to Differential Geometry MATH 215 (or 255 or 285), and 217. (3). (BS). May not be repeated for credit. F. WN 2014 | WN 2013 MATH 450. Advanced Mathematics for Engineers I Permission required after credit earned in MATH 354 or 454. Consent of department required. (Prerequisites enforced at registration.) MATH 215, 255, or 285. (4). (BS). May not be repeated for credit. No credit granted to those who have completed or are enrolled in MATH 354 or 454. F, W, Su. Review of curves and surfaces in implicit, parametric and explicit forms; differentiability and affine approx., implicit and inverse function theorems; chain rule for 3-space; multiple integrals, scalar and vector fields; line and surface integrals; computations of planetary motion; work, circulation and flux over surfaces; Gauss' and Stokes' Theorems; heat equation. MATH 451. Advanced Calculus I Previous exposure to abstract mathematics, e.g. MATH 217 and 412. (3). (BS). May not be repeated for credit. No credit granted to those who have completed or are enrolled in MATH 351. F, W, Sp. MATH 452. Advanced Calculus II MATH 217, 419, or 420; and MATH 451. (3). (BS). May not be repeated for credit. W. Topics include: (1) partial derivatives and differentiability; (2) gradients, directional derivatives, and the chain rule; (3) implicit function theorem; (4) surfaces, tangent planes; (5) max-min theory; (6) multiple integration, change of variable, etc.; (7) Greens' and Stokes' theorems, differential forms, exterior derivatives. MATH 454. Boundary Value Problems for Partial Differential Equations Permission required after credit earned in MATH 354 or 450. (Prerequisites enforced at registration.) MATH 216, 256, 286 or 316. (3). (BS). May not be repeated for credit. Students with credit for MATH 354 can elect MATH 454 for one credit. No credit granted to those who have completed or are enrolled in MATH 450. F, W, Sp. This course is devoted to the use of Fours Series and Transforms in the solution of boundary-value problems for 2nd order linear partial differential equations. We study the heat and wave equations in one and higher dimension. We introduce the spherical and cylindrical Bessel functions, Legendre polynomials and analysis of data smoothing and filtering. MATH 463 / BIOINF 463 / BIOPHYS 463. Mathematical Modeling in Biology MATH 214, 217, 417, or 419; and MATH 216, 256, 286, or 316. (3). (BS). May not be repeated for credit. An introduction to the use of continuous and discrete differential equations in the biological sciences. Modeling in biology, physiology and medicine. MATH 465. Introduction to Combinatorics Linear Algebra (one of MATH 214, 217, 256, 286, 296, 417, or 419) or permission of instructor. (3). (BS). May not be repeated for credit. No credit granted to those who have completed or are enrolled in MATH 565 or 566. Rackham credit requires additional work. An introduction to combinatorics, covering basic counting techniques (inclusion-exclusion, permutations and combinations, generating functions) and fundamentals of graph theory (paths and cycles, trees, graph coloring). Additional topics may include partially ordered sets, recurrence relations, partitions, matching theory, and combinatorial algorithms. MATH 466 / EEB 466. Mathematical Ecology MATH 217, 417, or 419; MATH 256, 286, or 316; and MATH 450 or 451. (3). May not be repeated for credit. Rackham credit requires additional work. This course gives an overview of mathematical approaches to questions in the science of ecology. Topics include: formulation of deterministic and stochastic population models; dynamics of single-species populations; and dynamics of interacting populations (perdition, competition, and mutualism), structured populations, and epidemiology. Emphasis is placed on model formulation and techniques of analysis. FA 2013 MATH 471. Introduction to Numerical Methods MATH 216, 256, 286, or 316; and 214, 217, 417, or 419; and a working knowledge of one high-level computer language. No credit granted to those who have completed or are enrolled in MATH 371 or 472. (3). (BS). May not be repeated for credit. No credit granted to those who have completed or are enrolled in MATH 371 or 472. F, W, Su. MATH 472. Numerical Methods with Financial Applications Differential Equations (MATH 216, 256, 286, or 316); Linear Algebra (MATH 214, 217, 417, or 419); working knowledge of a high-level computer language. Recommended: MATH 425. (3). (BS). May not be repeated for credit. No credit granted to those who have completed or are enrolled in MATH 471 or 371. Theoretical study and practical implementation of numerical methods for scientific problems, with emphasis on financial applications. Topics: Newton's method for nonlinear equations; Systems of linear equations; Numerical integration; Interpolation and polynomial approximation; Ordinary differential equations; Partial differential equations, in particular the Black-Scholes equation; Monte Carlo simulation; Numerical modeling. MATH 475. Elementary Number Theory At least three terms of college Mathematics are recommended. (3). (BS). May not be repeated for credit. W. MATH 476. Computational Laboratory in Number Theory Prior or concurrent enrollment in MATH 475 or 575. (1). (BS). May not be repeated for credit. W. MATH 481. Introduction to Mathematical Logic MATH 412 or 451 or equivalent experience with abstract mathematics. (3). (BS). May not be repeated for credit. F. MATH 485 / EDUC 485. Mathematics for Elementary School Teachers and Supervisors One year of high school algebra or permission of the instructor. (3). May not be repeated for credit. No credit granted to those who have completed or are enrolled in MATH 385. May not be included in a concentration plan in Mathematics. F, Su. The history, development and logical foundations of the real number system and of numeration systems including scales of notation, cardinal numbers and the cardinal concept, the logical structure of arithmetic (field axioms) and their relations to the algorithms of elementary school instruction. MATH 486. Concepts Basic to Secondary Mathematics One of MATH 215, 255, or 285 (completed with a minimum grade of C- or better), and MATH 217 (completed with a minimum grade of C- or better). (Prerequisites enforced at registration.) (3). (BS). May not be repeated for credit. W. This course examines the principles of analysis and algebra underlying theorems, especially the rationals, reals, and complex numbers. It also considers concerning functions, especially polynomials, exponential functions, and logarithmic functions. The mathematical underpinnings of these ideas serve as an important intellectual resource for students pursuing teacher certification. MATH 489. Mathematics for Elementary and Middle School Teachers MATH 385. (Prerequisites enforced at registration.) (3). May not be repeated for credit. May not be used in any Graduate program in Mathematics. The course provides an overview of the mathematics underlying the elementary and middle school curriculum. It is required of all students intending to earn an elementary teaching certificate. Concepts are heavily emphasized with some attention given to calculation and proof. MATH 490. Introduction to Topology MATH 451 or equivalent experience with abstract mathematics. (3). (BS). May not be repeated for credit. W. Knots, orientable and non-orientable surfaces, Euler characteristic, open sets, connectedness, compactness, metric spaces. The topics covered are fairly constant but the presentation and emphasis will vary significantly with the instructor. MATH 493. Honors Algebra I MATH 296, 412, or 451. (3). (BS). May not be repeated for credit. Description and in-depth study of the basic algebraic structures: groups, rings, fields including: set theory, relations, quotient groups, permutation groups, Sylow's Theorem, quotient rings, field of fractions, extension fields, roots of polynomials, straight-edge and compass solutions, and other topics. MATH 494. Honors Algebra II MATH 512. (3). (BS). May not be repeated for credit. Vector spaces, linear transformations and matrices, equivalence or matrices and forms, canonical forms, and applications to linear differential equations. WN 2014 | WN 2013 MATH 497. Topics in Elementary Mathematics MATH 489 or permission of instructor. (3). (BS). May be repeated for a maximum of 6 credits. F. Selected topics in geometry, algebra, computer programming, logic and combinatorics for prospective and in-service elementary, middle, or junior-high school teachers. Content will vary from term to MATH 498. Topics in Modern Mathematics Senior Mathematics concentrators and Master Degree students in Mathematical disciplines. (3). (BS). May not be repeated for credit. MATH 499. Independent Reading Consent of instructor required. Graduate standing in a field other than Mathematics and permission of instructor. (1 - 4). (INDEPENDENT). May not be repeated for credit. MATH 501. Applied & Interdisciplinary Mathematics Student Seminar At least two 300 or above level math courses, and Graduate standing; Qualified undergraduates with permission of instructor only. (1). May be repeated for a maximum of 6 credits. Offered mandatory credit/no credit. MATH 501 is an introductory and overview seminar course in the methods and applications of modern mathematics. The seminar has two key components: (1) participation in the Applied and Interdisciplinary Math Research Seminar; and (2) preparatory and post-seminar discussions based on these presentations. Topics vary by term. MATH 506 / IOE 506. Stochastic Analysis for Finance Graduate students or permission of instructor. (3). (BS). May not be repeated for credit. The aim of this course is to teach the probabilistic techniques and concepts from the theory of stochastic processes required to understand the widely used financial models. In particular concepts such as martingales, stochastic integration/calculus, which are essential in computing the prices of derivative contracts, will be discussed. Pricing in complete/incomplete markets (in discrete/ continuous time) will be the focus of this course as well as some exposition of the mathematical tools that will be used such as Brownian motion, Levy processes and Markov processes. FA 2013 | FA 2012 MATH 520. Life Contingencies I MATH 424 and 425 with minimum grade of C-, plus declared Actuarial/Financial Mathematics Concentration. (Prerequisites enforced at registration.) (3). (BS). May not be repeated for credit. F. Quantifying the financial impact of uncertain events is the central challenge of actuarial mathematics. The goal of this course is to teach the basic actuarial theory of mathematical models for financial uncertainties, mainly the time of death. The main topics are the development of (1) probability distributions for the future lifetime random variable; (2) probabilistic methods for financial payments on death or survival; and (3) mathematical models of actuarial reserving. MATH 521. Life Contingencies II MATH 520 with a grade of C- or higher. (Prerequisites enforced at registration.) (3). (BS). May not be repeated for credit. W. This course extends the single decrement and single life ideas of MATH 520 to multi-decrement and multiple-life applications directly related to life insurance. The sequence 520-521 covers the Part 4A examination of the Casualty Actuarial Society and covers the syllabus of the Course 150 examination of the Society of Actuaries. Concepts and Calculation are emphasized over proof. MATH 523. Risk Theory MATH 425. (3). (BS). May not be repeated for credit. Classical approaches to risk including the insurance principle and the risk-reward trade off. Review of probability. Compound Poisson process. Modeling of individual losses that arise in a loss aggregation process, modeling the frequency of losses, and credibility theory. MATH 525 / STATS 525. Probability Theory MATH 451 (strongly recommended). MATH 425/STATS 425 would be helpful. (3). (BS). May not be repeated for credit. MATH 526 / STATS 526. Discrete State Stochastic Processes MATH 525 or STATS 525 or EECS 501. (3). (BS). May not be repeated for credit. MATH 528. Topics in Casualty Insurance MATH 217, 417, or 419. (3). (BS). May not be repeated for credit. An introduction to property and casualty insurance including policy forms, underwriting, product design and modification, rate making and claim settlement. WN 2014 | WN 2012 MATH 537. Introduction to Differentiable Manifolds MATH 420, and 590 or 591. (3). (BS). May not be repeated for credit. This course is intended for students with a strong background in topology, linear algebra, and multivariable advanced calculus equivalent to the courses 420 and 590. Its goal is to introduce the basic concepts and results of differential topology and differential geometry. Content covered includes Manifolds, vector fields and flows, differential forms, Stokes' theorem, Lie group basics, Riemannian metrics, Levi-Civita connection, geodesics. WN 2014 MATH 542 / IOE 552. Financial Engineering Seminar I MATH,IOE 453 or MATH 423. Business School students: FIN 580 or 618 or BA 855. (3). (BS). May not be repeated for credit. Theory and applications of financial engineering. Designing, structuring and pricing financial engineering products (including options, futures, swaps and other derivative securities) and their applications to financial and investment risk management. Mathematical methodology that forms the basis of financial engineering, applied stochastic processes and numerical methods in particular. MATH 543 / IOE 553. Financial Engineering Seminar II IOE 552 or MATH 542. (Prerequisites enforced at registration.) (3). (BS). May not be repeated for credit. Advanced issues in financial engineering: stochastic interest rate modeling and fixed income markets, derivatives trading and arbitrage, international finance, risk management methodologies include in Value-at-Risk and credit risk. Multivariate stochastic calculus methodology in finance: multivariate Ito's lemma, Ito's stochastic integrals, the Feynman-Kac theorem and Girsanov's theorem. MATH 547 / BIOINF 547 / STATS 547. Probabilistic Modeling in Bioinformatics MATH,Flexible, due to diverse backgrounds of intended audience. Basic probability (level of MATH/STATS 425), or molecular biology (level of BIOLOGY 427), or biochemistry (level of CHEM/BIOLCHEM 451), or basic programming skills desireable or permission. (3). (BS). May not be repeated for credit. Probabilistic models of proteins and nucleic acids. Analysis of DNA/RNA and protein sequence data. Algorithms for sequence alignment, statsistical analysis of similarity scores, hidden Markov models. Neural networks, training, gene finding, protein family profiles, multiple sequence alignment, sequence comparison and structure prediction. Analysis of expression array data. WN 2014 MATH 550 / CMPLXSYS 510. Introduction to Adaptive Systems MATH 215, 255, or 285; MATH 217; and MATH 425. (3). (BS). May not be repeated for credit. MATH 555. Introduction to Functions of a Complex Variable with Applications MATH 451 or equivalent experience with abstract mathematics. (3). (BS). May not be repeated for credit. Intended primarily for students of engineering and of other cognate subjects. Doctoral students in mathematics elect Mathematics 596. Complex numbers, continuity, derivative, conformal representation, integration, Cauchy theorems, power series, singularities, and applications to engineering and mathematical physics. MATH 556. Applied Functional Analysis MATH 217, 419, or 420; MATH 451; and MATH 555. (3). (BS). May not be repeated for credit. F. This is an introduction to methods of applied functional analysis. Students are expected to master both the proofs and applications of major results. The prerequisites include linear algebra, undergraduate analysis, advanced calculus and complex variables. This course is a core course for the Applied and Interdisciplinary Mathematics (AIM) graduate program. MATH 557. Applied Asymptotic Analysis MATH 217, 419, or 420; MATH 451; and MATH 555. (3). (BS). May not be repeated for credit. W. Topics include: asymptotic sequences and (divergent) series; asymptotic expansions of integrals and Laplace's method; methods of steepest descents and stationary phase; asymptotic evaluation of inverse Fourier and Laplace transforms; asymptotic solutions for linear (non-constant coefficient) differential equations; WBK expansions; singular perturbation theory; and boundary, initial, and internal layers. MATH 558. Ordinary Differential Equation MATH 451. (3). (BS). May not be repeated for credit. This course is an introduction to dynamical systems (differential equations and iterated maps). The aim is to survey a broad range of topics in the theory of dynamical systems with emphasis on techniques and results that are useful in applications, including chaotic dynamics. This is a core course for the Applied and Interdisciplinary Mathematics (AIM) graduate program. MATH 559. Selected Topics in Applied Mathematics MATH 451; and 217, 419, or 420. (3). (BS). May be repeated for a maximum of 6 credits. This course will focus on particular topics in emerging areas of applied mathematics for which the application field has been strongly influenced by mathematical ideas. It is intended for students with interests in mathematical, computational, and/or modeling aspects of interdisciplinary science, and the course will develop the intuitions of the field of application as well as the mathematical MATH 561 / IOE 510 / TO 518. Linear Programming I MATH 217, 417, or 419. (3). (BS). May not be repeated for credit. F, W, Sp. Formulation of problems from the private and public sectors using the mathematical model of linear programming. Development of the simplex algorithm; duality theory and economic interpretations. Postoptimality (sensitivity) analysis application and interpretations. Introduction to transportation and assignment problems; special purpose algorithms and advanced computational techniques. Students have opportunities to formulate and solve models developed from more complex case studies and to use various computer programs. MATH 562 / IOE 511. Continuous Optimization Methods MATH 217, 417, or 419. (3). (BS). May not be repeated for credit. Survey of continuous optimization problems. Unconstrained optimization problems: unidirectional search techniques; gradient, conjugate direction, quasi-Newton methods. Introduction to constrained optimization using techniques of unconstrained optimization through penalty transformations, augmented Langrangians, and others. Discussion of computer programs for various algorithms. WN 2014 | WN 2013 MATH 563 / BIOINF 563. Advanced Mathematical Methods for the Biological Sciences Graduate standing. (3). (BS). May not be repeated for credit. This course focuses on discovering the way in which spatial variation influences the motion, dispersion, and persistence of species. Specific topics may include i) Models of Cell Motion: Diffusion, Convection, and Chemotaxis; ii) Transport Processes in Biology; iii) Biological Pattern Formation; and iv) Delay-differential Equations and Age-structured Models of Infectious Diseases. WN 2014 | WN 2013 MATH 564. Topics in Mathematical Biology (MATH 217 or 417 or 419) and (450 or 454) and (216 or 316). (3). May be repeated for a maximum of 9 credits. This is an advanced course on topics in mathematical biology. Topic will vary according to the instructor. Possible topics include modeling infectious diseases, cancer modeling, mathematical neurosciences, and biological oscillators. A sample description is available for a course in biological oscillators. WN 2012 MATH 565. Combinatorics and Graph Theory MATH 412 or 451 or equivalent experience with abstract mathematics. (3). (BS). May not be repeated for credit. F. Topics in the graph theory part of the course include (if time permits) trees, k-connectivity, Eulerian and Hamiltonian graphs, tournaments, graph coloring, planar graphs, Euler's formula, the 5-Color theorem, Kuratowski's theorem, and the matrix-tree theorem. The second part of the course will deal with topics in the theory of finite partially ordered sets. This will include material about Mobius functions, lattices, simplicial complexes, and matroids. MATH 566. Combinatorial Theory MATH 412 or 451 or equivalent experience with abstract mathematics. (3). (BS). May not be repeated for credit. Permutations, combinations, generating functions, and recurrence relations. The existence and enumeration of finite discrete configurations. Systems of representatives, Ramsey's Theorem, and extremal problems. Construction of combinatorial designs. MATH 567. Introduction to Coding Theory One of MATH 217, 419, 420. (3). (BS). May not be repeated for credit. Introduction to coding theory focusing on the mathematical background for error-correcting codes. Topic include: Shannon's Theorem and channel capacity; review of tools from linear algebra and an introduction to abstract algebra and finite fields; basic examples of codes such and Hamming, BCH, cyclic, Melas, Reed-Muller, and Reed-Solomon; introduction to decoding starting with syndrome decoding and covering weight enumerator polynomials and the Mac-Williams Sloane identity WN 2014 | WN 2013 MATH 571. Numerical Linear Algebra MATH 214, 217, 417, 419, or 420; and one of MATH 450, 451, or 454. (3). (BS). May not be repeated for credit. Direct and iterative methods for solving systems of linear equations (Gaussian elimination, Cholesky decomposition, Jacobi and Gauss-Seidel iteration, SOR, introduction to multi-grid methods, steepest descent, conjugate gradients), introduction to discretization methods for elliptic partial differential equations, methods for computing eigenvalues and eigenvectors. MATH 572. Numerical Methods for Differential Equations MATH 214, 217, 417, 419, or 420; and one of MATH 450, 451, or 454. (3). (BS). May not be repeated for credit. W. Finite difference methods for ordinary differential equations and hyperbolic partial differential equations. Topics include one-step and multi-step methods, A-stability, root condition, Lax-Richtmyer Equivalence Theorem, CFL-condition, von Neumann stability condition, discrete energy methods, Kreiss matrix theorem. MATH 575. Introduction to Theory of Numbers I MATH 451 and 420 or permission of instructor. (1 - 3). (BS). May not be repeated for credit. Students with credit for MATH 475 can elect MATH 575 for 1 credit. F. Topics covered include divisibility and prime numbers, congruences, quadratic reciprocity, quadratic forms, arithmetic functions, and Diophantine equations. Other topics may be covered as time permits or by request. MATH 582. Introduction to Set Theory MATH 412 or 451 or equivalent experience with abstract mathematics. (3). (BS). May not be repeated for credit. W. The main topics covered are set algebra (union, intersection), relations and functions, orderings (partial, linear, well), the natural numbers, finite and denumerable sets, the Axiom of Choice, and ordinal and cardinal numbers. MATH 590. Introduction to Topology MATH 451. (3). (BS). May not be repeated for credit. F. MATH 591. General and Differential Topology MATH 451. (3). (BS). May not be repeated for credit. F. Topological and metric spaces, continuity, subspaces, products and quotient topology, compactness and connectedness, extension theorems, topological groups, topological and differentiable manifolds, tangent spaces, vector fields, submanifolds, inverse function theorem, immersions, submersions, partitions of unity, Sard's theorem, embedding theorems, transversality, classification of surfaces. MATH 592. Introduction to Algebraic Topology MATH 591. (3). (BS). May not be repeated for credit. W. Fundamental group, covering spaces, simplicial complexes, graphs and trees, applications to group theory, singular and simplicial homology, Eilenberg-Steenrod axioms, Brouwer's and Lefschetz' fixedpoint theorems, and other topics. MATH 593. Algebra I MATH 412, 420, and 451 or MATH 494. (3). (BS). May not be repeated for credit. F. Topics include basics about rings and modules, including Euclidean rings, PIDs, UFDs. The structure theory of modules over a PID will be an important topic, with applications to the classification of finite abelian groups and to Jordan and rational canonical forms of matrices. The course will also cover tensor, symmetric, and exterior algebras, and the classification of bilinear forms with some emphasis on the field case. MATH 594. Algebra II MATH 593. (3). (BS). May not be repeated for credit. W. Topics include group theory, permutation representations, simplicity of alternating groups for n>4, Sylow theorems, series in groups, solvable and nilpotent groups, Jordan-Holder Theorem for groups with operators, free groups and presentations, fields and field extensions, norm and trace, algebraic closure, Galois theory, and transcendence degree. MATH 596. Analysis I MATH 451. (3). (BS). May not be repeated for credit. Students with credit for MATH 555 may elect MATH 596 for two credits only. MATH 597. Analysis II MATH 451 and 420. (3). (BS). May not be repeated for credit. W. Topics include: Lebesgue measure on the real line; measurable functions and integration on R; differentiation theory, fundamental theorem of calculus; function spaces, Lp (R), C(K), Holder and Minkowski inequalities, duality; general measure spaces, product measures, Fubini's Theorem; RadonNikodym Theorem, conditional expectation, signed measures, introduction to Fourier transforms. MATH 602. Real Analysis II MATH 590 and 597. (3). (BS). May not be repeated for credit. MATH 604. Complex Analysis II MATH 590 and 596. (3). (BS). May be elected twice for credit. Selected topics such as potential theory, geometric function theory, analytic continuation, Riemann surfaces, uniformization and analytic varieties. WN 2013 | WN 2012 MATH 605. Several Complex Variables MATH 596 and 597. Graduate standing. (3). (BS). May not be repeated for credit. FA 2012 MATH 612. Algebra II MATH 593 and 594; and Graduate standing. (3). (BS). May not be repeated for credit. MATH 614. Commutative Algebra MATH 593 and Graduate standing. (3). (BS). May not be repeated for credit. MATH 615. Commutative Algebra II MATH 614 or permission of instructor. Graduate standing. (3). (BS). May not be repeated for credit. MATH 623 / IOE 623. Computational Finance MATH 316 and MATH 425 or 525. (3). (BS). May not be repeated for credit. This is a course in computational methods in finance and financial modeling. Particular emphasis will be put on interest rate models and interest rate derivatives. The specific topics include; Black-Scholes theory, no arbitrage and complete markets theory, term structure models: Hull and White models and Heath Jarrow Morton models, the stochastic differential equations and martingale approach: multinomial tree and Monte Carlo methods, the partial differential equations approach: finite difference methods. MATH 625 / STATS 625. Probability and Random Processes I MATH 597 and Graduate standing. (3). (BS). May not be repeated for credit. MATH 626 / STATS 626. Probability and Random Processes II MATH 625/STATS 625 and Graduate standing. (3). (BS). May not be repeated for credit. MATH 627 / BIOSTAT 680. Applications of Stochastic Processes I Graduate standing; BIOSTAT 601, 650, 602 and MATH 450. (3). (BS). May not be repeated for credit. MATH 631. Introduction to Algebraic Geometry MATH 594 or permission of instructor. Graduate standing. (3). (BS). May not be repeated for credit. MATH 632. Algebraic Geometry II MATH 631 and Graduate standing. (3). (BS). May not be repeated for credit. MATH 635. Differential Geometry MATH 537 or permission of instructor. Graduate standing. (3). (BS). May not be repeated for credit. WN 2014 | WN 2012 MATH 636. Topics in Differential Geometry MATH 635 and Graduate standing. (3). (BS). May not be repeated for credit. MATH 637. Lie Groups MATH 635 and Graduate standing. (3). (BS). May not be repeated for credit. WN 2014 | WN 2012 MATH 650. Fourier Analysis MATH 596, 602, and Graduate standing. (3). (BS). May not be repeated for credit. WN 2014 MATH 651. Topics in Applied Mathematics I MATH 451, 555 and one other 500-level course in analysis or differential equations. Graduate standing. (3). (BS). May be elected twice for credit. Topics such a celestial mechanics, continuum mechanics, control theory, general relativity, nonlinear waves, optimization, statistical mechanics. MATH 654. Introduction to Fluid Dynamics MATH 450; MATH 555 or 596; and MATH 454 or 556. Graduate standing. (3). (BS). May not be repeated for credit. This is an introduction to fluid dynamics. The syllabus includes a derivation of the governing equations, as well as basic background information on potential flow, boundary layers, vortex dynamics, hydrodynamic stability, and turbulence. This course uses a variety of mathematical techniques including asymptotic methods and numerical simulations. WN 2014 | WN 2013 MATH 656. Partial and Differential Equations I MATH 558, 596 and 597 or permission of instructor. Graduate standing. (3). (BS). May not be repeated for credit. MATH 657. Nonlinear Partial Differential Equations MATH 656. (3). (BS). May not be repeated for credit. WN 2013 | WN 2012 MATH 658. Ordinary Differential Equations A course in differential equations (e.g., MATH 404 or 558). Graduate standing. (3). (BS). May not be repeated for credit. FA 2014 | FA 2012 MATH 660 / IOE 610. Linear Programming II MATH 561 and Graduate standing. (3). (BS). May not be repeated for credit. MATH 663 / IOE 611. Nonlinear Programming MATH,MATH 561. (3). (BS). May not be repeated for credit. FA 2014 | WN 2013 MATH 665. Combinatorial Theory II MATH 664 or equivalent. Graduate standing. (3). (BS). May not be repeated for credit. MATH 669. Topics in Combinatorial Theory MATH 565, 566, or 664; and Graduate standing. (3). (BS). May not be repeated for credit. MATH 671. Analysis of Numerical Methods I MATH. 571, 572, or permission of instructor. Graduate standing. (3). (BS). May not be repeated for credit. MATH 675. Analytic Theory of Numbers MATH 575, 596, and Graduate standing. (3). (BS). May not be repeated for credit. FA 2013 MATH 676. Theory of Algebraic Numbers MATH 575, 594, and Graduate standing. (3). (BS). May not be repeated for credit. FA 2014 | FA 2012 MATH 678. Modular Forms MATH 575, 596, and Graduate standing. (3). (BS). May be repeated for credit. MATH 679. Arithmetic of Elliptic Curves MATH 594 and Graduate standing. (3). (BS). May be repeated for credit. FA 2013 MATH 682. Set Theory MATH 681 or equivalent. (3). (BS). May not be repeated for credit. FA 2014 | WN 2014 MATH 684. Recursion Theory MATH 681 or equivalent. (3). (BS). May not be repeated for credit. WN 2013 MATH 695. Algebraic Topology I MATH 591 or permission of instructor. Graduate standing. (3). (BS). May not be repeated for credit. FA 2014 | FA 2012 MATH 697. Topics in Topology Graduate standing. (2 - 3). (BS). May not be repeated for credit. MATH 700. Directed Reading and Research Consent of instructor required. Graduate standing and permission of instructor. (1 - 3). (INDEPENDENT). May be elected three times for credit. MATH 709. Topics in Modern Analysis I MATH 597 and Graduate standing. (3). (BS). May not be repeated for credit. FA 2014 | FA 2012 MATH 710. Topics in Modern Analysis II MATH 597 and Graduate standing. (3). (BS). May not be repeated for credit. WN 2013 | WN 2012 MATH 711. Advanced Algebra MATH 594 or 612 or permission of instructor. Graduate standing. (3). (BS). May not be repeated for credit. MATH 731. Topics in Algebraic Geometry Graduate standing. (3). (BS). May not be repeated for credit. MATH 732. Topics in Algebraic Geometry II MATH 631 or 731. (3). (BS). May not be repeated for credit. MATH 756. Advanced Topics in Partial Differential Equations MATH 597 and Graduate standing. (3). (BS). May not be repeated for credit. FA 2014 | WN 2014 MATH 775. Topics in Analytic Number Theory MATH 675. (3). (BS). May be repeated for credit. WN 2014 | WN 2012 MATH 776. Topics in Algebraic Number Theory MATH 676 and Graduate standing. (3). (BS). May be repeated for credit. WN 2013 MATH 821. Actuarial Math (1). May be repeated for a maximum of 2 credits. This course has a grading basis of "S" or "U". MATH 990. Dissertation/Precandidate Election for dissertation work by doctoral student not yet admitted as a Candidate. Graduate standing. (1 - 8; 1 - 4 in the half-term). (INDEPENDENT). May be repeated for credit. This course has a grading basis of "S" or "U". MATH 993. Graduate Student Instructor Training Program Graduate standing and appointment as GSI in Mathematics Department. (1). May not be repeated for credit. This course has a grading basis of "S" or "U". MATH 995. Dissertation/Candidate Graduate School authorization for admission as a doctoral Candidate. (Prerequisites enforced at registration.) (8; 4 in the half-term). (INDEPENDENT). May be repeated for credit. This course has a grading basis of "S" or "U".
{"url":"https://webapps.lsa.umich.edu/CrsMaint/Public/CB_PublicBulletin.aspx?crselevel=grad&subject=MATH","timestamp":"2014-04-20T23:36:52Z","content_type":null,"content_length":"437275","record_id":"<urn:uuid:f22d9994-d235-4773-b626-8c7c9bbd2d36>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
Treatise On Analysis Vol-Ii 11 THE SPECTRAL THEORY OF HUBERT 399 (15.11 .7) For N to be self-adjoint (resp. unitary) it is necessary and sufficient that Sp(N) c R (resp. Sp(JV) c:U).IfHis self-adjoint, then inf(SpCff)) = inf (H x \ x), (15.11.7.1) II*""1 sup(Sp(#)) = sup (H-x|x), (15.11.7.2) ||H||= sup|(H-x|x)|. In the first assertion, we have already seen (15.4.12) that the conditions are necessary. To show that they are sufficient, it is enough (by virtue of (1 5.1 1 .5)) to prove them for the Nn\ and this is immediately done, because, when Nn is identified with multiplication by the class of lc in L^(Sp(Nt)9 /4n), the operator N* is identified with multiplication by the class of the function Ł*-->Ł To prove (15.11.7.1), it is enough to show that, for a self-adjoint operator H to be such that (H x \ x) ^ 0 for all x e E (in which case we say that H is positive and we write H ^ 0), it is necessary and sufficient that Sp(/f) c R+ . In view of (15.11.5) and the relation (15.11.7.3) (H-xIx^KH.-xJxJ (with notation analogous to that of (15.11.3)) we are reduced to proving the assertion for simple self-adjoint operators H* . If we identify Hn with multipli- cation by the class of lc in Lc(Sp(Hn), /* ), what we have to prove is that Sp(Hn) <= R+ if and only if JV(0 4UŁ) ^ 0 for every function /Ł 0 in ^rc(Sp(/fn)). Now, if M is the intersection of Sp(Hn) with the complement ] oo, 0[ of R+ in R, then the relation M ^0 would imply /JB(M) > 0 (15.1.14). Since ] oo.0[ is the union of the intervals ] oo, 1/Ť], there would exist m > 0 such that and consequently f C^CO 4*n(C) ^ -a/w < 0, contrary to hypothesis. Finally, the relation (15.11.7.2) follows from (15.11.7.1) and (15.4.14,1), because the spectral radius of H is equal to the larger of |inf (Sp(^T))|, (15.11.8) (i) For each function fe ^rc(Sp(N)), the spectrum of f(N) is contained in f(Sp(N)) (closure in C), and (15.11.8.1) ||/(N)|| g sup |/(C)|.ntegers n such that a
{"url":"http://archive.org/stream/TreatiseOnAnalysisVolII/TXT/00000418.txt","timestamp":"2014-04-18T13:41:26Z","content_type":null,"content_length":"12635","record_id":"<urn:uuid:85c13040-f69b-4fc0-8970-fe3e3dfaa214>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
The Shakespeare Conference: SHK 15.0974 Wednesday, 28 April 2004 From: Gabriel Egan < This e-mail address is being protected from spambots. You need JavaScript enabled to view it Date: Wednesday, 28 Apr 2004 11:25:32 +0100 Subject: 15.0964 Stylometrics Comment: Re: SHK 15.0964 Stylometrics I thank Marcus Dahl for elucidating his understanding of Godel: >According to my non-mathematician's understanding >of Godel he demonstrated that there is no fundamental >axiom for all mathematics thus putting pay to about >30 years of work by Russell and Whitehead et al who >were trying to show just the opposite in (I think) their >'Principia Mathematica' (or some such pseudo Newtonic title). If 'no fundamental axiom for all mathematics' is a shorthand way of saying that no single, consistent set of axioms can prove all the truths of arithmetic-for each set there are always truths of arithmetic it can't prove-then this non-mathematician agrees. But the next bit of Dahl's post is a leap I can't follow: >My comment was then intended to refer to certain negative >outcomes of Stylometry and one conclusion: >(1) Rather than making a positive demonstration of authorship it may be >possible to 'prove' instead that we cannot know who wrote certain texts >attributed to certain authors. i.e. there may be a limit to either the >theoretical or empirical evidence avilable to 'prove' positive authorship. Or, put another way, Godel found a limit to what a single, consistent set of axioms can prove about arithmetic, so it's not surprising that other attempts to prove things have limits too. The unspoken link here is Artificial Intelligence and the popular (but false) idea that Godel had shown that no Turing Machine could do the intelligent things that we humans can do. The special thing we can do is spot when a candidate for arithmetic truth really is true, which Godel showed is something that no machine (because no algorithm) can do in every case. In fact, Godel's theorem doesn't apply even here, since in fact no human can spot arithmetic truths unfallibly either: we undoubtedly use algorithms just as a machine would. Godel dealt in philosophical absolutes, while our minds only have to be clever enough to get ourselves reproduced in order to fulfil the role they're made for. Having made the leap, Dahl has doubts about it: >(2) That such negative apriori or apostiori proofs may be non-the-less >formally acceptable as 'proof'. Thus it may be possible to make some >priori requirements for evidence and statements about that evidence >before surveying that evidence which would formally curtail our ability >to demonstrate positive proof. >(3) That this situation could be beneficial to Attribution Studies >Now: I am to some extent conflating two things here (but I know I am) - >the possibility of mathematical proofs (i.e. in a closed language >system) and the possibility (imagined or real) of empirical proofs (or >sets of statements etc for which falsifiable and repeatable evidence >exists or might exist). Precisely. I wasn't trying to be mean in refusing to make the leap; the reasons against it are strong. I agree that 'good enough' proofs will often do in life, even though that's clearly an oxymoron. >My essential point remains the same - it is sometimes better >to show that 'you don't know' by some form of universally >acceptable proof or evidence than it is to show that 'you >might know' by a set of non-universally accepted proofs >or evidence. Again, I can't see how Godel helps here, since his work is on what can or cannot in principle be proven. The acceptability of stylometic methods is quite a distinct matter. The major barrier to the acceptance of stylometric 'proofs' is ignorance of how they're arrived at. If lots more people knew what the jargon and the statistics meant, the debates would become considerably more productive. Ward E. Y. Elliott and Robert J. Valenza, unlike others, are rather good at explaining what they do. Their "Glass slippers and seven-league boots: C-prompted doubts about ascribing _A Funeral Elegy_ and _A Lover's Complaint_ to Shakespeare" Shakespeare Quarterly 48 (1997) pp. 177-207 is an excellent introduction to the subject. Gabriel Egan S H A K S P E R: The Global Shakespeare Discussion List Hardy M. Cook, This e-mail address is being protected from spambots. You need JavaScript enabled to view it The S H A K S P E R Web Site <http://www.shaksper.net> DISCLAIMER: Although SHAKSPER is a moderated discussion list, the opinions expressed on it are the sole property of the poster, and the editor assumes no responsibility for them. Other Messages In This Thread
{"url":"http://shaksper.net/archive/2004/214-april/20849-stylometrics","timestamp":"2014-04-21T07:14:46Z","content_type":null,"content_length":"25708","record_id":"<urn:uuid:afbe2a7b-5be4-4707-a780-8e97301eee69>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
CROSS TABULATIONS & CHI-SQUARED TESTS A. Overview of chi-squared tests B. Step-by-step instructions for creating pivot tables in Excel C. Step-by-step instructions for doing a chi-squared test in Excel D. Interpreting your Excel output We often need to know whether multiple groups are equally likely to do something, buy something, etc...Moreover, if two or more groups are not equally likely to do something, which group or population is more likely to do it? So, like in all statistics, we take a sample from each group and try to see if their outcomes are similar. Of course, simply because the results from each sample aren't exactly equal doesn't mean that this observed difference is "statistically significant." The differences we observe in our sampling could simply be the result of random sampling error. For example, we might just randomly get a lot of men in our sample who really like to shop, even if, in reality, women are more likely to shop. So, regardless of the sample results we obtain, we have to test to see if the differences we see could simply be due to random sampling error or if they are really indicative of a true difference between our two groups. B. HOW TO CREATE A PIVOT TABLE IN EXCEL Cross tabulations are an easy, convenient way to summarize data, especially when you have a lot of categorical responses (Yes/No, Coke/Pepsi, etc..). Excel can generate these kind of tables by creating what is called a Pivot Table. Take an example from Dr. Paul's Marketing Analysis course several years ago. One group of students surveyed 57 women, between the ages of 18 to 25. The respondents were divided into 2 age groups, 18-21(=1) and 22-25(=2). The following question was asked: "On a scale from 1 to 5 (5="very important", 1="not important at all"), How important is it to you that you buy name brand clothes?" A portion of their Excel spreadsheet appears below: Suppose they want to construct a table that shows the relationship between the age group and the relative importance of name brands. Step 1: A cross tabulation can be easily constructed in Excel by making a Pivot Table. Go to Data, Pivot Table and hit next. Excel then asks you to input the range. Make sure that the entire range of the data is selected. In this example, the data runs from row 2 to 58, and is found in columns A, B and C. Hit next. Step 2: Now, you are ready to set up the table. The labels of your series are in boxes to the right. You must always put at least one label as a heading (row or column). In addition, you must always put in the data field "what you want Excel to measure." Step 3: You need to put the "how important" box as a heading field also. This is because you are going to be asking Excel to count the number of women in each age group who answered "1", "2", "3", "4" and "5". Be sure to double-click on your data box to make certain that you have asked Excel to "count" the number of women answering each response: Follow the prompts and you table is placed on another page: At first glance, you can see that in your sample, 12 out of 38 (31.5%) of women aged 18-21 do not consider brand name to be important at all, whereas only 2 of 19 (10.5%) of the older women made that claim. Clearly, within this sample of 57 women, the younger ones are less concerned with the brand name. But (again, a HUGE but) can you infer that in the entire population women aged 22-25 on average are more concerned with brand names than women aged 18-21? Again, to make such an inference requires a test. In this case, the proper test is called a Chi-Squared test. C. HOW TO PERFORM A CHI-SQUARED TEST IN EXCEL Step 1: State your null hypothesis. The hypothesis is that "the two age groups are equally likely to respond "1", "2", "3", "4" and "5" to the question". In other words, the probability that women in the 18-21 age group rates the importance of name brands as a "1", "2", etc...is equal to the probability that women in the 22-25 age group rate name brands equally important: H[0]: p[ages 18-21] = p[ages 22-25] Notice that this data is categorical (not numeric). As a result, it would not be appropriate to do a "t-test" to test this hypothesis. In this case, the proper test is what we call a Chi-squared Step 2: Choose a critical level for the test and find the critical value. The sampling distribution of a statistic which compares the "expected" frequency of a sample with the actual, or "observed" frequency is called Chi-Squared. For a sample this statistic is distributed like a Chi-square with (rows-1)*(columns-1) df (here, df = (2-1)*(5-1)=4). See the Chi-Squared distribution for critical values. At the 10% level (90% confidence level) the critical Chi-Squared is 7.77. Step 3: Calculate the test statistic. The key to calculating the chi-squared statistic and testing this hypothesis is to compare the actual, or "observed" values in each or the table's cells with the "expected" frequency of response that would have occurred if the hypothesis were true. The test statistic is computed by the formula: where O = observed frequency in the sample in this class and E = expected frequency in the sample in this class. Remember, you can do a Chi-squared test with any dimension of table. You can have more than two groups and you can have any number of survey categories that you are comparing. It does change the df, though. The expected frequency, E, is found by multiplying the relative frequency of this class in the hypothesized population (57) by the sample size. For example, the expected relative frequency of women who say that name brands are not important at all (1) is 14/57 (# who answered "1" divided by the total # surveyed). This gives you the number in that class in the sample if the relative frequency distribution across the classes in the sample exactly matches the distribution in the population. Notice that Chi-square is always greater than 0 and equals 0 only if the observed is equal to the expected in each class. Look at the equation and make sure that you see that a larger value of goes with samples with large differences between the observed and expected frequencies. For example, the expected frequencies for each of the 5 categories, 2 age groups are given as follows (the first number in the subscript represents the age group (1=18-21, 2=22-25) and the second subscript is how they rated name brands on the 1-5 scale): E[11] = 14/57*38 = 9.33 E[12] = 17/57*38 = 11.33 E[13] = 9/57*38 = 6 E[14] = 15/57*38 = 10 E[15] = 2/57*38 = 1.33 E[21] = 14/57*19 = 4.67 E[22 ]= 17/57*19 = 5.67 E[23] = 9/57*19 = 3 E[24] = 15/57*19 = 5 E[25] = 2/57*19 = 0.67 Doing this in the Excel spreadsheet give the following: Notice that the table of "expected values" has the exact same dimensions as the original (observed) table. Now that we have calculated the expected values, we are ready to compare them to the observed values. Notice that, if the two age groups were equally likely to rate the importance of name brand as "1" (highly important), we should have seen 9.33 women in the younger age group mark "1" and 4.67 of women in the older age group mark "1". but, in our sample 12 (not 9.33) younger women chose "1". Only 2 (not 4.67) of the older group chose "1". In our sample, the younger women were more likely to rate name brands as very important. However, is this result due to a true underlying difference in the two populations, or was it simply a result of random sampling error? To answer that question, we have to test it. We need to calculate the Chi-squared test statistic using the formula below and compare it to the critical chi-squared number we get from the chi-squared table. Again, the formula is: The Chi-Squared Statistic = (12-9.33)^2/9.33 + (11-11.33)^2/11.33 + (6-6)^2/6 + (10-9)^2/10 + (0-1.33)^2/1.33 + (2-4.67)^2/4.67 + (6-5.67)^2/5.67 + (3-3)^2/3 + (6-5)^2/5 + (2-0.67)^2/0.67 = 6.59 As we noted above, at the 10% level (90% confidence level) the critical Chi-Squared is 7.77 (see the Chi-Squared distribution). So, the hypothesis that the two groups are equally likely to respond that name brands are important, very important, etc... is not rejected at the 10% level. In this case, we cannot claim with much confidence that the differences we see in this sample are generalizable to the population at large (at least on this one question). So, why did we observe these differences in our samples there really aren't any? All this test result tells us is that we are not at least 90% confident that there is a difference between these two age groups in their preferences towards purchasing name brand clothes. In other words, we don't have strong enough evidence that the sample differences are due to anything but random sampling error. We should not, in this case, make the claim that the younger women care more about name brands. In the future, a larger ample size might allow us to make a stronger claim. Elon University Last Modified: 08/07/03 Campus Box 2700 Copyright © Elon University Elon, NC 27244 (800) 334-8448 E-mail: web@elon.edu
{"url":"http://org.elon.edu/econ/sac/chisquare.htm","timestamp":"2014-04-18T10:34:11Z","content_type":null,"content_length":"18578","record_id":"<urn:uuid:d99332d3-66f1-4c52-af47-a188da25ebd1>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Induced metric on the brane Thanks for the reply, I checked on Poisson's book and also Gourgoulhon's review but I couldn't found the reason. I finally understood my mistake, [itex]h_{\mu\nu}[/itex] is not the induced metric but only the projection tensor. For to have the induced metric we have to look to the tangential components of the tensor and not to [itex]h_{00}[/itex]. In fact the 3 vectors orthogonal to the normal vector and which define a basis on the hypersurface are [itex]V1^\mu=(1,\dot a,0,0)[/itex] so it is perfectly fine to look for [itex]h_{22}[/itex] and [itex]h_{33}[/itex]. But the last component is not [itex]h_{00}=h_{tt}[/itex] but [itex]h_{V1 V1}[/itex] So now we have [itex]\partial_{V1}=\partial_t+\dot a \partial_\rho[/itex] which implies that [itex]h_{V1V1}=h_{00}+2\dot a h_{01}+\dot a^2 h_{11}[/itex] which gives the correct result [itex]h_{V1V1}=-1+\dot a^2[/itex]. So it is a modification of the coordinates ...
{"url":"http://www.physicsforums.com/showthread.php?s=0376525dccc10b7629aec653f171944d&p=3807518","timestamp":"2014-04-24T12:01:41Z","content_type":null,"content_length":"29721","record_id":"<urn:uuid:0592bb2b-7614-4e86-ae6f-01876b04cedd>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
Possible Outcomes January 11th 2006, 09:15 PM #1 Mary Cluck Possible Outcomes My problem is-- 1. Draw a tree diagram to show all possible outcomes if you draw a ball from box 1 and a ball from box 2. Box 1 shows A R T. Box 2 shows 1 2 3 4. ? 2. How many outcomes are there if you draw a ball from boxes 1 and 2? Any help appreciated--- Mary Cluck My problem is-- 1. Draw a tree diagram to show all possible outcomes if you draw a ball from box 1 and a ball from box 2. Box 1 shows A R T. Box 2 shows 1 2 3 4. ? I'm not sure what form of tree diagram you are expected to produce but the attached figure shows one type of tree diagram for this 2. How many outcomes are there if you draw a ball from boxes 1 and 2? Any help appreciated--- The number of outcomes is equal to the number of leaf nodes on the diagram (in this case nodes on the right), which in this case is 12. January 11th 2006, 11:40 PM #2 Grand Panjandrum Nov 2005
{"url":"http://mathhelpforum.com/algebra/1599-possible-outcomes.html","timestamp":"2014-04-18T14:02:11Z","content_type":null,"content_length":"32180","record_id":"<urn:uuid:491c35e8-2d97-4a47-a389-7703067e8087>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
A Lunar Occultation On the evening of Thursday, October 2nd, 2003, the Moon will pass in front of the star tau Sgr. This event can provide dramatic evidence concerning the angular sizes of stars, and yield information useful in estimating the Moon's distance. An occultation occurs when a nearby celestial object passes in front of a more distant one and completely hides it from view. In a lunar occultation, the Moon passes in front of a star or planet. Lunar occultations of faint stars occur all the time, but they're hard to see because the Moon's light swamps dim objects. On the evening of October 2nd, 2003 at 18:57 (6:57 pm), observers in Hawaii will see the Moon occult tau Sgr, a 3rd magnitude star in the constellation of Sagittarius. This is the best and most easily observed occultation visible from Hawaii this semester. Fig. 1 shows simulated images of the Moon and stars in Sagittarius on 10/02/03, 18:40. At that time, the Moon will be just past first quarter, but already dazzling enough to obscure all but the brightest stars. In these images, tau Sgr appears just to the left of the Moon; it may not be visible to the unaided eye, but you should have no trouble seeing it with binoculars. Fig. 1. The Moon and stars in Sagittarius on 10/02/03, 18:40, simulated using Celestia. tau Sgr is the star just to the left of the Moon. Left: wide-field view, showing the `teapot'. Right: view through 10×50 binoculars. To watch this occultation, you need an observing site with a good view toward the south. You will need binoculars to see tau Sgr with the Moon so close in the sky. Also bring a watch, set as accurately as possible, and the chart included with this handout. ┃ IMPORTANT: If you are going to be on another island on Thursday night, please let me know. Predicted times differ by a few minutes on the neighbor islands. ┃ If possible, begin looking at about 18:30. With binoculars, you may already be able to see tau Sgr to the west of the Moon; the angular separation between them will be about half of the Moon's angular diameter, which is 0.5°. (Note: the angular diameter of an object is just the angular separation between opposite sides of the object; thus the angle from your eye to the left and right sides of the Moon is 0.5°.) As the Moon moves eastward in its orbit, its dark side will advance toward tau Sgr and finally cover up the star at 18:57 (6:57 pm). When this happens, the star's light will be cut off in a fraction of a second. Your main goal is to be looking at the star at that exact moment! There are two different measurements we want you to make: 1. At about 18:50, draw the Moon on the chart included with this handout. Try to show the Moon's position with respect to tau Sgr and any other stars you can see as accurately as possible. The Moon's angular diameter of 0.5° corresponds to 1 cm on the scale of this chart, so draw the globe of the Moon as a circle 1 cm in diameter, and shade the dark side so that the direction of sunlight is clear. 2. When you see tau Sgr disappear behind the Moon, start counting seconds, and look at your watch as soon as you are sure the star is really gone. Subtract the number of seconds you counted from the reading on your watch to get an accurate time for the star's disappearance. Please be sure to record your observing location along with the times you measure. Observers in different places will see the star disappear at slightly different times as the Moon's shadow sweeps across the island. If enough people can make accurate timings, we can estimate the speed of the Moon's shadow. To set your watch accurately, call 983-3211. The star will reappear from behind the Moon roughly one hour later, at 19:56 (7:56 pm). It will probably be harder to tell the exact instant of reappearance, since the star emerges on the bright side of the Moon. Watching the star reappear is optional. Just how fast is tau Sgr's light cut off by the edge of the Moon? If the star was large enough to appear as a disk, and not just a point of light, you'd see it fade out gradually as the Moon covered it up. In fact, the star will wink out extremely fast - you will definitely not notice it fading out gradually. We can use this fact to make a very rough estimate of the distance to tau Sgr. Let's say the star takes less than 0.1 sec to vanish (this is about the shortest time we can easily perceive). Let's also say that tau Sgr has the same diameter as our Sun, which is 1.4×10^6 km. These are the only assumptions used in this estimate; neither is very accurate, but they are OK for a very rough answer. In particular, they will serve to find the smallest distance the star could possibly have. As seen from Earth, the Moon moves with respect to the stars at an average rate of 0.00015&deg/sec (360&deg in 27.3 days); in other words, each second it's position changes by 0.00015&deg. So, if tau Sgr takes less than 0.1 sec to fade out, it must have an angular diameter which is less than one-tenth of this angle, or <0.000015°. In other words, 0.000015° is an upper limit for the star's angular diameter - we don't know the true value, but we do know that it is less than 0.000015°. (This is about 20 times smaller than anything we can see with our telescopes; in fact, even the most powerful telescopes have trouble seeing detail this small!) Now if we know how big the star really is, and we know how big it appears to be, we should be able to work out its distance. The equation required is the same one used for parallax distances: Here we use our guess for tau Sgr's actual diameter (1.4×10^6 km) for the baseline b, and our upper limit for the star's angular diameter (0.000015°) as the angle D is about 5.3×10^12 km, or 0.55 l.y. (light-years); remember that this is the smallest possible distance, and the actual distance can be much greater. In fact, the actual distance to tau Sgr is about 120 l.y., so our simple estimate yields a distance about 220 times too small. Still, this is at least a rough figure for the distance to a star; it's pretty good for an estimate made using just a pair of binoculars! From the fact that our calculation drastically underestimated the distance by such a large factor, it follows that tau Sgr actually disappears in much less than 0.1 sec. We can calculate how long it really takes by putting tau Sgr's correct distance D = 1.1×10^15 km and diameter b = 1.4×10^7 km into the above equation, and solving for tau Sgr's angular diameter, giant star!) We get much too fast for human perception, but electronic detectors can measure such rapid disappearances, and astronomers have used occultations to measure the angular diameters of stars. • International Occultation Timing Association Provides information on upcoming occultations by the Moon, planets, and asteroids. • Chart for Moon: GIF file or Postscript. The GIF file should be printed at 100 dpi to get a scale of 2 cm per degree. • Which side of Oahu will see the shadow first, the east side or the west side? • When the Moon is waxing (before it's full), the side of the Moon which moves forward across the stars is dark. What about when the Moon is waning (after it's full); which side leads, the dark side or the bright side? Would it be as easy to see an occultation if the bright side came first? • Why does our estimate yield the smallest distance tau Sgr could possibly have? (Hint: we've underestimated the actual diameter of this star, and overestimated the time it takes to disappear behind the Moon.) • Why can we use the parallax formula for our estimate of the distance to tau Sgr? (Hint: parallax involves constructing a very skinny triangle. Which way does the triangle point in this Make the observations described above, and write a report on your work. This report should include, in order, 1. a general motivation for the observations, 2. a description of the observing site and equipment you used, 3. a summary of your observations, and 4. the conclusions you have reached. In more detail, here are several things you should be sure to do in your lab report: □ Explain what an occultation is, using your own words. □ State the location of your observing site; you don't have to give an actual street address, but provide enough information to pin down your position to the nearest half-mile or so. □ Report the time when tau Sgr disappeared, and estimate the accuracy of your measurement. □ Include the attached chart with your measurement of the Moon's position. This report is due in class on October 7. Joshua E. Barnes (barnes@ifa.hawaii.edu) Last modified: September 30, 2003
{"url":"http://www.ifa.hawaii.edu/~barnes/ASTR110L_F03/lunaroccultation.html","timestamp":"2014-04-18T18:30:24Z","content_type":null,"content_length":"11627","record_id":"<urn:uuid:b25d856f-6cb2-4545-aa50-1c668be06351>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
use the given information to complete the triangle use the given information to complete the triangle: A = 60 degrees a = 9 c = 10 A = 60 degrees a = 9 c = 10 We can use the Law of Sines to solve for angle C, as we know that sin(A)/a = sin(C)/c. Plug the known values for A, a, and c in, and solve. From there, we will have two out of the three angles, and we can solve for angle B as we know that the sum of all three angles in a triangle is 180 degrees. It should be fairly straightforward to solve for side b, as we can use the Law of Sines again.
{"url":"http://mathhelpforum.com/pre-calculus/138121-use-given-information-complete-triangle.html","timestamp":"2014-04-18T14:25:28Z","content_type":null,"content_length":"31416","record_id":"<urn:uuid:f563e7a1-e79c-4376-8a8d-73bbf3058507>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
Home Page of Ludmil Zikatanov Undergraduate research (Summer 2013): In July and August 2013 I and John C. Urschel worked with four Penn State undergraduate students on research in graph drawing and monotone schemes for convection diffusion equations as part of an REU training. Descriptions of the projects and reports on the research are found at sites.psu.edu/cmus2013 Graduate students: A list of of Ph.D. students that I have advised or co-advised is found on the Mathematics Genealogy Project. Research support: The research support by the National Science Foundation (NSF), US Department of Energy (DoE), and the Penn State's Center for Computational Mathematics and Applications (CCMA) is gratefully acknowledged. List of past and current NSF awards in which I have been (or I am) a CoPI or the PI is found here. Contact info: Ludmil Zikatanov, Department of Mathematics, 310 McAllister building, Penn State, University Park, PA, 16802 Phone: (814)-863-9682, E-mail: <ludmil AT psu DOT edu>
{"url":"http://www.personal.psu.edu/ltz1/","timestamp":"2014-04-19T09:25:42Z","content_type":null,"content_length":"6235","record_id":"<urn:uuid:ae1c5452-c204-47d3-8c4c-00fd68f3b42b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Fusion trees can be implemented with AC instructions only - Journal of Computer and System Sciences , 2001 "... We obtain matching upper and lower bounds for the amount of time to find the predecessor of a given element among the elements of a fixed compactly stored set. Our algorithms are for the unit-cost word RAM with multiplication and are extended to give dynamic algorithms. The lower bounds are proved ..." Cited by 55 (0 self) Add to MetaCart We obtain matching upper and lower bounds for the amount of time to find the predecessor of a given element among the elements of a fixed compactly stored set. Our algorithms are for the unit-cost word RAM with multiplication and are extended to give dynamic algorithms. The lower bounds are proved for a large class of problems, including both static and dynamic predecessor problems, in a much stronger communication game model, but they apply to the cell probe and RAM models. - J. Assoc. Comput. Mach , 1997 "... The single source shortest paths problem (SSSP) is one of the classic problems in algorithmic graph theory: given a weighted graph G with a source vertex s, find the shortest path from s to all other vertices in the graph. Since 1959 all theoretical developments in SSSP have been based on Dijkstra' ..." Cited by 49 (3 self) Add to MetaCart The single source shortest paths problem (SSSP) is one of the classic problems in algorithmic graph theory: given a weighted graph G with a source vertex s, find the shortest path from s to all other vertices in the graph. Since 1959 all theoretical developments in SSSP have been based on Dijkstra's algorithm, visiting the vertices in order of increasing distance from s. Thus, any implementation of Dijkstra 's algorithm sorts the vertices according to their distances from s. However, we do not know how to sort in linear time. Here, a deterministic linear time and linear space algorithm is presented for the undirected single source shortest paths problem with integer weights. The algorithm avoids the sorting bottle-neck by building a hierechical bucketing structure, identifying vertex pairs that may be visited in any order. 1 Introduction Let G = (V; E), jV j = n, jEj = m, be an undirected connected graph with an integer edge weight function ` : E ! N and a distinguished source - In DIMACS’96 implementation challenge , 1996 "... Introduction Recently there have been several theoretical improvements in the area of sorting, priority queues, and searching [1, 2, 8, 9]. All these improvements use indirect addressing to surpass the comparison-based lower bounds. Inspired by these advances, and by the fact that algorithms based ..." Cited by 1 (0 self) Add to MetaCart Introduction Recently there have been several theoretical improvements in the area of sorting, priority queues, and searching [1, 2, 8, 9]. All these improvements use indirect addressing to surpass the comparison-based lower bounds. Inspired by these advances, and by the fact that algorithms based on indirect addressing have proven to be efficient in many practical applications, we have implemented a trie-based priority queue as part of the DIMACS implementation challenge. Having Dijkstra's single source shortest path algorithm in mind, we decided to restrict our attention to monotone priority queues, as defined in [9]. A monotone priority queue, is a priority queue where the minimum is non-decreasing -- the minimum of an empty monotone priority queue is defined to be 0. The monotonicity condition is not a problem for greedy algorithms such as Dijkstra's single source shortest paths algorithm. Also, monotonicity is satisfied in event-simulations. According to t
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=189226","timestamp":"2014-04-18T17:18:48Z","content_type":null,"content_length":"18430","record_id":"<urn:uuid:c2d0e930-13df-4754-b39a-cadc8642acf2>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry Help Videos | MindBites Included: (Click to preview) About this Series • Lessons: 19 • Total Time: 4h 51m • Use: Watch Online & Download • Access Period: Unlimited • Created At: 07/22/2010 • Last Updated At: 04/09/2014 Do you need to learn Geometry? Do you wish your Geometry teacher could slow down and you could hear that explanation again? Do you need to improve your standardized test scores? Get these videos to learn Geometry basics. Go over and over it until you get it. The Geometry lessons in this Geometry Made Easy Series will help you understand and learn Geometry. If you are a beginning Geometry student...or just need a basic review of Geometry, you will find the help you need. If you are stressing over Geometry, these will help you. This series includes 4 hours and 51 minutes of quality instruction spaced out over 19 lessons. Definitions, terms, drawings, and examples are very clear. You are introduced to angles in a simple manner. Algebra problems are included to help you work with angles and problem solving. Most students like this part best! The series does not focus on proofs but 'includes' a few proofs to help you see how proofs work. Most students like this part least! Sorry. Instead of reaching for a Coke and popcorn, grab a notebook and pencil when you watch. Really. And don't just watch. Work right along with me. Get your feet on the ground from the beginning. You can "watch online" OR "watch online and download" them to your computer. At the time you buy the series you get to choose. There are no time restrictions. They are yours forever. Please share them freely with your friends if you would like to. That would be a good thing! One of the keys to learning is repetition. Watch a concept over and over again. Stop, rewind, watch again. It will help. Sandra Wilkes National Board Certified Teacher Teacher of the Year About this Author 30 lessons Welcome! I'm so glad you are here! Math help is here for you when you need it. I believe that using these Algebra and Geometry videos will help you understand the basics of Algebra and Geometry. Some students try very hard and still struggle to pass math. They start off strong but things quickly begin to fall apart. That happens as soon as the student becomes lost. Teenagers who find themselves in this position often let it "get away from them" before they seek help. Because Math is always a class of stepping stones, it rarely gets better without help. I urge you to seek help from your child's teacher first. Always. These Algebra and Geometry videos can help too. You can watch... Lessons Included won't load ~ dthibodeaux I am having trouble viewing this one.. It plays up till about 14 mins then it stops. The time keeps going like it is playing but the video is stopped. ~ milly91 I seriously can't do proofs. But this is good and I am getting better at it. I wish you lived next door to me! Below are the descriptions for each of the lessons included in the series: • Point, Line, Plane Well, this is where you start in Geometry. You're going to learn a lot more about a point, a line, and a plane than you can imagine. Actually it is impossible to "define" these terms so we will just describe them and make sure you know all about them. If I can find a "Pointalism" picture from one of my students, I will post it here...just for fun. Seriously, all geometric figures are made up of points. We connect points to form shapes of all kinds. So without points, lines, and planes Geometry would not exist. Stick with me and you'll be off to a good start! • Postulates and Theorems Even the words sound intimidating! What are Postulates and Theorems anyway? They've been around for a long time and contain all the "rules" of Geometry. This video explains the difference between Postulates and Theorems in a way that is not as strange as they sound. Once you know your Postulates, Theorems, and definitions, there is nothing in Geometry that you cannot do. Isn't that a good feeling? • Midpoint Well, now that we've gotten started in Geometry we'll just forge ahead and talk more about points, lines, and planes. What happens when two lines intersect? What does intersect mean? What is a midpoint of a line segment? If two line segments are congruent, what does that mean about the length of each? This might be more detail that you really want to know but, just stay with me. In the long run you will be so glad you are comfortable with all these details about distance, length, and Geometry is all about logical thinking. It is about moving from one fact logically to another fact and being able to back them up. You can back up your geometry facts with definitions, Postulates, and Theorems...but you can't make it up and you can't go by "how things look". If you agree not to just "go by how things look", that means that we understand everything must have a valid reason. • Introduction To Angles You hear people talking about angles all the time but what, exactly, are they? I know it sounds like a simple questions, and it is really, but looking closely at the definition will start you off with a firm footing in Geometry. Math help isn't really very helpful if it skips too much material and starts you off in the dark! This video will define "angle" for you and show you exactly what an angle is and what makes an angle. Then it will talk with you about different kinds of angles so when you come across these terms, you will know what they are talking about. For example, if I go on and on about adjacent angles, and you have no idea what that means...it really doesn't matter what else I say because you will be lost. That's what happens in many classrooms. Always ask when there is something you don't understand. However, there is only so much time in a "regular" classoom so it's impossible for your teacher to stay on the basics for very long. Me? I can stay on them as long as I want to. And you can replay me as many times as it takes. • More About Angles When you listen to your teacher in Geometry,she, or he, will mention angles every day about a hundred times! You see, then, how important they are. If you don't want to get lost, stick with this video until you really understand all of it. The Angle Addition Postulates is explained on this math help video. There are a lot of times you need to add angles together. It might seem kind of obvious but it is necessary to learn how to add them, how to name them, and what allows you to add them. That's where the Angle Addition Postulate comes in. Your book might call it something else similar. Some pairs of angles are called complementary. Some pairs are called supplementary. And some pairs are called Linear Pairs. Whew! That's a lot of different pairs of angles. Not hard though. You'll see. Watch the video whenever you need to review these terms and again right before the test. • Bisectors and Vertical Angles As you study Geometry you will become best friends with bisectors and vertical angles! This video makes sure you know what they are and how to recognize them. I will first define them for you and then show you many examples. Once you see what they are and how they work, geometry math problems will be so much easier for you to do. Some will seem "flat out" easy! Algebra examples are included in this geometry video lesson. They are great practice. You will surely have these type problems in your class at school. Watch me work them, then work them on your own until you totally "get it". This video introduces you to vertical angles. It makes sure you know what vertical angles are. It teaches you that vertical angles are formed whenever two lines cross. Bisectors of segments and angles are found throughout Geometry. This lesson helps you know what both kinds of bisectors are....for sure, before you start using them in theorems and postulates. If you need a good foundational video to begin your study of geometry, this is one that is essential. • Bisectors and Vertical Angles Before you can start working problems with bisectors and vertical angles you have to know what they are. My teaching methods help you learn by seeing and hearing the information. I use a lot of repetition and drawings to help you learn. This video introduces you to vertical angles. It makes sure you know what vertical angles are. It teaches you that vertical angles are formed when two lines cross. Bisectors of segments and angles are found throughout Geometry. This lesson helps you know what a bisector is....for sure, before you start using them in theorems and postulates. If you need a good foundational video to begin your study of geometry, this one is essential. • How To Prove Vertical Angles Are Congruent Vertical angles! We love them! They are so easy to use and provide a real comfort level when beginning Geometry. This proof shows you "why" vertical angles are congruent. It proves that they are 'always' congruent. If you need to see how and why vertical angles are congruent, watch this. Once this theorem is proven, it can be used anytime in the future when you have vertical angles. First of all make sure you recognize vertical angles so you will know when you have them. Then be on the look out for them! Vertical angles are formed whenever two lines cross (intersect). You will find them hidden in many drawings that contain more than two lines. That is why you want to look for them. Think of it as a puzzle and keep your eyes open for vertical angles. After this video you will understand why they are always congruent and you will understand how to work with them. Whenever you get a problem with vertical angles, go ahead and mark them congruent as soon as you sketch the drawing. Don't draw too small. The more you can actually see, the better and quicker solutions will begin to come together. Enjoy working with vertical angles. They usually lead you to more information to help you solve a problem. • If-Then Statements Geometry is just full of If-Then Statements! Usually they are called Conditional Statements. We all use if-then statements everyday as we go about our day. This video will help you see what conditional statements are actually saying to you...in Geometry. Converse statements, Inverse statements, and Contrapositive statements all start with a Conditional statement. What ARE all those kinds of statements? This video will tell you. That way, when you see it on your SAT, you will know exactly what it is talking about. Gotta speak the language to have any shot at success. We will see what happens to a statement when we swith the if-then parts, or if we make them both negative. You probably know "hypothesis" and "conclusion" from Science. That will help too. • Parallel Lines And Angles Parallel lines are everywhere you look in Geometry. They are found in squares, rectangles, and parallelograms. The information you gain through parallel lines is mostly about angles so this video will focus on angles formed by parallels. Students like this section because there are patterns which are easy to see. Once you see the patterns and learn the names of the angles you can work almost any problem with parallel lines. Just remember. Whenever you see lines that are parallel, it is all about the angles formed by the transversal. Look for them! • Examples With Parallel Lines and Triangles Parallel lines form many angles when cut by another line called a transversal. You can imagine how many angles are formed when you have more than one transversal! All those angles have names, thank goodness, so we can keep them straight! Such pairs are the alternate interior angles, corresponding angles, and same side interior angles. And what about the converses of these postulates? In this video you will learn all these angles and more. You will see how they show up in triangle problems. Once you learn all the names and how they work, you'll be looking everywhere for parallel lines because you'll find them easy problems to work. Geometry will, sooner or later, start fitting together like a big jigsaw puzzle. It is not very enjoyable until that happens. As long as it makes no sense it will be a dreaded subject. But once you begin to see how it all fits together, you will begin to like it...a lot! Watch, work the problems, watch again as many times as you need. Master this material and you'll be on your way to smoother sailing! • Parallel And Perpendicular Lines - Proofs This video proves two theorems. 1. If two lines are perpendicular to the same line, then they are parallel. 2. If two lines are parallel to the same line, then they are parallel. Follow along with me as I prove these two concepts. We certainly don't have time to prove every theorem but these two are proven here. They are not necessarily the most important but they may help you learn how proofs work. After you work through them with me, turn off the screen and try it by yourself. Copy the 'given' and the drawing before you turn off the screen. Even if the proof itself is confusing, and even if you can't prove it by yourself, it's okay for now. The important thing right now is to be able to understand how each step flows logically from the previous step. "How To Write A Proof" will come later. :) • Kinds Of Triangles Triangles are one of the most familiar shapes in the world. As simple as a triangle is, there are several different kinds of triangles and they have different properties. We will also look at the interior, the exterior, as well as the triangle itself. In this video you will learn all about each type of triangle. You will learn about their angles and their sides and the relationships between sides and angles. Make sure you commit all definitions to memory so when you need to write a proof involving a triangle, you will have all the tools you need. Enjoy this lesson. It's very straightforward and not difficult at all. Just get all the details and terminology! Success is in the details! • 180 Degrees In A Triangle In this video you will learn two very important theorems about triangles. Not only will you learn the theorems but you will see me prove them. As I walk you through the proofs you will become more comfortable with writing your own proofs in the future. Right now just focus on learning these two concepts. Ever wondered why there were 180 degrees in every triangle? No matter the size, type, or location of a triangle, the angles always add up to 180 degrees. This video will show you why. This is a theorem you must, must, must learn. Or else....as they say! Another theorem on this video is the Exterior Angle Theorem. I am proving this theorem for you by writing a formal proof. Watch me, listen carefully, follow the steps. It is intended only to help you get more comfortable with proofs. The more comfortable you get, the less you will freak out when you have to write a proof of your own. It's coming you know. Better to get all these basics out of the way first! Work hard! • 3rd Angle Of A Triangle Proof It's fairly obvious that if two angles of one triangle are congruent to two angles of another triangle, then the third angles are congruent. However, how to write the proof may not be so obvious. For that matter, how to write ANY proof may not be so obvious! That is probably the understatement of the world! Most students find proofs to be very difficult. This video "illustrates" a proof but doesn't really teach you how to write a proof. That will need a video all it's own! Coming soon. For now, the most important part of this video is to get the concept of the 3rd angle...and, thank goodness, that IS very easy to get. Learn this theorem. Do not worry if you cannot write the proof as I did. Try to follow me though. At this stage, being able to just follow my proof is enough. The concept is important to know. The proof...just follow me for now. • Properties Used In Proofs Many students have trouble writing proofs by themselves. One essential step when writing Geometry proofs is knowing the definitions, properties, and theorems. This video explains the most used Whether in Geometry or Algebra, for example, the Addition Property of Equality is used over and over. Very simply put, you use this property whenever you add the same value to both sides of an equaition. In this video you will see examples of this property plus the Subtraction Property of Equality, Multiplication Property of Equality, and Division Property of Equality. Other properties that occur frequently are the Reflexive Property, Symmetric Property, and Transitive Property. Watch the video. Refer back to it as often as you need to until these properties are as easy as pie and as clear as day! • Using Algebra To Find Angle Measures Did you like Algebra? I hope so, because, many times in Geometry you are asked to solve for x. So if you thought you could get away from Algebra, oops, you were wrong! If you liked Algebra you are probably cheering to get back where you are comfortable. You must also know your geometry definitions to know how to set up your algebra equation. Solving the equation is almost always easy...once it is set up. The hard part comes when you are trying to set it up! The more definitions (in Geometry) that you really understand, the easier it is to set up your equation. Watch me and you'll see what I mean. In this video I will work problems with angle bisectors that form angles with equal measures. Also we'll work with linear pairs and supplementary angles. • Commutative, Associative, Distributive, Identity This video explains the Commutative, Associative, and Distributive Properties. They are not very fun or interesting,to tell you the truth, but you need to know them. When I first learned them I thought they were pointless. I didn't understand why in the world they were in every math course I took. You may be like that too. Eventually, as you take more and more math classes, you will see that they never go away...and you'll begin to understand why you need these properties. This video lesson is more like eating your veggies than eating dessert! Just telling you the truth. • Algebraic Examples In Geometry Sometimes you just need extra practice in solving geometry problems using algebra. Most students like this part of Geometry more than proofs and definitons. Do you? You have four problems to solve on this video. Work them along with me. One is vertical angles. One uses angle bisector. A third problem uses linear pairs and the last problem uses supplementary First of all, solve for x. Then find the measure of the angle. It is so easy to double check these and know for sure if you are right. I think you'll like these. As I say in the videa "Math is not a spectator sport." You can't just watch and just listen if you want to really learn and remember what you learn. You have to do the problems too. Work them with me, then work them by yourself. Supplementary Files: • Once you purchase this series you will have access to these files: Buy Now and Start Learning Also from RealMath: Link to this page Copy and paste the following snippet:
{"url":"http://www.mindbites.com/series/888-geometry-help","timestamp":"2014-04-16T08:00:39Z","content_type":null,"content_length":"57178","record_id":"<urn:uuid:26a6ba59-3ed2-4dc8-b6b0-4dda01713f10>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
March 10th, 2010, 11:10 AM I am trying to implement a bottom up heap construction. I seem to have the recursion working but im not sure how i am supposed to combine the trees at each stage. My heap is based on the arraylist complete binary tree. For example, the image below shows what I am trying to achieve initially. From my recursion the first number is get is 16 then 15 then 25. Which I am assuming is correct. However 16 is a new binary tree 15 is a new binary tree and then 25 will be a new binary tree. In this case 16 will be the left subtree and 15 will be the right subtree but am I right in thinking rather than the new binary just created pointing to the other binary trees it should create a new binary tree with 25 as the root and 16 and 15 as the children? Basically the question is how do I at each stage in the recursion create a new binary tree storing the elements from the subtrees? March 10th, 2010, 11:31 AM Re: Heaps There should already be elements in the first three spots. To build the heap bottom-up all you need to do is move elements around. This is done by comparing the parent with both children (if there are two) and moving the smallest of the 3 into the parent position. You can run iterate this breadth-depth wise to build the whole heap in O(n) time (no recursion required) by running this algorithm backwards from the end of the array. this looks like: Code : The whole bottom row don't need to be moved because there are no children. Look next at 2. It's bigger than both it's children (6 and no child), so it stays. look at 5. 1 is smaller than 5 and 4, so swap the 5 and the 1. Code : look at 4. 1 is smaller than 4 and 2, swap the 4 and the 1. Code : done building the heap. March 10th, 2010, 12:41 PM Re: Heaps There should already be elements in the first three spots. To build the heap bottom-up all you need to do is move elements around. This is done by comparing the parent with both children (if there are two) and moving the smallest of the 3 into the parent position. You can run iterate this breadth-depth wise to build the whole heap in O(n) time (no recursion required) by running this algorithm backwards from the end of the array. this looks like: Code : The whole bottom row don't need to be moved because there are no children. Look next at 2. It's bigger than both it's children (6 and no child), so it stays. look at 5. 1 is smaller than 5 and 4, so swap the 5 and the 1. Code : look at 4. 1 is smaller than 4 and 2, swap the 4 and the 1. Code : done building the heap. Not sure I follow your working what would you get for the following list: I'm told the correct answer would be: March 11th, 2010, 12:23 AM Re: Heaps here's the algorithm's pseudo-code: Code : 1. put everything as-is into your heap array. 2. Starting from the back, compare that element with it's two children's. Take the smallest of the three (a 'missing child' is not considered smaller). 3. swap the smallest with the element you're currently looking at. 4. move backwards through your array and repeat 2-4 until you've reached the front of the array. 5. done.
{"url":"http://www.javaprogrammingforums.com/%20algorithms-recursion/3669-heaps-printingthethread.html","timestamp":"2014-04-19T20:11:48Z","content_type":null,"content_length":"10909","record_id":"<urn:uuid:1f7dd734-5209-49a1-8417-19d3fd8d4540>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Proposition 6 If a straight line is bisected and a straight line is added to it in a straight line, then the rectangle contained by the whole with the added straight line and the added straight line together with the square on the half equals the square on the straight line made up of the half and the added straight line. Let a straight line AB be bisected at the point C, and let a straight line BD be added to it in a straight line. I say that the rectangle AD by DB together with the square on CB equals the square on CD. Describe the square CEFD on CD, and join DE. Draw BG through the point B parallel to either EC or DF, draw KM through the point H parallel to either AB or EF, and further draw AK through A parallel to either CL or DM. Then, since AC equals CB, AL also equals CH. But CH equals HF. Therefore AL also equals HF. Add CM to each. Therefore the whole AM equals the gnomon NOP. But AM is the rectangle AD by DB, for DM equals DB. Therefore the gnomon NOP also equals the rectangle AD by DB. Add LG, which equals the square on BC, to each. Therefore the rectangle AD by DB together with the square on CB equals the gnomon NOP plus LG. But the gnomon NOP and LG are the whole square CEFD, which is described on CD. Therefore the rectangle AD by DB together with the square on CB equals the square on CD. Therefore if a straight line is bisected and a straight line is added to it in a straight line, then the rectangle contained by the whole with the added straight line and the added straight line together with the square on the half equals the square on the straight line made up of the half and the added straight line. Explanation of the proof This proposition is remarkably similar to the last one, II.5, except the point D does not lie on the line AB but on that line extended. Let b denote the line AB, x denote AD, and y denote BD as in II.5. Then x – y = b (as opposed to x + y = b as in II.5). According to this proposition the rectangle AD by DB, which is the product xy, is the difference of two squares, the large one being the square on the line CD, that is the square of x – b/2, and the small one being the square on the line CB, that is, the square of b/2. x(x – b) = (x – b/2)^2 – (b/2)^2. This equation is easily verified with modern algebra, but it’s also easily verified in geometry, as done here in the proof. The geometric proof is primarily an exercise in cutting and pasting. The rectangle AB by DB is the rectangle AM, which is the sum of the rectangles AL and CM. But the rectangles AL, CH, and HF are all equal. Therefore, the rectangle AB by DB equals the gnomon formed by the rectangles CM and HF. That gnomon is the square CF minus the square LG, but the latter equals the square on BC. Thus, the rectangle AB by DB equals the square on DB minus the square on CB. Solution to a quadratic problem As was II.5, this proposition is set up to help in the solution of a quadratic problem: Find two numbers x and y so that their difference x – y is a known value b and their product is a known value c^2. In terms of x alone, this is equivalent to solving the quadratic equation x(x – b) = c^2. Since this proposition says that x(x – b) = (x – b/2)^2 – (b/2)^2, the problem reduces to solving the c^2 = (x – b/2)^2 – (b/2)^2, that is, finding CD so that CD^2 = (b/2)^2 + c^2. By I.47, if a right triangle is constructed with one side equal to b/2 and another equal to c, then the hypotenuse will equal the required value for CD. Algebraically, the solutions AD for x and BD for y have the values This analysis yields a construction to solve the quadratic problem stated above. To apply a rectangle equal to a given square to a given straight line but exceeding it by a square. Let AB be the given straight line. Bisect it at C. Construct a perpendicular BQ to AB at B equal to the side of the given square. Draw CQ. Extend AB to D so that BD equals CQ. Then, as described above, AB has been extended to D so that AD times BD equals the given square. This construction is not found in the Elements, but a generalization of it to parallelograms is proposition VI.29. Use of this proposition This proposition is used in II.11, III.36, and a lemma for X.29.
{"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookII/propII6.html","timestamp":"2014-04-17T21:39:21Z","content_type":null,"content_length":"10200","record_id":"<urn:uuid:c23e7456-30ed-4954-a98e-58497702c0b9>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
Trigonometry Practice Problems Trigonometry deals with the measurements of triangles both angles and sides. Many a real life situations are modeled using trigonometric functions. Trigonometric functions are transcendental in nature which are a study of importance in calculus. A good skill in solving trigonometric problems is therefore essential to acquire a high level of proficiency in Calculus. A practice test is given here. You can solve the test problems first on your own and verify with the answers given in the end. Hints to solve or important steps for each problem are also given. 1. Rewrite the each degree measure a radian measure as a multiple of π. (a) 30º (b) 315º (c) -20º (d) 720º 2. Rewrite each radian measure as a degree measure (a) $\frac{7\pi }{6}$ (b) $-\frac{11\pi }{30}$ (c)$\frac{3\pi }{2}$ (d) $\frac{2\pi }{9}$ 3. A sprinkler system can spray water over a distance of 40 meters and can rotate through an angle or 135º. Find the maximum area that can be irrigated by the sprinkler. 4. If sin θ = `3/8` where 0 ≤ θ ≤ $\frac{\pi }{2}$, then evaluate the exact values of the other five trigonometric functions. 5. If sin θ = $\frac{2\sqrt{13}}{13}$, find the value of sec (90 - θ) where θ is an acute angle. 6. Find the value of x in the adjoining 7. Bill was driving on a straight and flat road. He observed the peak of a mountain directly in front of him. He estimated the angle of elevation of the peak as 4.5 º. After driving another 15 miles ahead in the same road, he again estimated the angle of elevation of the peak to be 10 º. Find the approximate height of the mountain using trigonometric relationships. 8. Use the trigonometric identities to transform the left side of the equation to the right side. $\frac{sin \theta }{cos \theta }+\frac{cos\theta }{sin\theta }= csc\theta .sec\theta $ 9. The displacement from equilibrium of an oscillating weight suspended by a spring is given by y(t) = 2 cos 8t, where y(t) is the displacement in cm for time 't' seconds. Find the displacements when t = 0, `1/4` and `1/2`. Also find the period and frequency of the motion. 10. While running on a circular track, the angle of incline θ is the acute angle which the runner's body makes with the vertical. The relationship between θ, the velocity ' v ' of the runner and the radius ' r ' of the track is given by $tan\theta =\frac{v^{2}}{gr}$, where ' g ' is the acceleration due to gravity and is equal to 9.8 meters/second If the radius of the track is 16 meters, 1. Find the runner's velocity if the angle of incline is 13 º. 2. What is the angle of incline if the runner's velocity is read as 7 meters /second ? 11. An air plane flying horizontally 750 meters above the ground is first observed at an angle of elevation of 60º. If the angle is elevation is is observed to be 30º after 5 seconds, find the speed of the air plane rounded to a Km per hour. 12. A jet plane leaves city A and is heading toward city at a bearing of 120º. If the distance between the two cities are 2600 KM, 1. How far north and how far west is city A relative to city B. Round the answers to the nearest KM. 2. If the jet is to return directly from city B to city A, at what bearing should it travel? Hint: For air navigation a bearings are measured clockwise from north. Trigonometry Practice test - answers and hints 1. (a) $\frac{\pi }{6}$ (b) $\frac{7\pi }{4}$ (c) -$\frac{\pi }{9}$ (d) 4π Hint: Conversion factor for multiplication is $\frac{\pi }{180}$ and retain the same sign. 2. (a) 210º (b) -66º (c) 270º (d) 40º The conversion factor for multiplication is $\frac{180}{\pi }$ 3. 600 π m or ≈ 1884.96 m . (Use the formula for the area of the sector = `1/2`.r θ where θ is expressed in radians. 4. Cos θ = $\frac{\sqrt{55}}{8}$ tan θ = $\frac{3\sqrt{55}}{55}$ Sec θ = $\frac{8\sqrt{55}}{55}$ Csc θ = `8/3` Cot θ = $\frac{\sqrt{55}}{3}$ Hint make a sketch of a right triangle as shown below. Find the value of X using Pythagorean theorem. 5. $\frac{\sqrt{13}}{2}$ Use the cofunction identity to find the value of Cos (90 - θ). 6. 30√3. Use the properties of the special 30-60-90 right triangle. 7. 11,258.12 ft. Sketch the situation as shown and evaluate h eliminating x. 8. Simplify the left side by taking the LCD. 9. y(0) = 2, y(`1/4`) = -0.8323 y(`1/2) = -1.3073 Period of the motion = $\frac{\pi }{4}$ seconds Frequency of the motion = $\frac{4}{\pi }$ cm/second. Hint; Use the radian mode in the calculator. 10. (1) 6.017 meter/second (2) 17.35º Solve the equation for the required variable. 11. 624 Km/hour. The sketch for the situation is given below. Considering the two right triangles OCA and ODB, solve for 'd' which is the distance traveled by the air plane in 5 minutes. Use this value to determine the speed of the air plane. 1. A is at a distance of 1300 KM North and 2252 KM west of city B. 2. The bearing on the return journey is 240 º. The sketch for the situation is given with the bearing marked. Bearings are measured in clockwise for air navigation. The North and West distances are represented by the lengths BC and CA respectively.
{"url":"http://www.trigonometryhelp.org/trigonometry-practice-problems.html","timestamp":"2014-04-16T19:36:06Z","content_type":null,"content_length":"12645","record_id":"<urn:uuid:53ad3e4a-c4ae-41cc-ad24-9b512e19ee9f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics On the integrable variant of the Boussinesq system: Painlevé property, rational solutions, a related many-body system, and equivalence with the AKNS hierarchy. (English) Zbl 0694.35207 Summary: An infinite family of rational solutions of the integrable version of the Boussinesq system and the associated higher-order flow is constructed. The differential equations governing the motion of the poles are derived and their complete integrability demonstrated. This system embeds into two disjoint Calogero-Moser systems coupled only through a constraint. A novel treatment of this constraint is given. Using the equivalence of (*) to the real version of the nonlinear Schrödinger system, it is seen that rational solutions of the AKNS hierarchy have been constructed as well. Finally, it is shown that all solutions of these hierarchies, including the rational ones, also solve the first- modified KP equation. 35Q99 PDE of mathematical physics and other areas 37J35 Completely integrable systems, topological structure of phase space, integration methods 37K10 Completely integrable systems, integrability tests, bi-Hamiltonian structures, hierarchies
{"url":"http://zbmath.org/?q=an:0694.35207","timestamp":"2014-04-19T22:09:27Z","content_type":null,"content_length":"22102","record_id":"<urn:uuid:793ed53a-d30c-45f6-924c-aa4dbddc39d4>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
54: General topology [Search][Subject Index][MathMap][Tour][Help!] Topology is the study of sets on which one has a notion of "closeness" -- enough to decide which functions defined on it are continuous. Thus it is a kind of generalized geometry (we are still interested in spheres and cubes, for example, but we might consider them to be "the same", yet distinct from a bicycle tire, which has a "hole") or a kind of generalized analysis (we might think of the functions f(x)=x^2 and f(x)=|x| as being "the same", and yet distinct from f(x)=signum(x)=x/|x|, which has a discontinuity). More formally, a topological space is a set X on which we have a topology -- a collection of subsets of X which we call the "open" subsets of X. The only requirements are that both X itself and the empty subset must be among the open sets, that all unions of open sets are open, and that the intersection of two open sets be open. This definition is arranged to meet the intent of the opening paragraph. However, stated in this generality, topological spaces can be quite bizarre; for example, in most other disciplines of mathematics, the only topologies on finite sets are the discrete topologies (all subsets are open), but the definition permits many others. Thus a general theme in topology is to test the extent to which the axioms force the kind of structure one expects to use and then, as appropriate, introduce other axioms so as to better match the intended application. For example, a single point need not be a closed set in a topology. Does this seem "inappropriate"? Then perhaps you are envisioning a special kind of topological space, say a a metric space. This alone still need not imply the space looks enough like the shapes you may have seen in a textbook; if you really prefer to understand those shapes, you need to add the axioms of a manifold, perhaps. Many such levels of generality are possible. Since the axioms of topology are stated in terms of subsets of X, it should be no surprise that one branch of topology is closely related to set theory, particularly "descriptive set theory". Here one considers general constructs such as closures of sets, limits, convergence, and nets. One can look at topologies related to order or cardinality, and so create extraordinarily large topological spaces. By using the axiom of choice, one may prove the existence of topological spaces with peculiar properties. In particular, there are questions about topology which can be reduced to questions of set theory, whose answer then depends on the axioms of set theory chosen. As in other branches of axiomatic mathematics, we may (in category-theoretic style) make some basic constructs. The most important functions between topological spaces are the continuous ones (a definition borrowed from analysis), which we use to define homeomorphisms -- functions which can be used to demonstrate that two spaces are "the same". We can define products of spaces (and coproducts -- unions), subspaces, quotient spaces, and so on. On the one hand, each is a "universal" solution to some problem which can be stated in terms of the existence of maps and spaces related via commutative diagrams; this aspect makes them useful tools for algebraic topology. On the other hand, each can be studied (and generalized) internally, making them useful tools for analysis (including semicontinuous functions for example). In order to get more significant results, one must restrict to spaces with some additional properties. The precise set of additional axioms depends on the intended results. For example, if we would like to know which spaces might have a topology which is consistent with a metric, we know individual points must be closed; the axiom that this is true (the "T_1" axiom) is but one of a number of separation axioms. A great deal of work has been done to see the independence of these axioms, the extent to which they are preserved under the constructions of the previous paragraph, and so on. The same is true of other types of axioms designed to focus attention on "well-behaved" spaces. For example, there are cardinality axioms (e.g., metric spaces have the additional property that the topology at a point is countable), compactness axioms (e.g. a space would have to be locally compact to be a manifold), connectedness axioms, and so on. In each case, there are a number of choices for how tight the axioms should be: is one interested in weak conclusions about a large family of spaces, or stronger conclusions about a family of more particular interest? Well-known results concerning these properties include a version of the Baire Category Theorem (nowhere dense subsets), Tychonoff's theorem (products of compact spaces), Urysohn's Lemma and Tietze's Theorem (functions on well-separated spaces), and compactness criteria (Bolzano-Weierstrass, Heine-Borel). A number of families of spaces are defined by the presence of some extra structure which is related to the topology in a natural way. In each case, one can for example ask, given a topological space, whether the extra can be imposed on that space. We have already mentioned metric spaces: spaces on which there is a distance function; the latter question is then the question of whether a space is metrizable. Other categories include measure spaces (spaces with a given real-valued measure on families of subsets), manifolds (spaces with a given collection of coordinate charts), simplicial complexes (a generalization of polyhedra), CW-complexes (spaces with a given decomposition into subsets homeomorphic to balls of various dimensions), ordered topological spaces, topological groups or vector spaces, and so on. The distinction between this and the previous paragraph is that additional axioms are assumed about a new construct provided at the outset, rather than additional axioms about the topology; thus the questions asked about these structures can be about either the topology or about the new construct. (The discussion of these additional properties gives us subdisciplines Metric Topology, Combinatorial Topology, and so on.) Moving toward applications, we can ask about topological spaces which arise in some fairly common way. One significant family of examples is sets S of functions between topological spaces X and Y. Depending on the properties or additional structures possessed by X and Y, S may be given one or more topologies, and in some cases itself possesses an additional structure. Many questions of functional analysis, for example, are most conveniently expressed in terms of the topology or metric on families of functions on the real line (Is a periodic function "equal" to its Fourier series? That's asking whether the original function lies in the closure of the sequence of partial sums, which depends on the topology.) Thus we see in topology some treatments of the Arzelà-Ascoli theorem and the Stone-Weierstrass theorem. Other families of examples include the Euclidean spaces themselves and various subspaces (curves and surfaces, spheres, and so on). Here we may use the combined structures of the Lebesgue measure, the Euclidean metric, and the natural coordinate charts to ask and answer questions of a generally topological nature. Well-known results on Euclidean spaces include the Peano curves covering the square (Hahn-Mazurkiewicz theorem), the Jordan curve theorem, and the Banach-Tarski paradoxical volume-altering decompositions; it's nontrivial even to that R^n and R^m are not homeomorphic unless n= m. Results on spheres include Borsuk-Ulam separation results, Lyusternik-Shnirel´man covering results, and the whole of degree theory (which can count the Fundamental Theorem of Algebra among its consequences!). Much of the topological nature of these spaces is developed in the branches of topology known as Algebraic Topology and Differential Topology. Other important applications of topology include a number of fixed-point theorems and the topological nature of dynamical systems. The Brouwer fixed-point theorem is perhaps the best-known example, but arguably everything from Newton's method to fractals is a study of the stability and convergence of iterates of functions from a topological space to itself. Clearly many of the most-applicable research in this area is limited to metric spaces or manifolds, but some of the topics may be approached in a fairly general way. See the article on Topology at St Andrews. This section is limited to what is sometimes known as "point-set topology". Some important topological results can be proved with substantial algebraic machinery; particularly strong topological results can be expected of spaces which have additional structure, especially manifolds. See 55: Algebraic Topology for the definitions, and computations, and applications of fundamental groups, homotopy groups, homology and cohomology. This includes topics in homotopy theory -- studies of spaces in the homotopy category -- whether or not they involve algebraic invariants. The study of 57: Manifolds and Cell Complexes is roughly the part of the topological category most amenable to nice pictures. This includes differential topology -- what happens when we add differential or other structures? -- actions of groups on spaces, knot theory, low-dimensional topology, and so on. Simplicial complexes are essentially polyhedra. Problems specific to Euclidean space may be treated in the geometry pages, particularly if the questions themselves center heavily on the metric. Aspects of real analysis (differentiation and integration) treated topologically are the purview of global analysis. This includes both the topological study of spaces of maps (e.g. the space of embeddings of one manifold into another) and the study of differential equations and so on with manifolds other than Euclidean space as the domain. In particular, this includes the more detailed studies of dynamical systems, although the related area of fractals is most likely in Measure Theory. The study of topological vector spaces is treated in detail in functional analysis and related fields such as operator theory. There is also a separate classification for topological groups. In most other cases, "topological xxx theory" is treated as a subset of "xxx theory" Use of the word "category" (e.g. in the Lyusternik-Shnirel´man theorem) is unrelated to its use in category theory Until 1958 an additional classification 56.0X was used for general topology. Browse all (old) classifications for this area at the AMS. There are a number of good textbooks, with varying perspectives on topology. • A good, commonly-used, general text is Munkres, James R., "Topology: a first course", Prentice-Hall, Inc., Englewood Cliffs, N.J., 1975. 413 pp. • Another text aging gracefully: Kelley, John L., "General Topology", Springer-Verlag, New York-Berlin, 1975. 298 pp. • Engelking, Ryszard; Sieklucki, Karol, "Introduction to topology", Heldermann Verlag, Berlin, 1992. 429 pp. ISBN 3-88538-004-8 MR94d:54001 -- a nice undergraduate text covering many of the branches of topology. • Edgar, Gerald A., "Measure, topology, and fractal geometry", Springer-Verlag, New York, 1990. 230 pp. ISBN 0-387-97272-2 -- focuses on metric spaces, accessible to undergraduates. • Jänich, Klaus, "Topology", Springer-Verlag, Berlin, 1994. 239 pp. ISBN 3-540-57471-9[German] and 0-387-90892-7 [English] -- a pleasant introduction; almost chatty. • Two welcome Dover reprints are the books by Mendelson, Bert, "Introduction to topology" Dover Publications, Inc., New York, 1990. 206 pp. ISBN 0-486-66352-3, and • Kahn, Donald W., "Topology, An introduction to the point-set and algebraic areas", Dover Publications, Inc., New York, 1995. 217 pp. ISBN 0-486-68609-4 • Good texts discussing topology from the perspective an an analyst include Dixmier, Jacques, "General topology", Springer-Verlag, New York-Berlin, 1984. 140 pp. ISBN 0-387-90972-9 (MR85f:54001) • Wilansky, Albert, "Topology for analysis", Robert E. Krieger Publishing Co., Inc., Melbourne, Fla., 1983. 383 pp. ISBN 0-89874-343-5 (MR85d:54001) • (Older yet but still used: Willard, Stephen, "General topology", Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont. 1970, 369 pp. MR 41 #9173) An online textbook provides a nice development of the basic theory. [Aisling McCluskey and Brian McMaster] The definitive axiomatic development is of course by Bourbaki, Nicolas, "General topology" (2 volumes), Springer-Verlag, Berlin-New York, 1989. 363+437 pp. ISBN 3-540-19372-3 and 3-540-19374-X The foundational aspects of the subject are thoroughly reviewed in "Handbook of set-theoretic topology", edited by Kenneth Kunen and Jerry E. Vaughan. North-Holland Publishing Co., Amsterdam-New York, 1984. 1273 pp. ISBN 0-444-86580-2, 85k:54001 There is an excellent, if somewhat dated, collection of "Reviews in Topology" by Norman Steenrod, a sorted collection of the relevant reviews from Math Reviews (1940-1967). Many now-classical results date from that period. Most of the reviews in that collection are of algebraic and differential topology, however. Some survey articles: • Kline, J. R.: "What is the Jordan curve theorem?", Amer. Math. Monthly 49, (1942). 281--286. MR3,318d • Borsuk, Karol; Dydak, Jerzy: "What is the theory of shape?", Bull. Austral. Math. Soc. 22 (1980), no. 2, 161--198. MR82d:54015 • Borsuk, K.: "What is topology?" (Polish) Wiadom. Mat. (2) 1, (1955). 65--74. MR16,1041f Got a question in this area? Ask a Topologist (bulletin board). It is difficult to imagine software or tables for topology distinct from those for algebraic topology (e.g. tables of homotopy groups) or what is essentially (numerical) analysis (e.g. implementations of fixed-point routines)! Perhaps the most appropriate candidate would be tables outlining the implications and interplay among the various axioms on topologies; this is essentially the intent of • Steen, Lynn Arthur; Seebach, J. Arthur: "Counterexamples in topology", Dover Publications, Inc., Mineola, NY, 1995. ISBN 0-486-68735-X, MR 96k:54001 You can reach this page through http://www.math-atlas.org/welcome.html Last modified 2000/01/14 by Dave Rusin. Mail:
{"url":"http://www.math.niu.edu/~rusin/known-math/index/54-XX.html","timestamp":"2014-04-18T02:58:56Z","content_type":null,"content_length":"24898","record_id":"<urn:uuid:094327f9-63c6-4012-9e0a-caa0e719e039>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
Prenex normal form vs. quantifier rank up vote 2 down vote favorite Consider first-order logic with some fixed, relational vocabulary $\tau$. A sentence is a formula in this logic with no free variables. A sentence is in prenex normal form, if all quantifiers are moved to the front. For example $\exists x\exists y(P(x)\to P(y))$ is in prenex normal form whereas $\exists x( P(x)\lor \exists y(P(x)\to P(y)))$ is not. Let quantifier rank of a sentence be the maximum number of nested quantifications in it. Now, we know that quantifier rank is a measure of complexity of a first-order formula in the sense that there are only a finite number of sentences of a fixed quantifier rank up to logical equivalence (lets consider only finite models). If we have a sentence of the form $\varphi \equiv \exists x(\exists y \alpha(x,y)\lor\exists y \beta(x,y))$, where $\alpha$ and $\beta$ are formulas with two free variables $x$ and $y$, it is straightforward to transform it to a prenex form by simply renaming the occurences of $y$s and shifting quantifiers in front, i.e. $\exists x\exists y_1\exists y_2(\alpha(x,y_1)\lor \beta(x,y_2))$. But now the originating formula had quantifier rank of $2$ when the prenex form formula has quantifier rank $3$. How would one transform a formula to prenex normal form in a way that would keep the quantifier rank untouched? add comment 2 Answers active oldest votes If you begin with $\psi \equiv (\exists x)R(x) \land (\exists x)(\lnot R(x)) \land (\forall y)S(y)$, the usual prenex form will have depth 3, while the original formula has depth 1. So you are asking how to find an equivalent formula to $\psi$ that only has one quantifier. Let's assume our language only has two unary relation symbols $R$ and $S$. Then the formula $\psi$ above cannot be equivalent to any one quantifier formula $\phi$. First assume $\phi$ is existential, and true in some model. Then we can make a new model $M$ by adding one more element for which $S$ does not hold. Then $\phi$ is still true in $M$ (it will be preserved up vote 7 down under extensions) but $\psi$ is not true in $M$ because $(\forall y)S(y)$ is not true. vote accepted Suppose $\phi$ is universal. Take any model with more than 2 elements in which $\psi$ holds, and call one of the elements $c$. Take the submodel $N$ containing just $c$. Then $\phi$ is still true in $N$, because it is preserved under taking substructures, but $\psi$ is not true because $\psi$ requires at least two elements in the domain. Yep, this is it. Thanks for clarifying this up - I somehow took it for granted that such form exists. Silly me... – user10891 May 3 '11 at 13:02 add comment Although Carl has answered the question, I'd like to use it as an excuse to preach about one of my pet peeves, namely the idea that, when defining the Skolem form (or its dual, the Herbrand form) of a sentence, one should first put the sentence into prenex form and then perform the appropriate replacement of certain quantified variables by new function symbols. This is fine for most purely theoretical purposes, but if one has any computation in mind then it's much better to skip the initial prenex step and introduce the Skolem (or Herbrand) functions directly. The reason it's better is precisely the subject of Frank's question and Carl's answer: The prenex step can convert two originally separate (unnested) quantifiers into a nested pair, which can up vote mean that Skolem (or Herbrand) function symbols have more argument places than they should. Those extra arguments can considerably complicate the task of finding suitable unifications, a task 5 down that lies at the heart of many of the computational uses of Skolem and Herbrand forms. Actually, the extra arguments can also cause trouble in a few purely theoretical situations, if one needs precise quantitative information. If I remember correctly, this sort of trouble is the root of the errors in Herbrand's original proof of his theorem (see "False Lemmas in Herbrand," Bull. A.M.S. 69 (1963) 699-706). add comment
{"url":"https://mathoverflow.net/questions/63788/prenex-normal-form-vs-quantifier-rank/63794","timestamp":"2014-04-17T19:00:07Z","content_type":null,"content_length":"53904","record_id":"<urn:uuid:6c6d2a39-4ae1-4029-9ff7-47398b2e50ad>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
Convert ton to grams - Conversion of Measurement Units ›› Convert ton-force [metric] to gram-force Did you mean to convert ton-force [long] to gram-force ton-force [metric] gram ton-force [short] ton [short, US] long ton ton [metric] ton [long, UK] ›› More information from the unit converter How many ton in 1 grams? The answer is 1.0E-6. We assume you are converting between ton-force [metric] and gram-force. You can view more details on each measurement unit: ton or grams The SI derived unit for force is the newton. 1 newton is equal to 0.000101971621298 ton, or 101.971621298 grams. Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between tons-force and grams-force. Type in your own numbers in the form to convert the units! ›› Metric conversions and more ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more! This page was loaded in 0.0031 seconds.
{"url":"http://www.convertunits.com/from/ton/to/grams","timestamp":"2014-04-17T01:13:13Z","content_type":null,"content_length":"20562","record_id":"<urn:uuid:360ebe18-ed26-40ac-ab2b-3b12d6d651c9>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
A Perspective on Multiaccess Channels Results 1 - 10 of 130 - IEEE TRANSACTIONS ON INFORMATION THEORY , 2000 "... When n identical randomly located nodes, each capable of transmitting at bits per second and using a fixed range, form a wireless network, the throughput @ A obtainable by each node for a randomly chosen destination is 2 bits per second under a noninterference protocol. If the nodes are optimally p ..." Cited by 2220 (32 self) Add to MetaCart When n identical randomly located nodes, each capable of transmitting at bits per second and using a fixed range, form a wireless network, the throughput @ A obtainable by each node for a randomly chosen destination is 2 bits per second under a noninterference protocol. If the nodes are optimally placed in a disk of unit area, traffic patterns are optimally assigned, and each transmission’s range is optimally chosen, the bit–distance product that can be transported by the network per second is 2 @ A bit-meters per second. Thus even under optimal circumstances, the throughput is only 2 bits per second for each node for a destination nonvanishingly far away. Similar results also hold under an alternate physical model where a required signal-to-interference ratio is specified for successful receptions. Fundamentally, it is the need for every node all over the domain to share whatever portion of the channel it is utilizing with nodes in its local neighborhood that is the reason for the constriction in capacity. Splitting the channel into several subchannels does not change any of the results. Some implications may be worth considering by designers. Since the throughput furnished to each user diminishes to zero as the number of users is increased, perhaps networks connecting smaller numbers of users, or featuring connections mostly with nearby neighbors, may be more likely to be find acceptance. - IEEE/ACM Transactions on Networking , 2000 "... Abstract—In wireless LANs (WLANs), the medium access control (MAC) protocol is the main element that determines the efficiency in sharing the limited communication bandwidth of the wireless channel. In this paper we focus on the efficiency of the IEEE 802.11 standard for WLANs. Specifically, we anal ..." Cited by 290 (10 self) Add to MetaCart Abstract—In wireless LANs (WLANs), the medium access control (MAC) protocol is the main element that determines the efficiency in sharing the limited communication bandwidth of the wireless channel. In this paper we focus on the efficiency of the IEEE 802.11 standard for WLANs. Specifically, we analytically derive the average size of the contention window that maximizes the throughput, hereafter theoretical throughput limit, and we show that: 1) depending on the network configuration, the standard can operate very far from the theoretical throughput limit; and 2) an appropriate tuning of the backoff algorithm can drive the IEEE 802.11 protocol close to the theoretical throughput limit. Hence we propose a distributed algorithm that enables each station to tune its backoff algorithm at run-time. The performances of the IEEE 802.11 protocol, enhanced with our algorithm, are extensively investigated by simulation. Specifically, we investigate the sensitiveness of our algorithm to some network configuration parameters (number of active stations, presence of hidden terminals). Our results indicate that the capacity of the enhanced protocol is very close to the theoretical upper bound in all the configurations analyzed. Index Terms—Multiple access protocol (MAC), performance analysis, protocol capacity, wireless LAN (WLAN). I. - IEEE Transactions on Information Theory , 2001 "... This paper characterizes the capacity region of a Gaussian multiple access channel with vector inputs and a vector output with or without intersymbol interference. The problem of finding the optimal input distribution is shown to be a convex programming problem, and an efficient numerical algorithm ..." Cited by 190 (11 self) Add to MetaCart This paper characterizes the capacity region of a Gaussian multiple access channel with vector inputs and a vector output with or without intersymbol interference. The problem of finding the optimal input distribution is shown to be a convex programming problem, and an efficient numerical algorithm is developed to evaluate the optimal transmit spectrum under the maximum sum data rate criterion. The numerical algorithm has an it#8 at#8 e wat#8-filling int#j pret#4968 . It converges from any starting point and it reaches with in s per output dimension per transmission from the K-user multiple access sum capacity af t#j just one it#4 at#49 . These results are also applicable to vector multiple access fading channels. - IEEE Transactions on Information Theory , 2002 "... We consider a user communicating over a fading channel with perfect channel state information. Data is assumed to arrive from some higher layer application and is stored in a buffer until it is transmitted. We study adapting the user's transmission rate and power based on the channel state informati ..." Cited by 170 (7 self) Add to MetaCart We consider a user communicating over a fading channel with perfect channel state information. Data is assumed to arrive from some higher layer application and is stored in a buffer until it is transmitted. We study adapting the user's transmission rate and power based on the channel state information as well as the buffer occupancy; the objectives are to regulate both the long-term average transmission power and the average buffer delay incurred by the traffic. Two models for this situation are discussed; one corresponding to fixed-length/variable-rate codewords and one corresponding to variable-length codewords. The trade-off between the average delay and the average transmission power required for reliable communication is analyzed. A dynamic programming formulation is given to find all Pareto optimal power/delay operating points. We then quantify the behavior of this tradeoff in the regime of asymptotically large delay. In this regime we characterize simple buffer control policies which exhibit optimal characteristics. Connections to the delay-limited capacity and the expected capacity of fading channels are also discussed. , 1999 "... This paper deals with 2 ` --ary transmission using multilevel coding (MLC) and multistage decoding (MSD). The known result that MLC and MSD suffice to approach capacity if the rates at each level are appropriately chosen is reviewed. Using multiuser information theory, it is shown that there is a ..." Cited by 128 (24 self) Add to MetaCart This paper deals with 2 ` --ary transmission using multilevel coding (MLC) and multistage decoding (MSD). The known result that MLC and MSD suffice to approach capacity if the rates at each level are appropriately chosen is reviewed. Using multiuser information theory, it is shown that there is a large space of rate combinations such that MLC and full maximum--likelihood decoding (MLD) can approach capacity. It is noted that multilevel codes designed according to the traditional balanced distance rule tend to fall in the latter category and therefore require the huge complexity of MLD. The capacity rule, the balanced distances rules, and two other rules based on the random coding exponent and cut--off rate are compared and contrasted for practical design. Simulation results using multilevel binary turbo codes show that capacity can in fact be closely approached at high bandwidth efficiencies. Moreover, topics relevant in practical applications such as signal set labeling, - IEEE Trans. Inform. Theory , 2004 "... In a point-to-point wireless fading channel, multiple transmit and receive antennas can be used to improve the reliability of reception (diversity gain) or increase the rate of communication for a fixed reliability level (multiplexing gain). In a multiple access situation, multiple receive antennas ..." Cited by 124 (4 self) Add to MetaCart In a point-to-point wireless fading channel, multiple transmit and receive antennas can be used to improve the reliability of reception (diversity gain) or increase the rate of communication for a fixed reliability level (multiplexing gain). In a multiple access situation, multiple receive antennas can also be used to spatially separate signals from different users (multiple access gain). Recent work has characterized the fundamental tradeoff between diversity and multiplexing gains in the point-to-point scenario. In this paper, we extend the results to a multiple access fading channel. Our results characterize the fundamental tradeoff between the three types of gain and provide insights on the capabilities of multiple antennas in a network context. 1 - SIAM Journal on Computing , 1998 "... Abstract. We show that for any randomized broadcast protocol for radio networks, there exists a network in which the expected time to broadcast a message is Ω(D log(N/D)), where D is the diameter of the network and N is the number of nodes. This implies a tight lower bound of Ω(D log N) for any D ≤ ..." Cited by 112 (4 self) Add to MetaCart Abstract. We show that for any randomized broadcast protocol for radio networks, there exists a network in which the expected time to broadcast a message is Ω(D log(N/D)), where D is the diameter of the network and N is the number of nodes. This implies a tight lower bound of Ω(D log N) for any D ≤ N 1−ε, where ε>0 is any constant. - In Proc. of FOCS , 2003 "... In this paper we present new randomized and deterministic algorithms for the classical problem of broadcasting in radio networks with unknown topology. We consider directed n-node radio networks with specified eccentricity D (maximum distance from the source node to any other node). In a seminal wor ..." Cited by 102 (1 self) Add to MetaCart In this paper we present new randomized and deterministic algorithms for the classical problem of broadcasting in radio networks with unknown topology. We consider directed n-node radio networks with specified eccentricity D (maximum distance from the source node to any other node). In a seminal work on randomized broadcasting, Bar-Yehuda et al. presented an algorithm that for any n-node radio network with eccentricity D completes the broadcasting in O(D log n + log 2 n) time, with high probability. This result is almost optimal, since as it has been shown by Kushilevitz and Mansour and Alon et al., every randomized algorithm requires Ω(D log(n/D)+log 2 n) expected time to complete broadcasting. Our first main result closes the gap between the lower , 2000 "... in WLANs, the medium access control (MAC) protocol is the main element that determines the efficiency of sharing the limited communication bandwidth of the wireless channel. The fraction of channel bandwidth used by successfully transmitted messages gives a good indication of the protocol efficiency ..." Cited by 81 (2 self) Add to MetaCart in WLANs, the medium access control (MAC) protocol is the main element that determines the efficiency of sharing the limited communication bandwidth of the wireless channel. The fraction of channel bandwidth used by successfully transmitted messages gives a good indication of the protocol efficiency, and its maximum value is referred to as protocol capacity. In a previous paper we have derived the theoretical limit of the IEEE 802.11 MAC protocol capacity. In addition, we showed that if a station has an exact knowledge of the network status, it is possible to tune its backoff algorithm to achieve a protocol capacity very close to its theoretical bound. Unfortunately, in a real case, a station does not have an exact knowledge of the network and load configurations (i.e., number of active stations and length of the message transmitted on the channel) but it can only estimate it. In this work we analytically study the performance of the IEEE 802.11 protocol with a dynamically tuned backoff based on the estimation of the network status. Results obtained indicate that under stationary traffic and network configurations (i.e., constant average message length and fixed number of active stations), the capacity of the enhanced protocol approaches the theoretical limits in all the configurations analyzed. In addition, by exploiting the analytical model, we investigate the protocol performance in transient conditions (i.e., when the number of active stations sharply changes).
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=241805","timestamp":"2014-04-16T16:21:18Z","content_type":null,"content_length":"39402","record_id":"<urn:uuid:c90b06ca-a2fc-4b6e-aa70-f0862086f65f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
Science Forums Biology Forum Molecular Biology Forum Physics Chemistry Forum - Solution to Einstein's Field Equations where T^uv not= 0? Many of the widely-studied solutions to Einstein's field equations are taken in vacuo, that is, at events where the energy momentum tensor T^uv=0. This includes Schwarzchild and Kerr geometries, for example. Have there been many exact solutions found where T^uv not= 0? I am speaking of analytical solutions where the differential equations are solved exactly, *not* numerical approximations. I am especially interested in any exact solutions based on the usual Maxwell energy tensor of electrodynamics T^u_v = (1/4pi) [F^ut F_vt - (1/4) lambda^u_v F^st F_st]. I am interested in solutions both where F^uv_u=0 (free space) and also where F^uv_u=J^v (space with current sources). Conditions of interest include static spherical symmetry in the nature of Schwarzchild, and rotation with spherical symmetry about the z-axis in the nature of Kerr. To be clear, I am *not* looking for solutions where the metric is assumed to be a Minkowski metric. Lots of analyses assume a flat-space background for Rather, I am looking for *exact* solutions, to the extent that such solutions are known, which derive a curved spacetime metric from the electromagnetic field strength tensor, that is, which derive g_uv = g_uv(F^uv) via the Maxwell tensor T^u_v, whereby T^u_v(g_uv, F_uv) simply becomes T^u_v(F_uv) once the g_uv(F^uv) are found. Jay R. Yablon [Only registered and activated users can see links. Click Here To Register...]
{"url":"http://www.molecularstation.com/forum/physics-forum/38097-solution-einsteins-field-equations-where-t%5Euv-not%3D-0-a-print.html","timestamp":"2014-04-16T13:45:37Z","content_type":null,"content_length":"15546","record_id":"<urn:uuid:7438cd0e-fd2d-4cf8-994a-fb62b807b912>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
Pine Lake SAT Math Tutor Find a Pine Lake SAT Math Tutor ...I have tutored people in most subjects from SAT (math and verbal prep) to children just learning their alphabets and spatial recognition. I am also very knowledgeable on the subject of the human body, from years of studying the MCAT. I am a tutor who takes teaching seriously. 29 Subjects: including SAT math, chemistry, reading, physics ...As an undergraduate and graduate student in genetics, this subject is one that I know inside and out. I can tutor basic Mendelian genetics, Complex patterns of inheritance, Molecular biology/ genetics, and eukaryotic and prokaryotic genetics. I have also tutored genetics to undergraduate students. 15 Subjects: including SAT math, chemistry, geometry, biology ...I am a certified teacher (Georgia, Secondary English grades 6-12), and as such, I have had coursework in the field of teaching students with special needs. I have taught students with the following learning exceptionalities: 1. EBD 2. 22 Subjects: including SAT math, reading, English, GED ...As a Special Education teacher, I have 7 years of experience helping students study, take notes, complete assignments, and get the most out of their school work. I taught Study Skills for 7 years in the classroom teaching students how to organize their notebooks, study for tests, and use graphic organizers. I am certified Special Needs Educator K-12. 21 Subjects: including SAT math, calculus, elementary (k-6th), ACT Math ...I've also written numerous academic journal articles, book chapters and conference papers. I've run and helped run several qualitative research projects and understand the research process very well. I am also an editor, having edited numerous Ph.D. and MA thesis for friends and colleagues. 37 Subjects: including SAT math, English, reading, writing
{"url":"http://www.purplemath.com/Pine_Lake_SAT_Math_tutors.php","timestamp":"2014-04-17T01:00:30Z","content_type":null,"content_length":"23861","record_id":"<urn:uuid:48a2ba23-4b1d-4ef5-8190-1567ca35f8ee>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Existential type From HaskellWiki (Difference between revisions) (Existential types in Essential Haskell. Comparisons to Haskell.) m (typographic correction) ← Older edit Newer edit → Line 63: Line 63: See the [http://www.cs.uu.nl/wiki/Ehc/#On_EHC documentation on EHC], each paper at the ''Version See the [http://www.cs.uu.nl/wiki/Ehc/#On_EHC documentation on EHC], each paper at the ''Version 4'' part: 4'' part: * Chapter 8 (EH4) of Atze Dijksta's [http://www.cs.uu.nl/groups/ST/Projects/ehc/ehc-book.pdf * Chapter 8 (EH4) of Atze Dijksta's [http://www.cs.uu.nl/groups/ST/Projects/ehc/ehc-book.pdf Essentai Haskell PhD thesis] (most recent version). A detailed explanation. It explains also that Essential Haskell PhD thesis] (most recent version). A detailed explanation. It explains also existential types can be expressed in Haskell, but their use is restricted to data declarations, that existential types can be expressed in Haskell, but their use is restricted to data − and the notation (using keyword <hask>forall</hask>) may be confusing. In Essential Haskell, + declarations, and the notation (using keyword <hask>forall</hask>) may be confusing. In Essential existential types can occur not only in data declarations, and a separate keyword <hask>exists</ Haskell, existential types can occur not only in data declarations, and a separate keyword <hask> hask> is used for their notation. exists</hask> is used for their notation. * [http://www.cs.uu.nl/wiki/pub/Ehc/WebHome/20050107-eh-intro.pdf Essential Haskell Compiler * [http://www.cs.uu.nl/wiki/pub/Ehc/WebHome/20050107-eh-intro.pdf Essential Haskell Compiler overview] overview] * [http://www.cs.uu.nl/wiki/Ehc/Examples#EH_4_forall_and_exists_everywher Examples] * [http://www.cs.uu.nl/wiki/Ehc/Examples#EH_4_forall_and_exists_everywher Examples] Revision as of 17:49, 2 May 2006 1 Dynamic dispatch mechanism of OOP Existential types in conjunction with type classes can be used to emulate the dynamic dispatch mechanism of object oriented programming languages. To illustrate this concept I show how a classic example from object oriented programming can be encoded in Haskell. class Shape_ a where perimeter :: a -> Double area :: a -> Double data Shape = forall a. Shape_ a => Shape a type Radius = Double type Side = Double data Circle = Circle Radius data Rectangle = Rectangle Side Side data Square = Square Side instance Shape_ Circle where perimeter (Circle r) = 2 * pi * r area (Circle r) = pi * r * r instance Shape_ Rectangle where perimeter (Rectangle x y) = 2*(x + y) area (Rectangle x y) = x * y instance Shape_ Square where perimeter (Square s) = 4*s area (Square s) = s*s instance Shape_ Shape where perimeter (Shape shape) = perimeter shape area (Shape shape) = area shape -- Smart constructor circle :: Radius -> Shape circle r = Shape (Circle r) rectangle :: Side -> Side -> Shape rectangle x y = Shape (Rectangle x y) square :: Side -> Shape square s = Shape (Square s) shapes :: [Shape] shapes = [circle 2.4, rectangle 3.1 4.4, square 2.1] See also the concept of Smart constructor. The type of the parser for this GADT is a good example to illustrate the concept of existential type. See the documentation on EHC, each paper at the Version 4 part: • Chapter 8 (EH4) of Atze Dijksta's Essential Haskell PhD thesis (most recent version). A detailed explanation. It explains also that existential types can be expressed in Haskell, but their use is restricted to data declarations, and the notation (using keyword ) may be confusing. In Essential Haskell, existential types can occur not only in data declarations, and a separate keyword is used for their notation.
{"url":"http://www.haskell.org/haskellwiki/index.php?title=Existential_type&diff=3957&oldid=3956","timestamp":"2014-04-16T10:49:40Z","content_type":null,"content_length":"26717","record_id":"<urn:uuid:ed7a03cd-20a2-4952-9106-f31aaa4872ff>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving a trigonometric equation Solve on the domain 0° ≦ x ≦ 360°. Use a calculator and round to the nearest degree if unable to use special triangles. cos x-1 = -cos x Can anyone give me some pointers on this one? I'm taking grade 12 math through an independent learning course, where I have no access to a teacher, and this question has me completely stuck. I've solved other trig equations by isolating x and using a special triangle, a calculator, or simply looking at the graph sometimes. However, the right side of the equation has always been a real number in the past, and I'm not sure what to do with this -cos x.
{"url":"http://mathhelpforum.com/trigonometry/208965-solving-trigonometric-equation.html","timestamp":"2014-04-19T09:32:31Z","content_type":null,"content_length":"52046","record_id":"<urn:uuid:bbe485fb-1072-4b4f-ae66-4a2e828f7274>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Building statistical models by visualization Thomas P. Minka Practical applications of statistical methods usually involve assumptions about the domain, such as independence, normality, linearity, and the choice of variables. I will describe a set of graphical tools which simplify the process of learning about a domain and checking model assumptions. Some of these tools have been around for years, but vastly underused by statistical learning researchers. Using real-world examples, I will illustrate how visualization can be used for model selection, identifying exceptional cases, and interpreting the results of learning algorithms. Talk slides, pdf (221K) Also see my page on Discriminative projections and the class materials on my main page. My visualization software is available: mining.zip (R 1.7.1) mining_1.1.zip (R 1.9.1) In R for Windows, select menu option Packages -> Install Package from local zip file... Then type These packages only work with the version of R listed. Under Windows Vista, R has trouble unzipping packages. In this case, you need to manually unzip the package into the R/library directory. Last modified: Wed Apr 26 12:39:31 GMT 2006
{"url":"http://research.microsoft.com/en-us/um/people/minka/papers/viz.html","timestamp":"2014-04-21T01:23:47Z","content_type":null,"content_length":"2913","record_id":"<urn:uuid:11edaedf-411e-4e26-b956-98ffd601788f>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Newtonville, MA Algebra Tutor Find a Newtonville, MA Algebra Tutor ...Theses courses have focused on descriptive and inferential statistics (e.g., t-tests, ANOVA, regression, Chi-square). Throughout my career over the last 20 years, I have conducted various statistical analyses as well as provided written descriptions of the results. As a former middle/high school... 11 Subjects: including algebra 1, statistics, GRE, GED Science for me is fun and exciting. It is fascinating to find out why things happen the way they happen. Similarly, I experience the same enjoyment in math when I understand the way numbers behave and then apply that understanding to real life situations. 12 Subjects: including algebra 1, algebra 2, physics, chemistry ...I am majoring in psychology (BA) and minoring in Hispanic studies, but as a student at a liberal arts college, I am continuing to explore different interests as much as my schedule allows me to in academic fields such as linguistics, French, math, history, and English. I have a true passion for ... 30 Subjects: including algebra 1, Spanish, reading, English ...I have helped students improve their math skills in middle and high school, and I can help them become better organized in the classroom, working on note-taking, test-taking, and time management skills as well. I've worked with students specifically in the following subjects: Pre-Algebra, Algebra 1, Geometry, U.S. History, World History, and European History. 29 Subjects: including algebra 1, reading, English, writing ...I have taken a large range of classes in Biology, Chemistry, Physics and Social Sciences. During high school and college, I worked as a peer tutor in all areas of science and since then have continued to tutor all levels of high school and college Biology and Chemistry. I am currently a medical... 16 Subjects: including algebra 2, algebra 1, Spanish, chemistry
{"url":"http://www.purplemath.com/Newtonville_MA_Algebra_tutors.php","timestamp":"2014-04-16T13:52:00Z","content_type":null,"content_length":"24177","record_id":"<urn:uuid:203fdf41-47af-4980-9395-7db9174585d5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
Student Support Forum: 'Integrate problem in version 4' topic Author Comment/Response email me >The following integral works in version 3 but not in version 4. The result I have given is from version 3. >Why does it fail in version 4? > {r, 0, Infinity}, Assumptions -> > {Im[\[Kappa]] == 0, Im[\[Xi]] == 0, Re[\[Xi]] > 0, Re[\[Kappa]] > 0, > \[Kappa] > 0, \[Xi] > 0}] >-((Pi*(\[Kappa]^4 + 2*\[Kappa]^2*\[Xi]^2 + 8*\[Xi]^4))/ > (E^(\[Kappa]^2/(4*\[Xi]^2))*\[Kappa]^4*\[Xi]^4)) >Version 4 gives the error: >Integrate::''idiv'' : ''Integral of \!\(r\^2\\ \(\(Erf[\(\(r\\ \[Xi]\)\)]\)\)\\ \ >\(\(Sin[\(\(r\\ \[Kappa]\)\)]\)\)\) does not converge on \!\({0, \ >\*InterpretationBox[\''\[Infinity]\'', DirectedInfinity[1]]}\).'' >My version evaluates to: >''4.0 for Microsoft Windows (July 16, 1999)'' >Thank you very much. Since this integral is not convergent as an ordinary integral, the Version 4 result shows the intended behavior. The Version 3 result could be considered correct if Integrate is understood as a generalized integration operation, but that is not the intended design. The intended default behavior is for Integrate to check for basic convergence as in Version 4. A convenient way to see that the integral is not convergent as an ordinary integral is to look at a plot of the integrand for numerical values of the symbolic parameters. For example, after replacing both of the symbolic parameters by 1, the input Plot[4 Pi r^2 (Sin[r] r Erf[r])/(r), {r, 0, 100}] reveals an integrand with ever-increasing oscillations which can not be expected to converge. You can get the Version 3 result in Version 4 by including the GenerateConditions -> False option so that Integrate does not check for convergence of the integral: Integrate[4 Pi r^2 (Sin[\[Kappa] r] r Erf[r \[Xi]])/(\[Kappa] r), {r, 0, Infinity}, GenerateConditions -> False] This is the intended way of getting the Version 3 result in Version 4. Forum Moderator Response by David Withoff URL: ,
{"url":"http://forums.wolfram.com/student-support/topics/4163","timestamp":"2014-04-19T22:13:58Z","content_type":null,"content_length":"27817","record_id":"<urn:uuid:76b2b2ca-e1e2-41fe-996d-1f8e8527e62a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
North Wales I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun... 12 Subjects: including calculus, trigonometry, SAT math, ACT Math ...I also received a concentration in Mid-Level Mathematics. I have been teaching for the past five years and am looking to help more students by providing tutoring services. My patience and ability to assist struggling students are a few of my strengths. 14 Subjects: including probability, prealgebra, algebra 1, discrete math ...My bachelors degree is in writing. Prior to selecting that major I was enrolled for 2 years as an engineering major (with a 3.8 GPA). I completed high level math classes up to Calculus 2. I spend most of my days now teaching 8th & 9th graders math. 7 Subjects: including algebra 1, logic, probability, prealgebra I have been working as a statistician at the University of Pennsylvania since 1991, providing assistance to researchers in various areas of health behavior. I am proficient in several statistical packages, including SPSS, STATA, and SAS. One of my particular strengths is the ability to explain sta... 1 Subject: statistics ...Most people don't write the way they speak. I hear bad grammar in classes and in the media daily. Let's understand and focus on the differences. 35 Subjects: including geometry, reading, ACT Math, SAT math
{"url":"http://www.purplemath.com/north_wales_pa_math_tutors.php","timestamp":"2014-04-18T19:19:36Z","content_type":null,"content_length":"23412","record_id":"<urn:uuid:766cc4b4-6fe6-451e-8893-8d4b8fdbcc2d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Reader Comments and Retorts Go to end of page Statements posted here are those of our readers and do not represent the BaseballThinkFactory. Names are provided by the poster and are not verified. We ask that posters follow our submission policy. Please report any inappropriate comments. The John Wetland Memorial Death (CoB) Posted: July 23, 2011 at 03:10 AM (#3883706) Looks publicly viewable to me. Drew (Primakov, Gungho Iguanas) Posted: July 23, 2011 at 03:13 AM (#3883708) I always assume everything BP is behind the paywall. Glad this one isn't. Forsch 10 From Navarone (Dayn) Posted: July 23, 2011 at 04:25 AM (#3883726) And yet they were using SIERA on their site just, what, a couple of months ago? Walt Davis Posted: July 23, 2011 at 04:34 AM (#3883729) Not a big deal but his description of the effects of multicollinearity is wrong. Multicollinearity violates no assumptions of OLS -- in fact, controlling for multicollinearity is the main purpose of OLS -- no matter how high it gets. [EDIT: OK, if the collinearity is perfect, you've got a problem.] What multicollinearity affects is the power of your regression to detect effects. The interpretation of a regression coefficient (for say X1) is the impact of a one-unit change in X1 controlling for all the other Xs. If X1 and X2 are highly collinear, this independent effect of X1 (and the independent effect of X2) are harder to detect. All that really means is you need more sample size or, if you don't have that, it means that you can't tell which of those two variables is important. However, in the case of a regression with quadratic and interaction terms, you simply don't care about collinearity because it is impossible to interpret the effect of X1 controlling for X1-squared. Yhat = b0 + b1*X1 + b2*X1^2 dYhat/dX1 = b1 + 2b2*X1 So the effect varies with the value of X1 -- i.e. the relationship is curvilinear. That means that you are estimating a parabolic relationship between the Y and X1 and, at certain points on that curve, the impact of X1 will be positive, will go to zero, then will go negative (or vice versa). Wyers refers to such sign-switching as a "mistake" but it's not -- the quadratic model is assuming a different functional form from the linear. From a strict testing perspective, all you care about is whether the coefficient b2 is statistically significant. If it is, then the quadratic provides a better fit than the linear. In short, the linear model is the "mistake" if b2 is significant. Now Colin appears to be right _in the case of SIERA_ that the extra complexity provides such a minimal improvement to the overall fit that you might as well stick with the simpler linear model. In essence, he at one point refers to the "fundamental" relationship between two variables but assumes (or strongly implies) that the fundamental relationship between variables is always linear. That of course isn't true. In the extreme, for the function y=k + x^2, there is no linear relationship (no correlation) between y and x whatsoever but obviously you can perfectly predict y if you know x. Back to the equation, something similar occurs with interactions: yhat = b0 + b1*X1 + b2*X2 + b3*X1*X2 dyhat/dX1 = b1 + b3*X2 dyhat/dX2 = b2 + b3*X1 So it's impossible and pointless to think about interpreting the impact of X1 independent of X2 (and vice versa) because you explicitly modelled it so you can't. My main point here being: Don't throw #### into a regression equation unless you know how to interpret the results. Christopher Linden Posted: July 23, 2011 at 04:37 AM (#3883730) SIERA has been a big part of their 2011 annual and season coverage since the offseason. It was apparent a huge amount of work went into it, and was presented with no small degree of fanfare as the #1 ERA estimator/predictor. Now Colin publishes a document where he says SIERA works pretty well, but no so much well that it's worth the added complexity compared to FIP (which BP will now be using) and xFIP. Some commenters on the BP thread are seeing a connection in the timing; Matt, SIERA's point man and apparent progenitor leaves for Fangraphs, and Colin knocks a swipe at SIERA, only his strongest statement boils down to "SIERA's good as 'em all, but it's not worth the upkeep." BP is going back to FIP (not xFIP) as its primary ERA estimator. Happy Base Ball Chris Needham Posted: July 23, 2011 at 04:41 AM (#3883731) Maybe it's late. Maybe I'm cranky. But this kind of article is exactly what I hate about stathead writing. It's the kind of article that makes me wonder if the Bill Conlins of the world might be on to something. Needlessly long, random charts, incomprehensible stats and references... if you haven't been following the 'debate' about that particular stat, that article is completely incomprehensible. Hell, even if you have been, it is. Tricky Dick Posted: July 23, 2011 at 04:52 AM (#3883735) Some commenters on the BP thread are seeing a connection in the timing; Matt, SIERA's point man and apparent progenitor leaves for Fangraphs, and Colin knocks a swipe at SIERA, only his strongest statement boils down to "SIERA's good as 'em all, but it's not worth the upkeep." I also noticed that he criticized the "refinements" that Fangraphs is making to the original SIERA, saying that Fangraphs is "welcome to this demonstrably redundant measure." I don't have an interest in either side of this debate, but it is a rather tart response. Walt Davis Posted: July 23, 2011 at 05:14 AM (#3883741) I think you're being cranky Chris. In his third paragraph he tells you everything those not in the debate need to know: Some fights are worth fighting. The fight to replace batting average with better measures of offense was worth fighting. The fight to replace FIP with more complicated formulas that add little in the way of quality simply isn’t. FIP is easy to understand and it does the job it’s supposed to as well as anything else proposed. It isn’t perfect, but it does everything a measure like SIERA does without the extra baggage, so FIP is what you will see around here going forward. If you haven't been following the debate (and I haven't) and aren't interested in the nuts and bolts (I sometimes am) there's no need to read further. if you haven't been following the 'debate' about that particular stat, that article is completely incomprehensible. But at some point, "specialist" literature has to assume you've been following the "debate" because it's aimed at a specialist audience. Not that sabermetrics is at a similar level but nobody expects articles in physics journals to rehash Einstein -- if you don't know Einstein, what are you doing reading the journal? If a person doesn't know what FIP, xFIP, SIERA and whatever else are; isn't interested in how they differ; isn't interested in whether they produce different results or lead to different conclusions ... why is that person reading an article on the comparison of FIP, xFIP and SIERA? I will agree it's needlessly long and other issues. In its way, it's overkill in the same way that SIERA seems to be overkill. I suppose that's partly the result of the monster we've created -- if we ##### at Conlin for not using OPS, some subset of us also ##### at BPro for not using "stat of the day". B-R, Bpro, fangraphs, etc. have to be able to defend the stats they choose to present. Christopher Linden Posted: July 23, 2011 at 05:17 AM (#3883742) To me the true takeaway of the article is simply that even after adding batted-ball data and sifting through ground balls, fly balls, data assembled by MLBAM stringers, and who knows how many dead chickens, our ERA estimators don't seem to be a whole lot better than what they've been. That, more than anything, was Colin's point here, I think. In one test there was no discernible difference between any of the four "estimators" (including, AIR, raw ERA) when the error rate was rounded to the tenth. Either we are just at the beginning of learning how our proxies for a pitcher's talent and tendencies can be alchemized into projection gold, or we are at the end, and to paraphrase another former BPer, the numbers have told us all they have to tell us. I feel more comfortable using FIP (or in certain cases with pitchers who might have big outlier HR/FBs, xFIP) for my Vorosian evaluations. Regarding the timing, yeah, it's hard to not see the timing of Colin's article as a slam on the SIERA metric Matt will be bringing to Fangraphs. Happy Base Ball Christopher Linden Posted: July 23, 2011 at 05:18 AM (#3883743) EDIT: Double post. Chris Needham Posted: July 23, 2011 at 05:27 AM (#3883746) I think you're being cranky Chris. In his third paragraph he tells you everything those not in the debate need to know: Wouldn't be the first time I've been called cranky... I'll just point out that the third paragraph appears to be about 300 words into the article. But at some point, "specialist" literature has to assume you've been following the "debate" because it's aimed at a specialist audience. Not that sabermetrics is at a similar level but nobody expects articles in physics journals to rehash Einstein -- if you don't know Einstein, what are you doing reading the journal? Completely fair point. And one I thought about putting in in the prior post. But at the same time, I don't see BP doing as much of the explanatory articles for the non-math, non-acronym folks. Fangraphs does it more frequently -- their stuff certainly can be much more approachable at times -- but I think that 75% (add your own error bar) of the writing out there is preaching to the converted -- or preaching to a sliver of the already converted. AROM Posted: July 23, 2011 at 05:36 AM (#3883747) I pretty much agree with Colin's take here. Same thing I've been ready from Tango since SIERA came out. FIP is a simple formula that I can keep in my head and plug into a calculator even if I'm away from my main computer, and get a result that is 99% as good. If I want something more complex and accurate I'd use my projection system anyway. Colin mentions my name in the article, and links to several minor league pitchers who share my name. I find it funny this case of mistaken identity happens all the time, yet nobody bothers to link to my football stats for the Dolphins. SoSHially Unacceptable Posted: July 23, 2011 at 07:17 AM (#3883760) EDIT: Double post. Double post or not, I still expect a Happy Base Ball. FWIW, the only word I am certain I could identify and define in Walt's first post was "Colin." That's probably another way of saying I haven't been following the debate. Voros McCracken of Pinkus Posted: July 23, 2011 at 07:27 AM (#3883762) A while back I did a base run versions of a DIPS ERA estimator that I thought had some use because it expanded the range of ERA's achieved and also because traditional FIP underrates guys who give up lots of homers but not many baserunners (and overrates the opposite). This would only be appropriate for starters though. Relief pitchers probably aren't worth the trouble. Voros McCracken of Pinkus Posted: July 23, 2011 at 07:54 AM (#3883764) As for multicollinearity, the best way for someone not versed in stats to understand it is as a sample size problem. When you have two independent variables that track very closely with one another in a regression, what can happen is that the coefficient for one of those variables could be determined by a very small number of samples even if the overall sample size is large. This is because only in the samples when the closely tracking variables actually differ does it count toward that coefficient. So if you have height and weight as variables in a study of 100 and almost everybody in the sample is of a "normal" height/weight relationship and only one or two are much heavier or lighter, those one or two samples will determine almost all of the coefficient for either height or weight. bobm Posted: July 23, 2011 at 12:53 PM (#3883785) Among those is SIERA, which has lately migrated from here to Fangraphs.com in a new form, one more complex but not necessarily more accurate. We have offered SIERA for roughly 18 months, but have had a difficult time convincing anyone, be they our readers, other practitioners of sabermetrics, or our own authors, that SIERA was a significant improvement on other ERA estimators. I do not have a dog in this fight, but I found this piece interesting. It's very rare IMO in sabermetrics that someone asks, "What incremental value does this new formula have over the currently used formulas?". It's unfortunate that this bit of introspection--after much initial touting--was only prompted after a year-and-a-half by a defection to a competitor. SG Posted: July 23, 2011 at 02:44 PM (#3883810) Some commenters on the BP thread are seeing a connection in the timing; Matt, SIERA's point man and apparent progenitor leaves for Fangraphs, and Colin knocks a swipe at SIERA, only his strongest statement boils down to "SIERA's good as 'em all, but it's not worth the upkeep." It may be that a behind-the-scenes determination that SIERA wasn't worth the upkeep was the reason for Matt's departure in the first place. So it's not necessarily an after-the-fact swipe. As far as I can tell, Colin's not the intellectually dishonest type. Wins Above Paul Westerberg Posted: July 23, 2011 at 03:42 PM (#3883820) Interestingly, this is what Dave Cameron said this week in regards to SIERA on his Fangraphs chat: "I think SIERA is interesting. I also think the differences between SIERA and xFIP are really quite small. I'm glad we have both on the site, but I'll probably keep referring to xFIP most of the time. In cases where the difference is actually meaningful, it will be nice to be able to refer to SIERA and say "here's why xFIP might be missing something on this specific guy", but overall, they're in agreement on 99% of all pitchers." So I guess BPro isn't alone in seeing a bit of redundancy here, even if they do come off a bit like a scorned lover. puck Posted: July 23, 2011 at 04:26 PM (#3883830) It may be that a behind-the-scenes determination that SIERA wasn't worth the upkeep was the reason for Matt's departure in the first place. So it's not necessarily an after-the-fact swipe. As far as I can tell, Colin's not the intellectually dishonest type. Not just that, but Colin's written about this before , and has posted plenty of comments about the topic on various sites. He very well might not have agreed with using SIERA on the site, or figured, while they have a guy there who wants to maintain it, fine, why pick that particular battle. Walt Davis Posted: July 23, 2011 at 07:58 PM (#3883917) FWIW, the only word I am certain I could identify and define in Walt's first post was "Colin." Not to worry, it was pure stat-geekery with no baseball content. As far as I'm concerned, the prime example of the effects of collinearity in sabermetrics is the "BA doesn't matter" mistake. We could start by regressing (correlating) runs with BA and we'd find a strong, positive relationship. However, in the equation: runs = b0 + b1*BA + b2*OBP + b3*SLG + e the coefficient for BA is low and/or insignificant which would lead one to the conclusion that, once controlling for OBP and SLG, BA doesn't matter. The question is why. Part of the answer is that BA is the largest component of both OBP and SLG -- they are fairly highly collinear. So whatever impact BA has is being captured in OBP and SLG (but not vice versa). The rest of the answer is that the coefficients in that equation are not interpreted the way people think they are. Second point first: What does it mean to vary OBP while holding BA and SLG constant? Basically it means you drew a walk, a clearly positive event. What does it mean to vary SLG while holding BA and OBP constant? Basically it means you added an extra base, a clearly positive event. What does it mean to vary BA while holding OBP and SLG constant? Try it sometime, it's not easy to do. But basically it means something like trading a HR and 2 walks for 2 doubles and a single (and even that probably doesn't work out right) which isn't clearly a positive thing -- but who the hell wants to know the impact of that? So first point: We can rewrite the above equation as: b1*BA + b2*OBP + b3*SLG = b1*BA + b2*(BA + ISO_OBP) + b3*(BA + ISO_SLG) = (b1 + b2 + b3)*BA + b2*ISO_OBP + b3*ISO_SLG So BA has a much larger coefficient than the other two. Also we now have more sensibly interpreted coefficients. A change in BA while holding ISO_OBP and ISO_SLG constant is a single; a change in ISO_OBP is still a walk; a change in ISO_SLG is still an extra base. The main problem of course being that those have different denominators which is just dumb so simplify as: c1*(H/PA) + c2*(BB + HBP/PA) + c3*(XB/PA) and you have a nice, clean, easy to interpret regression ... which will show you that H/PA has the largest coefficient. The various linear run estimators lead you to the same conclusion -- the value of a base is roughly constant regardless of how that base was obtained but every hit adds a substantial run-scoring bonus. Der-K: Hipster doofus Posted: July 23, 2011 at 08:03 PM (#3883919) Nicely put. base ball chick Posted: July 23, 2011 at 09:47 PM (#3883944) b1*BA + b2*OBP + b3*SLG = b1*BA + b2*(BA + ISO_OBP) + b3*(BA + ISO_SLG) = (b1 + b2 + b3)*BA + b2*ISO_OBP + b3*ISO_SLG o kayyyyyyyyyyy can someone explain to me exactly what SIERA is looking for - leaving out math words i don't understand what they mean like coefficience? i understand that FIP imagines that fielders don't matter because there is so little statistically significant difference between the best fielders and the worst that if you looked at any pitcher and pretended that he had the worst 8 fielders in the ML or the 8 best fielders in the ML he would have the exact same FIP even if he didn't have the same ERA because he wouldn't change a thing about how he pitches. i know you use FIP in fantasy baseball to decide what pitchers to pick, but is the change from FIP to SIERA enough to statistically significantly increase your winning percentage? Nivra Posted: July 23, 2011 at 11:18 PM (#3883971) Posting here since BP doesn't allow posting by non-subscribers. I can't believe neither this site nor BP comments mentioned what I think was a crucial takeaway from Colin's article: "In a real sense, that’s what we do whenever we use a skill-based metric like xFIP or SIERA. We are using a proxy for regression to the mean that doesn’t explicitly account for the amount of playing time a pitcher has had. We are, in essence, trusting in the formula to do the right amount of regression for us. And like using fly balls to predict home runs, the regression to the mean we see is a side effect, not anything intentional." That's an incredibly incisive indictment of skill-based metrics like FIP, xFIP and SIERA. It calls into question some of the basic tenets of why we are constructing skill-based metrics to begin with. It also opens up a possible avenue into a completely different set of statistics that properly regress its components. "When you create an expected home run rate based on batted ball data, what you get is something that's well correlated with HR/CON but has a smaller standard deviation, so in tests where the standard deviation affects the results, like root mean square error, it produces a better-looking result, without adding any predictive value." This also opens up possibilities for better methods for analyzing model fit and predictive accuracy. I think that Colin made an error by introducing these two main points in an article surrounding the SIERA vs. FIP, which then became a discussion regarding formula complexity. These two points are independent and distinct, and I'd love to see more discussion about the points made above, rather than the complexity issue. Jittery McFrog Posted: July 23, 2011 at 11:29 PM (#3883978) can someone explain to me exactly what SIERA is looking for - leaving out math words i don't understand what they mean like coefficience? There are much people around here who are waaaaay more qualified than me to answer, but since no one has chimed in yet: FIP looks at HR, BB, and K rates and treats these as the basic skills. It doesn't look at how the effects of these skills interact. So if a pitcher starts walking an extra guy per inning, his FIP goes up by 3, regardless of what his HR or K rates are. SIERA tries to take into account how the skills interact. For example, if a HR-prone pitcher starts walking extra batters, his SIERA goes up more than if he were a low-HR guy. I think that's the basic idea. The article introducing SIERA is Jittery McFrog Posted: July 24, 2011 at 12:05 AM (#3884014) That's an incredibly incisive indictment of skill-based metrics like FIP, xFIP and SIERA. [...]I think that Colin made an error by introducing these two main points in an article surrounding the SIERA vs. FIP, which then became a discussion regarding formula complexity. I agree on both points, although I think there are a couple of (quite fixable) problems with his critique. One was here: So what happens if we give all the ERA estimators (and ERA) the same standard deviation? We can do this simply, using z-scores. I gave each metric the same SD as xFIP [...] ERA FIP xFIP SIERA RMSE 1.94 1.87 1.86 1.84 In essence, what we’ve done here is very crudely regress the other stats (except SIERA, which we actually expanded slightly) and re-center them on the league average ERA of that season. Now SIERA slightly beats xFIP, but in a practical sense both of them are in a dead heat with FIP, and ERA isn’t that far behind. As I said, this is an extremely crude way of regressing to the mean. If we wanted to do this for a real analysis, we’d want to look at how much playing time a player has had. In this example we regress a guy with 1 IP as much as a guy with 200 IP, which is sheer lunacy. It seems to me that the most important claim of the article hasn't been tested properly. If you think the claims for SIERA "are as imaginary as they can get", then why not see if they still disappear when a non-crude regression is applied to each of ERA, xFIP, FIP and SIERA? For example, why not, for each predictor, regress to the SD that provides the lowest RMSE? If I want to know whether SIERA is measuring something that other predictors aren't, I want to see which is closest when each is at its best, not which is closest after each has been run through a crude filter. Now maybe he has done something like this; if so, I haven't found it yet, which may very well be my fault. Also, one of the claims that the creators of SIERA made is that Looking through all of these different tests, it is apparent not only that SIERA is the best ERA estimator currently available, but specifically that it is exceptionally strong at measuring the skill level of specialized kinds of pitchers. If SIERA claims to measure the skill of certain pitchers particularly well, then one should probably test those separately before "moving on". tshipman Posted: July 24, 2011 at 12:11 AM (#3884023) That's an incredibly incisive indictment of skill-based metrics like FIP, xFIP and SIERA. It calls into question some of the basic tenets of why we are constructing skill-based metrics to begin with. It also opens up a possible avenue into a completely different set of statistics that properly regress its components. Isn't the purpose of regressing home runs to fly balls to find a proper regression? I don't understand what you mean here. Nivra Posted: July 24, 2011 at 04:41 AM (#3884167) Jittery: I agree with that, as well, with a small caveat. IMO, the crude regression shows that the difference between xFIP and FIP and SIERA are negligible, already. 1.84 vs. 1.86 and 1.87? That's the tiniest of incremental gains, if it even is significant, which I doubt. It definitely does call for more research regarding proper regression of analytic metrics like ERA, FIP, xFIP and SIERA. I think a viable finish to the article would be a call for further research in this area inasmuch as this is the first time I've seen this particular critique, and it's an extremely important matter for these metrics. #26/tshipman: The argument xFIP makes is that HR/FB should be 100% regressed to league avg, essentially assuming that pitchers have very little ability to influence HR/FB. The fact that xFIP appears to predict ERA better than FIP supports this assumption. Colin is making the argument that the assumption is incorrect. xFIP does better than FIP simply because FB-rate has lower variance than HR/PA. Thus, by using lgAvg HR/FB * pitcher FB/PA, you reduce the variance in your metric. This makes the predictor better because you are "regressing to the mean." The question with regression is always, "how much do you regress?" xFIP randomly chooses a regression amount based on the variance difference from HR/PA to FB/PA. This is most likely not the optimal amount to regress, but it's better than doing no regression (FIP). That's why xFIP does better than FIP. It's not because HR/FB is outside of pitcher ability, in fact, Colin gives data that clearly shows pitchers do control HR/CON has additional value over and above FB/CON. He basically calls into question how effective all the newer estimators are outside of their ability to regress to the mean. Is FIP better because pitchers don't influence BABIP or because FIP regresses more than ERA? Is xFIP better than FIP because pitchers don't influence HR/FB or because xFIP regresses more than FIP? He gives evidence that the bulk of the improvement we see for xFIP and SIERA is due to regression, and also that the difference between FIP/xFIP/SIERA and ERA is much smaller than previously thought. What he doesn't do is a more rigorous analysis (like Jittery is calling for above) of how much of the improvement is due to regression vs. actually reducing noise due to abilities outside of the pitchers' control. This is definitely the next step in this line of analysis. The step after that would be to rethink the way we calculate these metrics and how the different components regress independently, and come up with a new metric that does this properly. Austin Posted: July 24, 2011 at 04:57 AM (#3884171) Colin always seems particularly incendiary when he writes an article, and this one certainly comes off that way. For the record, even having a statistic like FRA to measure the value already contributed by the pitcher, I don't think it's a problem to have "mini projection systems" (as Colin insinuates that they are) like SIERA to fill the role of predicting future performance. Projection systems can't generally be run on a daily basis, so component-based statistics that purport to measure underlying skill can still be useful even if Colin's criticisms are taken to be accurate. tshipman Posted: July 24, 2011 at 05:32 AM (#3884177) #26/tshipman: The argument xFIP makes is that HR/FB should be 100% regressed to league avg, essentially assuming that pitchers have very little ability to influence HR/FB. The fact that xFIP appears to predict ERA better than FIP supports this assumption. Colin is making the argument that the assumption is incorrect. xFIP does better than FIP simply because FB-rate has lower variance than HR/PA. Thus, by using lgAvg HR/FB * pitcher FB/PA, you reduce the variance in your metric. This makes the predictor better because you are "regressing to the mean." The question with regression is always, "how much do you regress?" xFIP randomly chooses a regression amount based on the variance difference from HR/PA to FB/PA. This is most likely not the optimal amount to regress, but it's better than doing no regression (FIP). That's why xFIP does better than FIP. It's not because HR/FB is outside of pitcher ability, in fact, Colin gives data that clearly shows pitchers do control HR/CON has additional value over and above FB/CON. Okay. I see what you mean here now. You're right that it's interesting, but I don't know how much more information even a perfectly regressed statistic provides. Thank you for explaining it. Nivra Posted: July 24, 2011 at 07:26 AM (#3884182) Discussion over on the Book about this: http://www.insidethebook.com/ee/index.php/site/comments/siera_updated/ studes Posted: July 24, 2011 at 11:37 AM (#3884195) I don't see anything new in this regression discussion. Things like FIP and xFIP have always been acknowledged to be "lazy" regression. But that's because they were both intended to be more descriptive than predictive--there were intended to isolate certain parts of a pitcher's performance and come closer to some sort of "truth." Certainly, that glimpse of the "truth" can be improved upon by using better regression techniques. In my mind, that's what projection systems should do. However, I don't feel qualified to state whether SIERA, or any other stat, is an improvement or not. (and reading Colin's article, along with Matt's reply, won't help. Not enough brain cells in this old head of mine.) You must be Registered and Logged In to post comments. << Back to main
{"url":"http://www.baseballthinkfactory.org/newsstand/discussion/wyers_bp_lost_in_the_siera_madre","timestamp":"2014-04-18T23:30:07Z","content_type":null,"content_length":"59985","record_id":"<urn:uuid:c4300cf7-0421-4134-92de-9b33946899e8>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from February 19, 2009 on The Unapologetic Mathematician We would like to define the multiplicity of an eigenvalue $\lambda$ of a linear transformation $T$ as the number of independent associated eigenvectors. That is, as the dimension of the kernel of $T- \lambda1_V$. Unfortunately, we saw that when we have repeated eigenvalues, sometimes this doesn’t quite capture the right notion. In that example, the ${1}$-eigenspace has dimension ${1}$, but it seems from the upper-triangular matrix that the eigenvalue ${1}$ should have multiplicity ${2}$. Indeed, we saw that if the entries along the diagonal of an upper-triangular matrix are $\lambda_1,...,\lambda_d$, then the characteristic polynomial is Then we can use our definition of multiplicity for roots of polynomials to see that a given value of $\lambda$ has multiplicity equal to the number of times it shows up on the diagonal of an upper-triangular matrix. It turns out that generalized eigenspaces do capture this notion, and we have a way of calculating them to boot! That is, I’m asserting that the multiplicity of an eigenvalue $\lambda$ is both the number of times that $\lambda$ shows up on the diagonal of any upper-triangular matrix for $T$, and the number of independent generalized eigenvectors with eigenvalue $\lambda$ — which is $\dim\ So, let’s fix a vector space $V$ of finite dimension $D$ over an algebraically closed field $\mathbb{F}$. Pick a linear transformation $T:V\rightarrow V$ and a basis $\left\{e_i\right\}$ with respect to which the basis of $T$ is upper-triangular. We know such a matrix will exist because we’re working over an algebraically closed base field. I’ll prove the assertion for the eigenvalue $\lambda=0$ — that the number of copies of ${0}$ on the diagonal of the matrix is the dimension of the kernel of $T^d$ — since for other eigenvalues we just replace $T$ with $T-\lambda1_V$ and do the exact same We’ll prove this statement by induction on the dimension of $V$. The base case is easy: if $\dim(V)=1$ then the kernel of $T$ has dimension ${1}$ if the upper triangular matrix is $\begin{pmatrix}{0} \end{pmatrix}$, and has dimension ${0}$ for anything else. For the inductive step, we’re interested in the subspace spanned by the basis vectors $e_1$ through $e_{d-1}$. Let’s call this subspace $U$. Now we can write out the matrix of $T$: We can see that every vector in $U$ — linear combinations of $e_1$ through $e_{d-1}$ — lands back in $U$. Meanwhile $T(e_d)=\lambda_de_d+\bar{u}$, where the components of $\bar{u}\in U$ are given in the last column. The fact that $U$ is invariant under the action of $T$ means that we can restrict $T$ to that subspace, getting the transformation $T|_U:U\rightarrow U$. Its matrix with respect to the obvious basis is The dimension of $U$ is less than that of $V$, so we can use our inductive hypothesis to conclude that ${0}$ shows up on the diagonal of this matrix $\dim\left(\mathrm{Ker}\left((T|_U)^{d-1}\right)\ right)$ times. But we saw yesterday that the sequence of kernels of powers of $T|_U$ has stabilized by this point (since $U$ has dimension $d-1$), so this is also $\dim\left(\mathrm{Ker}\left((T|_U)^ d\right)\right)$. The last diagonal entry of $T$ is either ${0}$ or not. If $\lambda_deq0$, we want to show that On the other hand, if $\lambda_d=0$, we want to show that The inclusion-exclusion principle tells us that Since this dimension of a subspace has to be less than or equal to $d$, the difference in dimensions on the right can only be either zero or one. And we also see that So if $\lambdaeq0$ we need to show that every vector in $\mathrm{Ker}(T^d)$ actually lies in $U$, so the difference in dimensions is zero. On the other hand, if $\lambda=0$ we need to find a vector in $\mathrm{Ker}(T^d)$ that’s not in $U$, so the difference in dimensions has to be one. The first case is easier. Any vector in $V$ but not in $U$ can be written uniquely as $u+xe_d$ for some nonzero scalar $x\in\mathbb{F}$ and some vector $u\in U$. When we apply the transformation $T$, we get $T(u)+x\bar{u}+x\lambda_de_d$. Since $\lambda_deq0$, the coefficient of $e_d$ is again nonzero. No matter how many times we apply $T$, we’ll still have a nonzero vector left. Thus the kernel of $T^d$ is completely contained in $U$, and so we conclude $\mathrm{Ker}(T^d)=\mathrm{Ker}\left((T|_U)^d\right)$. In the second case, let’s look for a vector of the form $u-e_d$. We want to choose $u\in U$ so that $T^d(u)=T^d(e_d)$. At the first application of $T$ we find $T(e_d)=\bar{u}\in U$. Thus $\displaystyle T^d(e_d)=T^{d-1}(\bar{u})\in\mathrm{Im}\left(\left(T|_U\right)^{d-1}\right)$ But the dimension of $U$ is $d-1$, and so by this point the sequence of images of powers of $T|_U$has stabilized! That is, $\displaystyle T^d(e_d)\in\mathrm{Im}\left(\left(T|_U\right)^{d-1}\right)\subseteq\mathrm{Im}\left(\left(T|_U\right)^d\right)$ and so we can find a $u$ so that $T^d(u)=T^d(e_d)$. This gives us a vector in the kernel of $T$ that doesn’t lie in $U$, and the inductive step is complete. As a final remark, notice that the only place we really used the fact that $\mathbb{F}$ is algebraically closed is when we picked a basis that would make $T$ upper-triangular. Everything still goes through as long as we have an upper-triangular matrix, but a given linear transformation may have no such matrix. • Recent Posts • Blogroll • Art • Astronomy • Computer Science • Education • Mathematics • Me • Philosophy • Physics • Politics • Science • RSS Feeds • Feedback Got something to say? Anonymous questions, comments, and suggestions at • Subjects • Archives
{"url":"http://unapologetic.wordpress.com/2009/02/19/","timestamp":"2014-04-17T18:38:02Z","content_type":null,"content_length":"59787","record_id":"<urn:uuid:480dc044-c7b9-4375-addf-ecdcf0f0497e>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
Applications of Intergration (Work related) June 11th 2006, 04:28 PM Applications of Intergration (Work related) 1. A heavy rope, 50ft long, weights 0.5 lb/ft and hangs over the edge of a building 120 ft high. a) How much work is done in pulling the rope to the top of the building? b) How much work is done in pulling half the rope to the top of the building? My solution: Work = Force x Distance W = $\int Fxdx$ Force = (Density) = (0.5) So Fx = 0.5x Then: $\int_{0}^{50} 0.5 x dx$ Is that correct? I have no idea how to do the b part. Thank you! P.S. How come I can't do the upper and lower limit of defined intergration? I used \int^50_0 and it didn't work. June 11th 2006, 05:38 PM For the syntax question, you reversed the order. I fixed it for you. It should be int_{0}^{50}. Look at your units for a guide. You know that the unit of work must be lb*ft. The force is not equal to the density. The lb is a unit of force. You must multiply $\frac{.5 lb}{ft} \times 50ft$ to get the correct unit for force. Now setup your integral. $W=\int_{0}^{50} 25xdx$ June 11th 2006, 08:16 PM Consider a length element $dx$ which is $x$ft below the top of the building. To lift this to the top of the building will require: $dW=\rho. dx. g. x$ work in whatever units we are using. So the work required to lift the whole rope is: $W = \int_0^{50} \rho. g .x\ dx$, where $\rho$ is the mass of the rope per unit length, $g$ is the acceleration due to gravity. Now if we are using pounds as our unit of mass, and the foot as our unit of length, then $g\approx 32\ \mbox{ft/s^2}$, and $W$ is in units of foot poundals, to convert it to foot pounds we would divide by $g$, then: $W = 0.5 \int_0^{50} x\ dx\ \mbox{ft.lb}$. 1. This to some extent demonstrates the absurdity of customary units, which in this context manages to confuse units of mass and force. 2. This demonstrates the absurdity of exercises in integration of this type. This can be done in one line without integration as the work done is the change in potential energy when a mass equal to the mass of the rope is lifted from the position of the ropes c-of-g to the top of the building. June 12th 2006, 10:52 AM Now I'm a bit puzzled CaptainBlack. If you integrate a unit of ft-lb with respect to ft, the resulting unit would be ft^2-lb, which isn't the unit for work. What am I missing? June 12th 2006, 08:07 PM Originally Posted by Jameson Now I'm a bit puzzled CaptainBlack. If you integrate a unit of ft-lb with respect to ft, the resulting unit would be ft^2-lb, which isn't the unit for work. What am I missing? Sorry if the notation is confusing, the ft-lb denotes the units of W. The integral is a density (now in slightly peculiar units of lb-force per unit length) times an integral of something in ft wrt something in ft. So we have [W]=[lb-force]/[ft] x [ft]^2=[lb-force][ft]
{"url":"http://mathhelpforum.com/calculus/3384-applications-intergration-work-related-print.html","timestamp":"2014-04-18T12:27:46Z","content_type":null,"content_length":"9746","record_id":"<urn:uuid:621df5a4-508d-4746-bd27-61f01320248c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Encyclopaedia Index Click here for Contents List Brian Spalding, of CHAM Ltd December, 1999 Paper presented at I Mech E, London, December 14,1999 in "CFD - Technical Developments and future Trends" Also available as: www.cham.co.uk/phoenics/d_polis/d_lecs/cmbstr5/cmbstr5.htm Abstract of the March 1999 lecture Contents of the present lecture • To attain the flow-prediction aims of CFD, we need to ascribe values to time-averages of non-linear functions of fluid variables, for example: where T is the absolute temperature. • Other quantities which we need to evaluate are time-averages of multiplication products, for example: for shear stresses and for chemical reactions, where u and v are instantaneous velocity components, and a and b are concentrations. • "All" that we need, to enable us to do so, is knowledge of the relevant "probability-density functions", for example the left-hand figures in: Note that the right-hand-side diagrams are reminders of the "inter-mingled fluid population" concept of a turbulent fluid, which underlines the "multi-fluid model" • It is now possible to calculate the PDFs; but it was not in 1942, when A.N.Kolmogorov had a "bright idea", namely: □ "Let's see if we can devise differential equations which have just one or two statistical quantities as the dependent variables. □ "Then, if we can solve these equations, and □ "if we can find sufficiently general empirical constants to insert in them, and □ "if we can connect these quantities to the ones we want by empirical relationships, □ "maybe we can do without the PDFs altogether". • One of the empirical relationships (already proposed by Boussinesq) was that turbulent flows were sufficiently like laminar ones for shear stresses, for example <u*v> (say) to be related to gradients of time-mean velocity by way of an "effective viscosity"; then this could be computed from the "dreamed-up" equations for the statistical quantities. • Other innovators, for example Ludwig Prandtl and Howard Emmons, had the same idea a little later; but it is fair to say that the whole of modern (sometimes ludicrously called "classical") turbulence modelling, springs from Kolmogorov's "bright idea". • It was a good idea at the time; and it worked fairly well for the (rather undemanding) turbulent shear flows; but is no use at all for chemical reaction [or for flows in which body forces (gravity/swirl) act on fluids exhibiting density fluctuations; but that is not the subject of the present paper] 2. Presuming the PDFs; another "good idea at the time" That knowledge of the PDFs was needed for predicting reaction rates was obvious in the early 1970s; and the first idea was that it might suffice to presume their shape, and devise an additional differential equation so as to find out everything elsew hich was necessary. This notion led to: • the eddy-break-up model (EBU; Spalding, 1971) • the concentration-fluctuations model (CFM; Spalding, 1971) • the eddy-dissipation concept (EDC; Magnussen, 1976) • the two-fluid model (2FM; Spalding, 1981) • and innumerable variants on the same theme All of these involved the supposition that any turbulent mixture could be treated as the inter-mingling of two fluids, the states and mixture fractions of which required to be computed from easy-to-formulate differential equations. This represented an advance on Kolmogorov's "ignore-the-PDFs" approach; but it was not good enough. [Somebody might have thought at the time: If two is not enough, what about four? or eight? or sixteen? etc? Refine the grid, dummy! But that did not happen for another 24 years!] So the next invention (by Bray, 1980) was the "flamelet" model, which involves the presumption that the turbulent mixture consist of: 1. fully-burned gas at the local time-average fuel-air ratio; 2. fully-unburned gas at the local time-average fuel-air ratio; 3. and a small amount of intermediate-state gas with a PDF which is the same as that prevailing in laminar steadily-propagating one-dimensional flames. This enables CFD/chemistry specialists to perform expensive calculations; but, in the present author's view, has no other merit (if that is the right word) whatever. • Presumed-PDF methods are what are mainly used by "high-tech" engineering companies at the present time. Nevertheless direct methods of calculating PDFs have been available for many years. • The "how-to-do-it" idea was provided by Dopazo and O'Brien in 1974; however, those authors were not numerical analysts at the time, so provided no solutions. • In 1982, Pope started to solve the relevant equations; but he used a "Monte Carlo" method, which proved to be expensive in terms of computer time. This may have given the "compute-the-PDF" approach a bad name. It is indeed little used in engineering practice. • More recently, the present author made the even-more-direct approach of discretising the PDF, and solving for its ordinates. This so-called "Multi-Fluid-Model (MFM)" approach has proved to be simple in concept, economical in implementation, and realistic in its predictions. This is what "dummy" should and could have done many years before. Turbulence-modelling history is a catalogue of missed opportunities and false starts.] • MFM can be regarded as what EBU should swiftly have developed into in the 1970s, having as many fluids, and as many PDF dimensions ( 2 will be quite enough for the time being), as the situation • MFM is "too new" (five-years-old!) to have been adopted in engineering practice. • At some time in the next millennium it will be (the author believes); perhaps even in Year 2000. Since the "laminar-flamelet model LFM" is the most "advanced" which is currently used by engineering companies, it is worth exploring the relations between it and MFM. This has been done in a recent paper, which shows that MFM reduces to LFM in restricted circumstances; but it has a much wider range of validity. The highlights of the just-mentioned paper can be seen by clicking here. MFM is not just a scientist's plaything: it can already be used to enable better designs to be distinguished from worse ones. A recent paper illustrates this by showing how MFM enables the smoke-generating propensities of gas-turbine-combustor designs to be predicted. The highlights of this paper can be seen by clicking here. It is the author's view that all time spent on CFD calculations incorporating the "presumed-PDF" approach is wasted; and, if design decisions are based on their outcome, the desisions will be correct only by chance. Those who have considered but do not use the alternative, namely calculating the PDFs, argue only: • it is too expensive (which may be true of Monte Carlo, but is certainly not of MFM); • what we have already is good enough (which is hard to prove); • the superiority of MFM has not been proved (which is true of anything which one has not tried). To these arguments it can only be answered that: • Kolmogorov's idea was adopted only because of its inherent plausibility and practicability; • the same was true of EBU, EDC, presumed-PDF, and all the rest; • none of these were "proved", "validated", "generally accepted" before they were taken up; nor could they have been. • How interesting it is that the conjectures of almost thirty years present such obstacles to the innovations of the 1990s! • Will the new millennium allow us to be more adventurous? 7. References 1. DB Spalding (1971) "Mixing and chemical reaction in confined turbulent flames"; 13th International Symposium on Combustion, pp 649-657 The Combustion Institute 2. DB Spalding (1971) "Concentration fluctuations in a round turbulent free jet"; J Chem Eng Sci, vol 26, p 95 3. BF Magnussen and BH Hjertager (1976) "On mathematical modelling of turbulent combustion with special emphasis on soot formation and combustion". 16th Int. Symposium on Combustion, pp 719-729 The Combustion Institute 4. Bray KNC in Topics in Applied Physics, PA Libby and FA Williams, Springer Verlag, New York, 1980, p115 5. SB Pope (1982) Combustion Science and Technology vol 28, p131 6. C Dopazo and EE O'Brien (1974) Acta Astronautica vol 1, p1239 DB Spalding (1999) "The use of CFD in the design and development of gas-turbine combustors"; www.cham.co.uk; shortcuts; CFD DB Spalding (1995) "Models of turbulent combustion" Proc. 2nd Colloquium on Process Simulation, pp 1-15 Helsinki University of Technology, Espoo, Finland DB Spalding (1998) The simulation of smoke generation in a 3-D combustor, by means of the multi-fluid model of turbulent chemical reaction: Paper presented at the "Leading-Edge-Technologies Seminar" on "Turbulent combustion of Gases and Liquids", organised by the Energy-Transfer and Thermofluid-Mechanics Groups of the Institution of Mechanical Engineers at Lincoln, England, December 15-16, 1998 Spalding DB (1999) "Connexions between the Multi-Fluid and Flamelet models of turbulent combustion"; www.cham.co.uk; shortcuts; MFM
{"url":"http://www.cham.co.uk/phoenics/d_polis/d_lecs/cmbstr5/cmbstr5.htm","timestamp":"2014-04-19T14:28:51Z","content_type":null,"content_length":"13324","record_id":"<urn:uuid:f5586910-8c02-45fb-ada0-85935529f5df>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Cracking Da Vinci'sCode On Tree Growth In winter, the sight of trees left bare after its leaves have fallen can leave some feeling gloomy. But for Renaissance genius Leonardo da Vinci, the view was a source of inspiration, which eventually led him to formulate a theory about the relationship between the size of a tree’s truck and the combined measure of its branches. Now, for the first time, a French researcher says he has come up with a scientific explanation for Da Vinci’s 500-year-old theory. In one of his more than 7,000 pages of notes, Da Vinci laid out his tree formula. He calculated that the squared diameter of a tree’s trunk is equal to the squared sum of the diameters of its branches. Moreover, if a trunk is split into two sections, the sum of the thickness of all the branches coming off of one of the section is equal to the thickness of that section of trunk. Up to now, no one had even offered scientific explanation to this theory. Christophe Eloy, a visiting professor at the University of California at San Diego and a specialist in fluid mechanics, may have finally found an answer. Most botanists thought that the law of nature behind Da Vinci’s formula was an efficient way to transport sap to leaves. Eloy disagrees. He thinks trees are structured as they are in order to protect themselves from damages caused by the wind. Five years ago, Eloy became interested in Leonardo’s formula and the mechanics of trees after reading the book Plaidoyer Pour L'arbre (Plea for the Tree) by Francis Hallé, a specialist in tropical forests. Eloy later attended a lecture by professor of fluid mechanics Emmanuel de Langre at the Ecole Polytechnique in Paris on the relation between wind and plants. “De Langre spoke about Leonardo’s formula, and that it was like a Pythagoras theorem for trees, except that no one had explained it yet,” he recalled. A question of mechanical engineering In his research, Eloy used an analytic model and a numeric model to computer-design the lightest possible tree structure that would still be able to resist the wind. “In the first model, I developed some hypothesis to simplify the geometry of ramification. I took into consideration that a tree’s shape is fractal by nature, meaning that the same pattern repeats itself in smaller structures, such as smaller branches,” says Eloy. “The problem to explain is one of mechanical engineering, to understand how the diameter of the branches should change so that each point of the structure had same probability of breaking when the wind blows.” The numeric model was used in order to evaluate the effectiveness of some assumptions of the first model. “I used a simple scheme with few parameters to generate the skeleton of the trees in 3-D. Some of the copies of the principal branches were summed up repeatedly in order to create a virtual structure. The diameter of each branch was calculated against the constant of the probability of breaking in the wind,” Eloy says. “The two models perfectly fit with each other and with Leonardo’s formula.” This finding might have several applications, according to the researcher, including a better understanding of the mechanism of how plants grow and resist outside elements. It could be relevant for forestry and agricultural businesses, as well as bioengineering, says Eloy. Such rules of nature occur every day under our eyes. But only Da Vinci, who as both a scientist and a painter, was an attentive enough observer of nature to be able to understand it. “He analyzed and debated his discoveries in a very modern way,” says Eloy. “His intuitions were clear and complex at the same time, and are studied and debated by scientists to this day.” News in English via Worldcrunch.com
{"url":"http://www.lastampa.it/2011/12/15/esteri/lastampa-in-english/cracking-da-vinci-scode-on-tree-growth-fddNfBK6pxThfIl4hbFD9H/pagina.html","timestamp":"2014-04-21T05:07:08Z","content_type":null,"content_length":"36093","record_id":"<urn:uuid:652bfbd1-e94c-44c0-86e8-b6fed168c0a9>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
(atomic macro) generate subgoals using induction Major Section: PROOF-CHECKER-COMMANDS induct, (induct t) -- induct according to a heuristically-chosen scheme, creating a new subgoal for each base and induction step (induct (append (reverse x) y)) -- as above, but choose an induction scheme based on the term (append (reverse x) y) rather than on the current goal General Form: (induct &optional term) Induct as in the corresponding :induct hint given to the theorem prover, creating new subgoals for the base and induction steps. If term is t or is not supplied, then use the current goal to determine the induction scheme; otherwise, use that term. Note: As usual, abbreviations are allowed in the term. Note: Induct actually calls the prove command with all processes turned off. Thus, you must be at top of the goal for an induct instruction.
{"url":"http://planet.racket-lang.org/package-source/cce/dracula.plt/1/6/language/acl2-html-docs/ACL2-PC_colon__colon_INDUCT.html","timestamp":"2014-04-16T13:11:04Z","content_type":null,"content_length":"1742","record_id":"<urn:uuid:3409cb06-398c-494d-8b9d-456447c74434>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: solve for x 5/2x + 1/4x = 7/4 + x thank you • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50ff4e33e4b00c5a3be64c61","timestamp":"2014-04-20T19:00:51Z","content_type":null,"content_length":"44480","record_id":"<urn:uuid:32991e32-adb2-46a6-ad1a-2997526bce20>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Can someone check my work? • one year ago • one year ago Best Response You've already chosen the best response. Sure thing. Best Response You've already chosen the best response. I have to solve systems of equations, and I chose the elimination method, but for some reason the last step isn't working. b) 7x – 3y = 20 5x + 3y = 16 Elimination Method 7x – 3y = 20 5x + 3y = 16 So with the elimination method you want to manipulate one equation so that when you add the two equations together one will cancel out. You have: 7x – 3y = 20 5x + 3y = 16 Eliminate one variable. Since the sum of the coefficients of y is 0, add the equations to eliminate y. 7x – 3y = 20 5x + 3y = 16 12x = 36 Now we can find: x = 3 In order to solve for y, take the value for x and substitute it back into either one of the original equations. 5(3) + 3y = 16 15 + 3y = 16 Best Response You've already chosen the best response. ^ This Best Response You've already chosen the best response. Is actually messed up. Best Response You've already chosen the best response. Could you show me where i went wrong? Best Response You've already chosen the best response. I cannot understand how many ques you have posted up there . Can you please post one ques at a time. Best Response You've already chosen the best response. I only posted one question..? I just asked if you could show me where in the problem did I do something wrong. Best Response You've already chosen the best response. What are the equations and what is your answer for that. Best Response You've already chosen the best response. I am sorry i am having a hard time understanding what you hav done above.@ValentinaT Best Response You've already chosen the best response. Okay. The equation is: 7x – 3y = 20 5x + 3y = 16 I used the elimination method of solving systems to solve it. So with the elimination method you want to manipulate one equation so that when you add the two equations together one will cancel out. You have: 7x – 3y = 20 5x + 3y = 16 Eliminate one variable. Since the sum of the coefficients of y is 0, add the equations to eliminate y. 7x – 3y = 20 5x + 3y = 16 12x = 36 Now we can find: x = 3 In order to solve for y, take the value for x and substitute it back into either one of the original equations. 5(3) + 3y = 16 15 + 3y = 16 Best Response You've already chosen the best response. your answer should be x=3 y=1/3 Best Response You've already chosen the best response. A fraction for y? Best Response You've already chosen the best response. 15+3y=16 1)subract both sides by 15 2)divide both sides by 3 Best Response You've already chosen the best response. There can be a fraction and you can always check by putting the (x,y) value in your orignal equation and checking If, LHS=RHS then your answer is correct :) Best Response You've already chosen the best response. Thank you. Best Response You've already chosen the best response. Welcome :) Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50f5b0b0e4b061aa9f9ac72e","timestamp":"2014-04-18T08:06:06Z","content_type":null,"content_length":"64298","record_id":"<urn:uuid:3cc12aa2-3ddb-421d-bc8b-c85582d15d4c>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
Newtons Law Question For part c) it is asking for the net force acting on the car, so I am thinking that I would add the force of the car and the force of the trailer? (I guess the problem is I cannot get the force of the car :-/ ) Hi laurenflakes! (I think you're confused about what forces there are, and you're ignoring the force from the car's engine … I know it isn't mentioned in the question a), the trailer, you got it right … net (total) force = ma. For c), the car, you should do the same thing … net (total) force = ma. That net force is the force the trailer plus the force from the engine. But for b), the car, the force the trailer can't be found from F = ma, but it can be found from the answer to a), using a principle … can you see which principle?
{"url":"http://www.physicsforums.com/showthread.php?t=342133","timestamp":"2014-04-20T00:49:36Z","content_type":null,"content_length":"55471","record_id":"<urn:uuid:263d813d-8d31-4c16-9579-3631811f990e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
Strafford, PA Algebra 2 Tutor Find a Strafford, PA Algebra 2 Tutor ...Also, I have experience instructing elementary-age children in a home-schooling environment. I consider one of the most important elements of science to be researching the correct answer. I have a strong background in research, as is necessary for any advanced science degree. 20 Subjects: including algebra 2, reading, statistics, biology ...While it will be completely up to you to determine what is best, I recommend at least 1 hour per week on a weekly basis to start. Thank you for taking the time to read about my experience and passion for science and learning. If you are interested in having me as your tutor, I look forward to hearing from you! 9 Subjects: including algebra 2, chemistry, geometry, algebra 1 I am currently employed as a secondary mathematics teacher. Over the past eight years I have taught high school courses including Algebra I, Algebra II, Algebra III, Geometry, Trigonometry, and Pre-calculus. I also have experience teaching undergraduate students at Florida State University and Immaculata University. 9 Subjects: including algebra 2, geometry, GRE, algebra 1 ...Solve polynomials. 7. Solve problems involving fractions. 8. Solve problems by using factoring. 9. 27 Subjects: including algebra 2, calculus, geometry, trigonometry I have been teaching Algebra and middle school math for 4 years in Camden, NJ. My experience includes classroom teaching, after-school homework help, and one to one tutoring. I frequently work with students far below grade level and close education gaps. 8 Subjects: including algebra 2, geometry, algebra 1, SAT math Related Strafford, PA Tutors Strafford, PA Accounting Tutors Strafford, PA ACT Tutors Strafford, PA Algebra Tutors Strafford, PA Algebra 2 Tutors Strafford, PA Calculus Tutors Strafford, PA Geometry Tutors Strafford, PA Math Tutors Strafford, PA Prealgebra Tutors Strafford, PA Precalculus Tutors Strafford, PA SAT Tutors Strafford, PA SAT Math Tutors Strafford, PA Science Tutors Strafford, PA Statistics Tutors Strafford, PA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Broad Axe, PA algebra 2 Tutors Center Square, PA algebra 2 Tutors Charlestown, PA algebra 2 Tutors Cynwyd, PA algebra 2 Tutors Gulph Mills, PA algebra 2 Tutors Ithan, PA algebra 2 Tutors Miquon, PA algebra 2 Tutors Oakview, PA algebra 2 Tutors Penllyn, PA algebra 2 Tutors Radnor, PA algebra 2 Tutors Rose Tree, PA algebra 2 Tutors Saint Davids, PA algebra 2 Tutors Southeastern algebra 2 Tutors Valley Forge algebra 2 Tutors Wayne, PA algebra 2 Tutors
{"url":"http://www.purplemath.com/strafford_pa_algebra_2_tutors.php","timestamp":"2014-04-18T19:24:36Z","content_type":null,"content_length":"24016","record_id":"<urn:uuid:83e39d79-c2fe-4cdf-8446-88873a08c0fe>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
Math - Multiple choice Posted by Jen on Tuesday, September 8, 2009 at 7:15pm. The following data describes the rainfall. Jan - 1 Feb - 3 Mar - 1 Apr - 2 May - 2 Jun - 2 Jul - 19 Aug - 21 Sep - 3 Oct - 3 Nov - 2 Dec - 3 Which statement best describes the averages? A. The mean is the least appropriate to use to describe the average rainfall. B. The mean is the most appropriate to use to describe the average rainfall. C. The median is the least appropriate to use to describe the average rainfall. D. The mode is the least appropriate to use to describe the average rainfall. Please help. • Math - Multiple choice - Ms. Sue, Tuesday, September 8, 2009 at 7:22pm Since mean and average are the same, the best answer is B. • Math - Multiple choice - PsyDAG, Wednesday, September 9, 2009 at 2:33pm I would disagree with Ms. Sue. Since the mean is most influenced by deviant scores, it would be least appropriate. To test this out, calculate the means with and without the July-August data. I hope this helps. Thanks for asking. Related Questions Probabability - You are given monthly annualized total return data for two ... Math/Trig - The following is a table for the maximum temperature per month, for ... C++ - I am having trouble identifying the flaws in my code. I need to created a ... Geography - Why does London experience spring in MAR-MAY, summer in JUN-AUG, ... accounting - I am trying to figure out the cost of merchandise destroyed on Feb... Accounting - Jan. 1 Merchandise inventory $144,200 Jan. 1–Feb. 7 Purchases (net... Computers - How to do Excel Spreadsheet- Formulas? I have to do an excel ... algebra - Given D=5, E=10 and F=4, evaluate D E/F Multiple choice 1/2 2 25/2 math - during the winter bill blank has been quite concerned about the total ... statistics - Does it ever seem to you that the weather is nice during the work ...
{"url":"http://www.jiskha.com/display.cgi?id=1252451725","timestamp":"2014-04-20T14:18:47Z","content_type":null,"content_length":"9233","record_id":"<urn:uuid:5e02510f-95b4-46f0-94aa-c704d4c6f69c>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Subroutine to solve system of equation with the accuracy better than double precision I'm currently using the GESV subroutine from MKL library to solve the system of equations (A.x=B). The number I used is Double Precision. I see the results are accurate up to 11 digits after the decimal point. Now I would like to have the accuracy is up to 16 digits but I can't find the solution yet. Anyone suggests a solution for that would be highly appreciated. Thanks a heap! RSS Oben So, 16/02/2014 - 03:37 If you're looking for a solution within MKL, the MKL forum would be a more likely choice to get you an answer. Otherwise, you could look into trying the iterative refinement by building the iterative refinement method using a combination of double real(8) and quad real(16) data types. Mo, 17/02/2014 - 14:12 GoldenRetriever schrieb: I'm currently using the GESV subroutine from MKL library to solve the system of equations (A.x=B). The number I used is Double Precision. I see the results are accurate up to 11 digits after the decimal point. Now I would like to have the accuracy is up to 16 digits but I can't find the solution yet. Anyone suggests a solution for that would be highly appreciated. Thanks a heap! It seems to me what you're trying to achieve, "have the accuracy .. up to 16 digits ", would only be possible under a very specific set of conditions for a system of equations, non-linear I assume. You would need to study your A matrix thoroughly, analyze its determinant, its eigenvalues, etc. and look at transformations to minimize any loss of precision during decompostion. And then you can make use of QUAD precision intelligently to improve accuracy. To me, this would be akin to developing a "custom solver" for a given problem set, a departure from using a general-purpose solver from MKL library. But should you figure out how to do this with a library solver like in MKL, please post back on this forum if you can do so. I, for one, will be very keen to learn from your experience and would greatly appreciate your contribution! Mo, 17/02/2014 - 15:34 Potentially you can itterate on the error, by: calculate x for A.x = B calculate the error e = B - A.x calculate x' for A.x' = e Potentially then x+x' is a better solution. You should investigate why A.x-B is only accurate to 10 significant figures, when up to 16 could be possible with real(8) Often due to rounding during the calculation, the above approach does not improve the result. You can try using real(16) (especially for the calculation of e using a real(16) accumulator), but this will only work if you can calculate the coefficients of A and B to real(16) precision. If you populate A and B as real(16) arrays with values calculated as real(8) you can achieve only limited improvement in the "accuracy" of the result, after consideration of the accuracy of the values used in A and B. This is due to the sensitivity of the error e to the inaccuracy of the coeffcicents of A and B. Poor precision is probably due to an "ill-conditioned" A, which can only be corrected by changing the problem definition. Mo, 17/02/2014 - 18:34 16 digits would be especially difficult if you're using double precision, since the type itself is good to only about 15 digits. 11 digits for a complex calculation is about right. Mi, 19/02/2014 - 02:39 Thank you all for the reply. Appreciated that. FortranFan: at this stage I prefer to use any available general-purpose solver such as from MKL library. To develop a kind of "custom solver" would be a bit out of my ability at the moment. Certainly will let you know if I find a way to get around with this. John Campbell: what you suggested would be interesting to try out. I will have a go with it. Tim Prince & Steve Lionel (Intel): Using double precision, my arrays A and B are accurate up to 15 digits; however, the results give out from "GESV" are only 11 digits of accuracy. I include a simple example of my case in the attached file for reference. In this example, the results are the displacements of a structure, they are supposed to be symmetric but some values are not accurately symmetric as expected. I notice MKL has an Expert Driver named gesvx which provides error bounds on the solution. I am not sure if it is something can help in this case. Anyone has experience using that please advise. Mi, 19/02/2014 - 02:43 Thank you all for the reply. Appreciated that. FortranFan: at this stage I prefer to use any available general-purpose solver such as from MKL library. To develop a kind of "custom solver" would be a bit out of my ability at the moment. Certainly will let you know if I find a way to get around with this. John Campbell: what you suggested would be interesting to try out. I will have a go with it. Tim Prince & Steve Lionel (Intel): Using double precision, my arrays A and B are accurate up to 15 digits; however, the results give out from "GESV" are only 11 digits of accuracy. I include a simple example of my case in the attached file for reference. In this example, the results are the displacements of a structure, they are supposed to be symmetric but some values are not accurately symmetric as expected. I notice MKL has an Expert Driver named gesvx which provides error bounds on the solution. I am not sure if it is something can help in this case. Anyone has experience using that please advise. Anhang Größe Herunterladen GESV_0.ZIP 2.34 KB Mi, 19/02/2014 - 09:08 I am moving this thread to the MKL forum. So, 02/03/2014 - 20:23 Did you try what I suggested on solving the error ? Often this does not work, but in some circumstances it can help. The basic approach is: given B = A . x a first solution estimate is x1 this gives e = B - A . x1 solving e = A . xe an improved solution is B = e + A . x1 = A . ( x1 + xe) and e2 = B - A . (x1 + xe) You can check the accuracy by either the maximum absolure error or the sum of the square error terms. ( e'.e ) Calculating e and e2 to higher precision can help. I'd be interested if it proved effective or ineffective, if you had a chance to test it out. Di, 04/03/2014 - 04:34 Thanks John for your follow up. I could not get it work even your method does sound make sense to me. I include a simple example with array of 16x16 in attached file if you interest to have a try with it. Please let me know if you can spot any problem with what I have done. Thanks very much mate. Anhang Größe Herunterladen GESV_140304.zip 2.55 KB Di, 04/03/2014 - 20:40 I removed all the libraries and replaced the solver with a real*16 solver. I also placed code to check the size of the coefficients against the size of the error terms. After cycle 1, the max coefficient term when calculating the error matrix is 6.45052935008452D+9 The max error is 7.666822057217360E-007, which implies an accuracy of about 16 significant figures. The determinant of the AA matrix is 2.085791801338600E+123. This AA matrix appears to be a well conditioned matrix and there is minimal improvement in error after 2 cycles. There is little opportunity to improve or demonstrate improvement of the error. I think that the basic problem you have is the error you are looking for an error magnitude of 1.e-10, where you should be looking for an error in terms of significant figures of error. For the changes I made, I am getting 16 significant figures, while I expect your original code provided something similar. For some reason the attachment did not work. I will paste the code I changed below. Excuse the expected layout. Will this IDZ ever get fixed !! ! USE MKL95_LAPACK ! USE mkl95_blas IMPLICIT NONE INTEGER*4, parameter :: N = 16 ! size of problem integer*4, parameter :: LIMIT = 11 ! number of itterations INTEGER I, J, k, ERROR_CODE real*8 A(N,N), B(N,1), AA(N,N), X(N,1) , XX(N,LIMIT), E(N,1) real*8 ErrorMax, s, determinant, v real*16 acum SUBROUTINE MATRIX_SOLVE (MATRIX_IN, RHS, determinant, ERROR_CODE) real*8, dimension(:,:), intent (in) :: matrix_in real*8, dimension(:,:), intent (inout) :: rhs real*8, intent (out) :: determinant integer*4, intent (out) :: error_code END SUBROUTINE MATRIX_SOLVE END INTERFACE ! BEGIN Read Input File OPEN(111,FILE='TDReview_FEM-K_REDUCE.csv',STATUS='UNKNOWN', ACTION='READ') READ(111,*) ((A(i,j), j=1,N), i=1,N) OPEN(112,FILE='TDReview_FEM-LOAD_REDUCE.csv',STATUS='UNKNOWN', ACTION='READ') READ(112,*) ((B(i,j), j=1,1), i=1,N) ! END Read Input File ! check A for symmetry s = 0 v = 0 do i = 1,n do j = i+1,n s = max (s, abs(a(i,j)-a(j,i)) ) v = max (v, abs(a(i,j)) ) end do end do write (*,*) 'Max symmetry error =',s write (*,*) 'Max coefficient =',v DO i=1,LIMIT AA = A ! Reasign AA which is modified when calling GESV ! X(:,1) = SUM(XX,DIM=2) do j = 1,n acum = 0 do k = 1,i acum = acum + xx(j,k) end do X(j,1) = acum end do ! E = B - MATMUL(A,X) v = 0 do j = 1,n acum = B(j,1) do k = 1,n acum = acum - A(j,k)*X(k,1) v = max ( v, abs(A(j,k)*X(k,1)) ) end do E(j,1) = acum end do ErrorMax = MAXVAL(ABS(E)) s = 0 do k = 1,N s = s + ABS (E(k,1)) end do WRITE (*,*) 'LOOP',I write (*,*) ' Max error term :', ErrorMax write (*,*) ' Avg error term :', s/n write (*,*) ' Max coefficient :', v IF (ErrorMax<1.D-10) EXIT ! CALL GESV (AA, E) call MATRIX_SOLVE (AA, E, determinant, ERROR_CODE) write (*,*) ' Solve error =',error_code write (*,*) ' determinant =',determinant XX(:,i) = E(:,1) END DO ! OPEN(111,FILE='TDReview_FEM-DISP_VEC.csv',STATUS='UNKNOWN', ACTION='WRITE') DO i=1,N WRITE (*,"(1000(1x,F55.19,','))") X(i,:) END DO Mi, 05/03/2014 - 02:51 Thanks John for that, I could not see the mentioned "real*16 solver" in the code. I could not reproduced your code to make it works as well. Did I miss anything? PS: the procedure to attach file on this forum is a bit different from the normal way of attaching file as most email webpage does. It is a bit confusing; after choosing to add the file we actually need one more step to upload them to get it TRULY added to the post. 75% of the time I failed to attach the file; sadly that's true. Should it be something Intel Forum to fix? Yes. ;) Mi, 05/03/2014 - 21:57 Attached is the hybrid version which mixes real*8 and real*16 for the equation solution. If you want a fully real*16 solution, change most real*8 to real*16. For a fully real*8 solution, change real*16 to real*8 in the solver, while keeping acum as real*16. You should see that the error does not change significantly and given the maximum value of the coefficients and load case, the errors reported are quite acceptable. I adapted this code to inspect the errors during the calculation. MKL does provide better routines for refining the error, but I think they would also find that as the matrix is well conditioned, there is not much scope for improvement. Anhang Größe Herunterladen gesv03.f90 6.34 KB Herunterladen gesv03_0.log 3.77 KB Melden Sie sich an, um einen Kommentar zu hinterlassen.
{"url":"https://software.intel.com/de-de/forums/topic/505523","timestamp":"2014-04-18T11:07:02Z","content_type":null,"content_length":"84490","record_id":"<urn:uuid:61cebc8c-1939-4aec-ab05-4ee3bc9526d7>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Why Does VLOOKUP Return An #N/A Error? VLOOKUP week enters its second day, meaning that it’s time to have a look at how to problem-shoot #N/A errors. On the online forums I frequently see frustrated users asking why their VLOOKUP formula returns an #N/A error, so I hope that some of the reasons I’ve listed below will be helpful to many of you. Exact Match #N/A By far and away the most common reason for an exact match #N/A error is that the lookup_value doesn’t exist in the lookup column. Sometimes you can be virtually certain that it does exist, but the difference can be extremely subtle so don’t be surprised if you’ve missed it. In case you’re wondering what “lookup column” means, it was defined in the previous post. Here are three hard-to-spot examples : In this example we’re trying to find a date in a list and return the corresponding value in the adjacent column. As an aside, I should mention that I live in the UK so my dates are in a dd/mm/yyyy format. We can see that the date exists in the lookup column, so why is the exact match formula returning an #N/A error? Appearances can be deceptive. The first thing to do is to check whether VLOOKUP is correct in that the two 03/01/2012 values are not the same. In a spare cell we can enter the simple formula =H2= E4 to check; it returns FALSE which confirms that the dates indeed do not match: Cell formatting can change the appearance of a cell without changing the underlying value. If we click into the cell we can see that, in this case, the date in the lookup column also has a time stamp of 02:24:00 : The time stamp explains why VLOOKUP can’t marry the two dates together: they’re different. The correct resolution to this problem will vary depending on your situation: most likely you need to remove the times from the lookup column, which can be done very quickly using Text To Columns. Here’s a similar problem except that we have some sort of mismatch between two pieces of text rather than dates. They’re both spelt exactly the same, so why is VLOOKUP returning an #N/A error? The first step is to perform a direct comparison just the same as we did last time. The formula =H2=E4 returns FALSE confirming that the two values are not the same. The next check to perform is whether or not the length of the two strings is the same. The formulas =LEN(H2) and =LEN(E4) reveal an interesting difference: the length of ADMIRAL GRP in the lookup column seems to be one character longer than the lookup_value: This is because cell E3 has an extra space character after the word GRP. This can be clearly seen by clicking into the cell and looking at the position of the cursor which is not directly beside the letter P: Once the extra space has been removed the two values will match. This difference is the most subtle of all. In the below table we have a VLOOKUP formula which should return Paul, but for some reason it’s returning an #N/A error: Again, the direct comparison formula =H2=E4 returns FALSE, confirming that the pair of 3 values are not the same. Formatting could be hiding the problem - much the same as the date mismatch formula - but clicking in both the cells confirms that there are no decimal points in this particular case. On this occasion, the difference is caused by a data type mismatch. Excel recognises several data types including Number, Text, Logical, Error and Array. The data type can be checked using the TYPE function: The TYPE formulae tell us that the 3 in the lookup column is actually “3″ stored as text, whilst the lookup_value, 3, is a number. For more information on this have a look at the TYPE worksheet function topic in the Excel help file. VLOOKUP is type sensitive which is why it considers the two to be different. In fact, it turns out that, in this case, all the numbers in the lookup column are numbers stored as text; they can be quickly converted into number types using Text To Columns. Approximate Match #N/A The issues above may also apply so they should be considered but here are a few other potential problems which speak for themselves: • The lookup_value is less than the smallest value in the lookup column. • The lookup column is not sorted in ascending order. This will cause varied results: sometimes you will get the correct value, sometimes you’ll get the wrong value and sometimes you’ll get an If you’ve frequently experienced an #N/A value which isn’t covered by any of the above then post a comment and let us know! 101 Responses to Why Does VLOOKUP Return An #N/A Error? 1. Thanks for the post Colin. I am facing a problem with VLOOKUP, maybe you can help me out here. So the formula is simple, =VLOOKUP(A1, C2:D200, 2,0), where A1=Today(). I notice that this throws up a #N/A, because the date cell reference is a formula (Today()). Now, I need this formula in place because A2, A3, A4 would be Today()-365, Today()-182, Today()-91 respectively. Is there a way by which VLOOKUP can reference a cell which contains a formula and give me a proper result? Your help would be much appreciated! LikeLike this □ Hi Sid, It should work perfectly fine with the reference cell containing a formula.The TODAY() function returns a whole number. Your VLOOKUP() formula is performing an exact match so the #N/A result tells you that it can’t find an exact match. If you check column C and find a cell which you think contains an exact match, you can then follow the steps in the blog article to identify the problem. The most likely causes are that either column C contains not just a date but also a time (ie. a decimal portion) or column C has dates (which are numbers) formatted as text. LikeLike this 2. Nicely done. Very clear, very succint. LikeLike this 3. Hi guys. I have a table in Excel what I can’t get a proper reference to. Maybe you can help me. A simplified version of the table (Array1) is: (Blank) – MAD – MAPE – MSE ES – 13 – 9 – 18 MA – 11 – 7 – 16 MWA – 12 – 8 – 17 NA – 10 – 6 – 15 This table acts in the same way as the original table, and gives me the same errors. I am sure that every value in the table is different. I want to look up the lowest value for a column and return the name of the row. I have created a cell, say B2, that returns the lowest value of, say MAD (if that is my criteria), to make sure that the values are the same. I would think that the function would be: What am I doing wrong and are you able to return the name of the row? LikeLike this 4. Hi Rune, The problem you have is that the name column is to the left of the lookup column. When you use a standard VLOOKUP() formula, the lookup column needs to be on the left, so that’s why you’re having One solution would be to move around the columns in your table, but I expect that that isn’t really an option for you. Another, very common, workaround is to use an INDEX() and MATCH() formula. This formula is a bit more complicated than a VLOOKUP() formula but it gives you the flexibility you need to lookup values “to the left”. If we imagine that your table Array1 is in the range A1:D5 (with A1 being an empty cell) then the formula you need to find the name corresponding to the minimum MAD value is: As part of VLOOKUP week, Mike Girvin did a video explaining how to use INDEX() and MATCH(). You can find it on the link below and I’m sure you’ll find it to be an hour of your time well spent. If you have any further questions or want me to explain the formula then please let me know. LikeLike this □ Thank you for the fast reply, it was very helpful even though i found my own workaround. I will try out the INDEX and MATCH functions, and definitely watch the video once I got an hour free. Thanks again. LikeLike this 5. Very useful to me as I often get this error. Also, one item not listed that I use is the “clean” function for the table used in the formula LikeLike this 6. I am experiencing N/A error even though the value is TRUE when doing a comparison. My formula is =VLOOKUP(A2,Sheet2!A:B,2,FALSE) where I am doing this in Sheet 1. I have tried Trim, LEn and everything matches but still get this error. any help? LikeLike this □ Hi Venkat, Specifically which cell in Sheet2!A:B should match A2? LikeLike this 7. Hey.. I am applying Vlookup and in mostly cells i am getting values but in some cell i am getting #N/A. Rahter i chcked manually that value is there.. but still its giving me #N/A.. hope will get some inputs from you guys LikeLike this □ Hi Arpit, It sounds like you have written a vlookup formula in a cell and then filled it down a column or across a row. The first thing to check is that you correctly locked the row/column references of the table_array before you filled the formula. For example, if you have this formula in cell A1: If you then fill that formula down to A2, it becomes: The table array has changed to E2:F21 because the rows were not locked. To lock the rows you need to put dollar signs infront of the row numbers before you fill the formula down: If that all looks okay then double check that the value you’re looking up is in the left-most column of the table_array. It must be in the first column. If that looks correct too then you need to work through the examples on the blog post. Sometimes two values can look the same but they’re not. If your lookup value is a string then check for any extra spaces in the lookup value or the lookup table. If your lookup value is a date or number then check its data type (as shown in the blog post) in both the lookup value and in the lookup table because VLOOKUP() is data type sensitive. Let us know how you get on. LikeLike this 8. Colin Such a simple one but can’t get it to work. I tried the direct comparison =B2=E3, which gave me true but I still get N/A for a look up: =VLOOKUP(B2, D2:E6, 2,FALSE) So I have a column of letters with numbers in next cell (A…1, B….2) in column B and in E, trying to find B2 value (number 1) in D2:E6 range column 2, which is the number column, just comes up with this N/A. Any ideas what’s wrong? LikeLike this 9. Hi Andrew, A golden rule with VLOOKUP() is that the lookup column must be the first column in table_array. In your example, the first column in table_array is column D, but the column you’re trying to look up with is in column E. That’s why VLOOKUP() is returning an #N/A error. This problem happens quite often – if you look at the previous questions you’ll see that Rune had a similar situation. There are a number of things you can do to get this to work. The first option is to switch around the data in columns D and E in your lookup table. Quite often this isn’t something one would want to do, in which case a different formula is required. The most common formula workaround is to use INDEX() and MATCH() instead of VLOOKUP(). In this case the replacement formula would be: =INDEX(D2:D6,MATCH(B2, E2:E6,0)) So, in that formula, MATCH() finds the position of the first value (looking down the column) in E2:E6 which is equal to B2. INDEX() then returns the value in D2:D6 which is in the same position. A less common (and slightly more complicated) formula is to use VLOOKUP() with CHOOSE(). Hope that helps, LikeLike this □ Hi Colin That’s great info and thanks for sharing. I get kind of frustrated with excel functions a lot and end up just using VBA! LikeLike this 10. I am having the same problem but all the tests are true. I am basically trying to VLOOKUP a text value that begin with “^”. My table is sorted, so VLOOKUP is not referencing the first entry within my table but it will reference the same entry if its entered as a duplicate. My only work around is to place a space entry within my first row of the table to force VLOOKUP to being reference from row 2. How can I fix this? LikeLike this □ Hi Lance, I’m sure I can help you with this, but first I need to get a better idea about what’s going on. Please would you post your VLOOKUP formula and, if possible, a few example rows from your lookup table? Or, if you’d prefer, you can send me an example file (please remove any confidential information from your workbook) – my email address is on my ‘About’ page. LikeLike this ☆ Hi Lance, Thanks for sending me your workbook. Your VLOOKUP() formula (simplified from a larger formula) looks something like this: =VLOOKUP(Main!B2,'Symbol List'!A1:C30,1) I’ve simplified it both to make problem-shooting it easier and also so that everyone else can follow along. The fourth argument is omitted which means that your VLOOKUP() formula is performing an approximate match. For VLOOKUP() to do an approximate match correctly, the data in the lookup column must be sorted ascending. I checked ‘Symbol List’!A:A and your data is sorted ascending, which is good. However, the problem is that row 1 in column A is actually a header row. The text in the row header (in this case “Currency”) isn’t part of the sorted data, so it’s messing up things. To fix it, you can delete the empty row 2 which you had inserted and then just amend your formula to start from row 2 (where the data begins). So like this: =VLOOKUP(Main!B2,'Symbol List'!A2:C30,1) Finally, since you’re only returning the value from the 1st column of the table, you can reduce the table_array from A2:C30 to A2:A30 like so: =VLOOKUP(Main!B2,'Symbol List'!A2:A30,1) LikeLike this 11. Thanks so much!! This was VERY helpful! LikeLike this 12. Hi Colin, I have a very similar issue to where I get the #N/A even though the reference collume contain the value I am looking for. However when I double click into each single cell the value will display. The length, type, and value are all true and there is no issue there. Can you advise to why I have to double click into every cell for vlookup to work. Calculations is set to Automatic. I am going nuts here. I spend most of my day searching for a solution and can’t find a fix. Best regards, LikeLike this □ Hi Gus, Wow that sounds very frustrating indeed. It’s hard for me to identify what the problem is because I can’t see the formulas or the underlying data but it sounds like there is a calculation dependency tree issue. This could be caused by (for example) circular references or a UDF which is poorly written. If you e-mail me a simple, example workbook which demonstrates the problem then I’d be more than happy to take a look at it for you. My email address is on my About page. LikeLike this ☆ Thank you Colin, but I found a solution for it finally and would love to share it with you guys. LikeLike this 13. I did everything you mentioned and it did not work. But here is the trick that fixed the issue of #N/A when all the data are the same and everything is TRUE and the only fix is to click into every single cell then enter (very long process). Here is the trick: Highlight the look up value in the array (only the value you are looking), then click on data > text to columns> next > next > finish….. BINGO… Thank you all. LikeLike this □ Hi Gus, Thank you for posting your solution. It’s a shame we didn’t get to the root cause of the problem, but I’m sure that your post will help some poor soul who’s going through the same as you did. Well done on getting it fixed! LikeLike this ☆ Hi, I have also had issues with excel formatting and text to columns. Specifically when importing data from outside applications into excel, the formatting would look correct, check out as correct but not “work”. Running text to columns seems to clean the formatting somehow. It would affect vlookups as described by the op. Horribly frustrating until you find the fix…. I cannot give an example now as I no longer work in the same position, I just stumbled across this while looking for a different excel answer… LikeLike this 14. Hi, Really hoping you can help me with a VLOOKUP that returns a blank, which I am assuming is a #N/A in disguise. I inherited a workbook with formulas, there are 5 tabs, the formula is in one, and references another tab. FORMULA: =IFERROR(IF(LEN($C147)<5,VLOOKUP("0"&LEFT($C147,5),'4. Static Data'!$A$2:$F$1048576,MATCH(H$2,'4. Static Data'!$A$2:$F$2,0),FALSE),VLOOKUP(LEFT($C147,5),'4. Static Data'! $A$2:$F$1048576,MATCH(H$2,'4. Static Data'!$A$2:$F$2,0),FALSE)),"") If you need the workbook happy to send as I have been trying to figure this formula out for weeks. I have done all of the LEN, TYPE etc formulas and they all return true. I also tried Gus's suggestion, which didn't help. LikeLike this □ Hi Deb, Yes, the IFERROR() function is hiding an error result there and it’s very likely that the error is an #N/A error. To make it easier to understand I’ve factorised your formula to make it less verbose: =IFERROR(VLOOKUP(IF(LEN($C147)<5,"0","")&LEFT($C147,5),'4. Static Data'!$A$2:$F$1048576,MATCH(H$2,'4. Static Data'!$A$2:$F$2,0),FALSE),"") We could also get rid of the VLOOKUP() entirely and replace it with INDEX(), but that’s a story for another occasion. There are two functions in your formula which commonly return #N/A errors: VLOOKUP() and MATCH(). The MATCH() part of your formula looks like this: MATCH(H$2,'4. Static Data'!$A$2:$F$2,0),FALSE) Here MATCH() is performing an exact match, so it will return #N/A if it can’t find the value in H2 in ’4. Static Data’!$A$2:$F$2. The VLOOKUP() part of your formula is also performing an exact match and will return #N/A if it can’t find the C147 value (preceded by a 0 if C147 is less than 5 characters) in column Static If you’re still struggling to find the problem then you’re welcome to email a simple spreadsheet which demonstrates the problem. LikeLike this ☆ Hi Debra, Thanks for sending your worksheet: I see the problem now. Believe it or not, this is a data type issue. Let’s take the formula in row 36 as an example: =IFERROR(IF(LEN($C36)<5,VLOOKUP("0"&LEFT($C36,5),'4. Static Data'!$A$2:$F$99,MATCH('1. four_day_outlook'!H$2,'4. Static Data'!$A$2:$F$2,0),FALSE),VLOOKUP(LEFT('1. four_day_outlook'! $C36,5),'4. Static Data'!$A$2:$F$99,MATCH('1. four_day_outlook'!H$2,'4. Static Data'!$A$2:$F$2,0),FALSE)),"") The value in C36 is 16235 and it is a number data type. If we look on the static data sheet, A66 contains the value 16235 and it is a number data type too. You did the checks I suggested on my blog correctly. The problem is the LEFT() function in your formula returns a text data type. So although the number 16235 gets read by your formula, the LEFT() function changes it to the string "16235" before VLOOKUP() uses it. Let's prove this theory before we look at a fix. – Open a new workbook and type 16235 into cell A1. – In cell B1 put in the formula =TYPE(A1) and it will return 1 which tells you that A1 contains a number type. – In cell A2 put in the formula =LEFT(A1,5) and it will return 16235. – In cell B2 put in the formula =TYPE(A2) and it will return 2 which tells you that A2 contains a string type. Okay, let’s put the theory to one side because now it’s time to fix your workbook. What you need to do is change the 16235 in the static data sheet so it is a string. This is easy to do: select column A on the static data sheet, press CTRL+1 to open the format cells dialog, choose Text from the category list and click on OK. Then go to cell A66, press F2 and then enter. You’ll also need to go to cell A67, press F2 and enter as well. Once you’ve done this you should see a small green triangle in each of the cells and, if you select one of the cells you should get a small exclamation mark next to the cell with a dropdown which, if you click on, will tell you that it is a number stored as text. In this case, that’s exactly what you want and your VLOOKUP formulae should work, so don’t choose the option to convert Text into a Number. Incidentally, if you look elsewhere in the column you’ll notice that there are some other numbers stored as text in there, so I suspect that someone has hit this problem before. LikeLike this ☆ Thank you so much, it now works perfectly, you are a life saver LikeLike this 15. Hi Colin, I have been using VLookup for days now and suddenly I am only returning #NA results, even though my spot checking verifies the values match. My formula is =VLOOKUP(Z34983,’[UPC codes & Price Markup for Day Brite.xlsx]Sheet1′!$A$1:$B$52,2,FALSE). I have numerous rows of data in which I am referencing/matching the UPC number to pullover net price into another spreadsheet (provided by client). It seemed to be working wonderfully up until this afternoon. I have saved, closed excel, restarted the process and still running into the issue. I am “pulling my hair out” frustrated and must get this project done already. Any ideas? Oh, some things I have already confirmed: my table array did have spaces after the UPC numbers but I knew that and have been getting rid of them all along. I am using the same excel versions as I have been and I can’t think what’s changed. Unless it is Friday, and a major ‘user error’ that I can seem to see. :( Thanks in advance for the help/advice! LikeLike this □ Hi Andrea, Your formula looks fine so it would seem to be a data issue. If you consolidate a simple example which demonstrates the issue into a single workbook and email to me, I’d be happy to take a look for you. LikeLike this 16. Hey Colin, I’m having a VLookup issue in Excel 2007. I have two tables, each on its own tab. The first tab is Prices. The second is March 2013. Prices contains a small 6 row by 2 column table. Abbreviated days of the week are in the left column (B) and the price for each day is in the right column (C). The full table range is B2:C7. The table on March 2013 is a compilation and calculation table designed to derive the totals per day and month. The command I have set up is on March 2013 and is intended to retrieve the price for a given day from Prices, and multiply it by the quantity in the C column of the same row on March 2013: A7 is the abbreviated day of the week using =WEEKDAY() on the date in column B on March 2013. The formula returns properly for everything except Fri. Every reference using Fri or Friday turns up #N/A. I used =VLOOKUP(“Fri”,B2:C7,2) directly in the prices tab to confirm it and used an exact copy of the formula with “Thurs” at the same time. Thurs works but Fri doesn’t. I’ve changed the left column in the Prices table from General to Text with no apparent change. I’ve made sure all the text fields are exactly the same, both by testing and re-entering. Yet, for some reason, Excel seems to disagree with Friday and Fri. Does Microsoft have issues with the best day of the week or something? At first, it had issues returning the correct values for Thurs, Fri, and Sat. That was fixed by putting the rows in alphabetical order. However, Fri still returns #N/A for some reason. Any thoughts? LikeLike this □ As an extra note, I have discovered that shortening the reference range to B2:C5, so four rows, allows the Fri references to return correctly. As soon as it goes up to five or more rows, Fri goes AWOL. LikeLike this 17. Wow. I’m such a dunce. I use =TEXT(CellReference,”ddd”) to return the day of the week from the date on March 2013 so it returns in the text format instead of a number code. LikeLike this □ Aaand to continue the Dunce-age: I didn’t have my Prices table alphabetized quite right. I had Monday first, probably because it’s first in the week. As soon as I corrected that, my error cleared up. I have got to be one of the dumbest smart people I know. LikeLike this 18. Thank you very very much. Very seldom you come across such a succint yet concise explanation. MS should hire you to do their F1s. LikeLike this 19. I am having this problem. the “=” comparison shows “TRUE”, len() shows same, type() also shows same. Still vlookup gives an #n/a. Anything else I can check. Font and formatting is also exactly the same. Anything else I can check? LikeLike this 20. Hi Faisal, please would you post your VLOOKUP formula, the “=” comparison check formula which you used, and tell me the exact value you’re looking up? LikeLike this 21. Hi, Can anyone assist with this: All the data is #N/A LikeLike this 22. Hi Niki, Please would you tell me what range the named range Allexpenses refers to? LikeLike this 23. Hi Colin, The data is like this: Employee name Recategorisaion code E M N Mocha Vet bills 1,000.00 Fifi Hay 100.00 Mamba Bunny meals 300 Milli Vet bills 998 Patch Grass 100 LikeLike this 24. sorry those numbers go under the letters LikeLike this 25. Hi Niki, I assume that Allexpenses is a named range which references your whole table, so the issue is with the MATCH() part of your formula. You want MATCH() to look across the table headers to determine which column position you want to return, so you need to tell it to look across there rather than looking at the whole table. Suppose your Allexpenses named range is in the range F1:M1000. Create a new named range called AllexpensesHeaders and give it a reference to the first row of the table. In my example this would be F1:M1. Then amend your formula to: LikeLike this □ Hi Colin, Sorry for taking a while to get back to you. I had some issues with the formula. I made your recommended adjustments and noticed two issues: 1) Some employee names are not pulling through even though they are listed on Allexpenses. 2) On the lookup sheet some names appear like Pumpkin that is not on the Allexpenses range N/A appears – do you know how to avoid this. Hope you can help. Thank you in advance LikeLike this ☆ Hi Nik, Please email me a sample workbook (without sensitive data) which demonstrates the issue and I will take a look at it for you. My email address is colinleggblog at gmail dot com. LikeLike this 26. I have used VLOOKUP for a while so I know what I’m doing, but this case is just odd. On one hand I have a table of divisions giving me rounded numbers and then I have a table of all rounded numbers written by hand. When I do a VLOOKUP I can’t find a handful of this numbers. I copied and pasted the value of one of this numbers and put it side by side with a hand written number. When typing =A2=B2 it comes out true, they are the same type, I check to the 15th decimal and they are the same but the VLOOKUP just can’t find it. Please help! LikeLike this □ Hi Ivan, This sounds like a floating-point rounding issue. If it is then I think the easiest way for you to resolve it would be to edit the formulas in your table of divisions to include a rounding to a number of decimal places which is reasonable for your project. You can do this by using the ROUND() worksheet function, eg to round 1/3 to 5 decimal places: Once you’ve adjusted your hand written numbers to be the same number of decimal places then your VLOOKUP() formulas should be happy. For more information on floating-point rounding have a look through these articles: Floating-point arithmetic may give inaccurate results in Excel Understanding Floating Point Precision, aka “Why does Excel Give Me Seemingly Wrong Answers?” LikeLike this 27. Hi Colin, Thanks very much for your information. It really helped me out a lot. However, and I probably am stretching things a little here, I have a question after using your info and links. I have a workbook with a lot of info which looks like this: Hero Games Wins Losses WR Player KDA Alchemist 1 1 0 100,00% Huite 1,29 Alchemist 4 1 3 25,00% Jurjen 2,18 Alchemist 5 4 1 80,00% Marc 2,4 Alchemist 2 1 1 50,00% Peter 2,75 Alchemist 3 1 2 33,33% Roland 2,9 Alchemist 3 0 3 0,00% Ermo 3,33 Ancient Apparition 1 0 1 0,00% Peter 0,78 Ancient Apparition 2 0 2 0,00% Jurjen 0,95 Ancient Apparition 4 1 3 25,00% Huite 1,77 Ancient Apparition 3 2 1 66,67% Roland 2 Ancient Apparition 6 3 3 50,00% Marc 2,2 Ancient Apparition 12 5 7 41,67% Ermo 3,47 Imagine that there are several other ‘hero’ names and various stats. In a ‘team’ composition sheet I’ve come a big way with all the help, it now finds the highest WR and KDA per selected hero. A little like this: Teamfight win% KDA wie % wie KDA Dark Seer 100,00% 17,00 Peter Peter Warlock 81,82% 3,96 Ermo Ermo The win% and the KDA is found with the following array formula: =MAX(IF(Totallist!A:A=’Team Composition’!A9;Totallist!E:E)) and =MAX(IF(Totallist!A:A=’Team Composition’!A9;Totallist!I:I;)) That works splendidly, thanks! However, finding the corresponding names (wie% and wie KDA) is something which I can’t get working. I can get it working by manually selecting ranges, like this: =INDEX(Totallist!$F$103:$F$109;MATCH(‘Team Composition’!B9;Totallist!E103:E109;0)) but I would rather have it work the same with a nested IF function and let it only index if the criteria is met. Tried a lot of things, but or too few arguments or a #REF error. And (I know, I’m really stretching it :-)) would it be possible to do something as follows: a team consists of 5 players. Each of them plays a certain hero. If i put the heroes in a certain order it should bring up the ‘best scoring’ player for it. However, if on line 1 for hero 1 player ‘Peter’ is found, he should be excluded for hero 2-5. I tried to work with the ‘solver’ to see if it could maximize for example the ‘total’ KDA in a team of 5 players playing 5 heroes, but I failed miserably :-) Thanks in advance, LikeLike this □ Hmm.. one more thing. If in the complete workbook I apply a filter (for example, only show the stats for heroes which a certain player has played at least 10 times) in the aforementioned formula’s it still searches the complete set of data instead of the filtered set. Is that something which can be solved in those formula’s? LikeLike this ☆ Found it! =INDEX(TotalFilter!F:F;MATCH(‘Team Composition’!B94;(IF(TotalFilter!A:A=’Team Composition’!A94;TotalFilter!E:E;0));0)) Sometimes, it’s just very simple and am I making it difficult for myself. LikeLike this 28. Hi, I have two list of dates and trying to pull certain numbers from an old list. The Vlookup returns a value on the first datea, however, does not work on any of the subsequent dates. I did the len and type errors and both match on the subsequent dates, however, excel still says the dates do not match. So I get the #N/A error instead of the value that I want. Any thoughts on how to check the error or what this could be? LikeLike this □ Hi Brian, A few things to check: (1) Did you lock the table_array reference (with $ signs) before you filled it down the column/across the row? If not then the lookup value may not be in the range referenced in the formula. (2) If the dates in the lookup column are not sorted you need the VLOOKUP() to be an exact match (ie. Range_Lookup needs to be False). (3) Do any of the dates have times on the end of them? Cell formatting might hide this, but you’ll be able to see the times in the formula bar. If you’re still having problems then send me a sample workbook (less any sensitive data) and I’ll take a look at it for you. LikeLike this ☆ I did not lock in the array. It works now. Thank you for the help! LikeLike this 29. Tom asks: I was Googling for reasons why my simple VLOOKUP function isn’t working and came across your blog. Although it was informative, none of the reasons you gave in that particular blog helped. I’m getting a #N/A result for some entries and incorrect results for others. I attach the spreadsheet for your reference. Your blog says you work(ed) for a large financial company so the spreadsheet could be easily understood, but what I’m trying to do is show a client the potential results of putting money into a pension. Specifically, at this point I’m stuck at, I want to automate the response Excel gives to how much additional tax relief a client can expect, depending on what rate of income tax they pay. Hi Tom, Your VLOOKUP() formula in B17 is this: You are passing in 3 arguments into VLOOKUP(). The 3 arguments are B16, J1:K4 and 2. VLOOKUP() actually takes 4 arguments and the important thing to know is that VLOOKUP() will do an approximate match if the fourth argument is omitted or if the fourth argument is TRUE or 1. So these three formulas would all do the same thing: =VLOOKUP(B16,J1:K4,2, TRUE) =VLOOKUP(B16,J1:K4,2, 1) When you an approximate match with VLOOKUP(), the data in the lookup column (J1:J4) must be sorted. This is covered in my blog post. In this case, I think you’re using an approximate match by mistake: I reckon you want to do an exact match instead. When you do an exact match the data in the lookup column doesn’t need to be sorted. To do an exact match you must pass either FALSE or 0 in as a fourth argument, so either of these formulas should work for you: =VLOOKUP(B16,J1:K4,2, FALSE) =VLOOKUP(B16,J1:K4,2, 0) Hope that helps, LikeLike this 30. I am sooo so frustrated. I feel like this is a very simple formula and it is returning all #N/A. I have done all the trouble shooting mentioned above. Everything matches. My formula is =VLOOKUP (B2,TABLE10,4,FALSE). This particular formula has worked for me in the past and now it works on nothing. I feel like it is my excel software as opposed to the formatting in the spreadsheet. LikeLike this □ Hi Sonya, I can’t see anything wrong with your formula, so it looks like there must be an underlying issue with the data. If you email me a small example file I’ll take a look for you. LikeLike this 31. Hi Colin, I am attempting to use the vlookup with this formula =VLOOKUP(B36,A62:B46661,2, FALSE), seems to look right, I have ensured the cell and the table are the same format. I have not included the table header (I get the N/A#) regardless. The table is simple with the column A containing $.01 to $466.00, and Column B having a specific fee ie $1.50 to $5.00. The cell I am asking the vlookup to find contains a formula on a different sheet that is =B34/B22, which is formatted the same as the table. Please advise what is amiss? I’m going out of my bird trying to figure this out. LikeLike this □ Hi Pat, Your exact match VLOOKUP() formula looks good to me. The number you are looking up is calculated by a division formula, so this looks like a rounding issue – possibly even a floating-point rounding issue which happens because some decimal numbers can’t be fully represented in binary. This problem has bitten a few other people who have posted comments on here. If that’s the case then the way to fix the problem would be to either change your precision as displayed setting (which I generally would not recommend) or you can apply rounding in your If that still doesn’t work then it’d be worth ensuring that the numbers in the lookup column are rounded to 2 decimal places too and, failing that, email me an example. Here are a couple of links I posted on another comment: Floating-point arithmetic may give inaccurate results in Excel Understanding Floating Point Precision, aka “Why does Excel Give Me Seemingly Wrong Answers?” LikeLike this ☆ Hi Colin! Worked beautifully with the rounding formula. Thanks so much! LikeLike this 32. Thank you so much. In my case, I could’t match a number from one workbook no another. I tried the =XX=XX formula and returned FALSE. The problem wasn’t my vlookup formula but the numbers in one of my workbooks. I solved it by copy-pasting all the column numbers over themselves. The autocorrection asked if the numbers should be copied as numbers or actual text. I went for numbers and voilá! It switched from FALSE to TRUE and of course, my vlookup formula returned an actual value instead of a N/A LikeLike this 33. Hello, I am facing a problem when calling for a value corresponding to TIME. The first cell which i call gives me a correct value but when i drag it, the output received is #N/A. Please help LikeLike this □ Hello Anoop, From the information on your comment, I’d say the main 2 possible reasons for the error which spring to mind are: 1. When you drag the formula down you haven’t locked the necessary cells so the dragged VLOOKUP() formulas don’t have the correct range references. 2. Time values in MS Excel are floating point numbers, so this could be a floating point rounding issue. Please see previous comments which discuss this. Hope that helps, LikeLike this 34. Awesome…your advice regarding LEN and TYPE helped me so much with issue I was experiencing! LikeLike this 35. Thanks so much! I was totally stuck with my vlookup and your article had the answer (sort order was my issue). ~Avni LikeLike this 36. Good morning, Thanks for the explicit help. Forgive me if I missed this, but I haven’t seen in the examples above the following error.: Data Validated List used as the Lookup Value to return data from a Table. Here’s my formula. The table contains text and numbers, the Data Validated List is the first column (M) of the table. ex. row/column data: M16:20 Maximum: Detailed research N16:20 Review data. 016:20 Narrow to one or 2 segments P16:20 1-2 weeks Some of the rows work whereas others do not. In the other vlookups in the doc, it returns the wrong row, such as (original formula: =VLOOKUP($C$3,Table13,2) ) choosing C2 for D2 returns instead: What I’ve tried in both instances.: - Eliminating spaces at the end of the sentences - formatting the cells as text - INDEX and MATCH (and the more complicated vlookup mentioned in the same response from above) - True/false proofs - type checks All of the text was typed into the worksheet. Thank you in advance. LikeLike this □ An interesting update. When words that start with an M, A, E (and potentially others) are placed in the validated list they return an N/A. When an S or P word is placed there it returns the correct answers. LikeLike this □ Organizing the other vlookup’s validated list alphabetically solved the problem of mismatched row returns. LikeLike this ☆ Hi Kris, The lookup column needs to be sorted ascending because your VLOOKUP formula is doing an approximate match. If you pass False into the 4th parameter to make it an exact lookup then the lookup column won’t need to be sorted. LikeLike this 37. Thanks, I assume that’s why the other bad returns came back as well. LikeLike this □ Hi Kris, Yes, that sounds right. If you have a look towards the end of the blog post there’s a short section on approximate lookups which describes the behaviour you’re seeing. LikeLike this 38. Hi Colin, Thanks so much. It solved my problems. Tony Yap LikeLike this 39. Hi Colin, Odd issue here with vlookup. I have a set of recipes all of which reference a data sheet in the same workbook using vlookup. Data sheet is about 90rows of data (about ingredients). In the past I have inserted additional rows in the middle of those 90 rows to add new ingredients with no trouble at all. Today, when I inserted a new ingredient, all my recipes that have a certain code (=text of 00000, which results in BLANK entries after lookup) come up with lookup results N/A. This has not happened before. Any clues? LikeLike this □ Hi Gavin, I’m not sure why your formula is returning #N/A. Please would you email me an example workbook and I’ll take a look at it for you? LikeLike this ☆ Hi Colin, Thanks for the offer of help. Please see attached sheet. I have culled all recipes except for one, SM119. The Ingredients are in the Database tab. If I add a new row in anywhere after row 4 in Database, the vlookuptable in SM119 from N6 generates NA results if the Code in K is ‘000000’. We have used this without any worry for a long time, adding in new rows, but something seems to have gone wrong now. Any help gratefully received. Best Wishes, Gavin Heys LikeLike this ☆ Hi Gavin, You can’t attach workbooks on here. Please would you email it (email address at the end) to me? LikeLike this ☆ Hi Gavin, Thanks for emailing me your workbook. The VLOOKUP() formula in SM119!N6 is The TRUE part of the formula tells VLOOKUP() to do an approximate match. When VLOOKUP() does an approximate match, it is essential that the lookup column (the first column in the lookup table which, in this case, is a named range called DATA) is sorted is ascending order. If the lookup column is not sorted ascending then you will get unpredictable results: it might return the correct result, it might return #N/A or it might even return a wrong value! In this case I think your safest option is to change your formulae so that they tell VLOOKUP() to do an exact match. To do this, just change the TRUE to FALSE, like this: Hope that helps, LikeLike this 40. Jason asks: Colin – I didn’t leave a comment in your post about VLookup returning #N/A, but I saw that someone commented on it today and you had replied to email them the workbook. So I’m doing the same Your article worked great, but for some reason it’s not working on every line. This is a spreadsheet for fantasy basketball, where I figure out the best team using projected points. The name columns on each tab are a little crazy, because I had to merge them on Sheet 1, then do the Proper() of the column D and put it in Column E. I also did the same on Sheet 2 so they would match It comes back with the value for most of the players, but not all of them. The example that I have been trying to figure this out with is cell E25, Blake Griffin Lac. On sheet 2 he is there in cell D24. They come back as FALSE when doing =E25=D24, but all the other tests you suggest come back as they are the same. I’m hoping you can help me out with this, as I’ve just been entering them manually for quite some time and it’s a real pain in the butt! Thanks in advance for taking a look. Respectfully yours, LikeLike this □ Hi Jason, Your formula for Blake Griffin is If you look in sheet1, you’ll see that Blake Griffin is in cell E24. Your formula references Sheet1!E25:F274, so it can’t find him (E24 is not in the lookup range). This has come about because you’ve put the following formula in cell H2 and then filled it down the column without locking the E2:F251 reference. What you need to do is change the formula in cell H2 to the below to lock the row references: and then fill it down the column. That fixes the first problem. The second problem is that your formula is doing an approximate match. This is fine in the sense that the data in Sheet1 is sorted in ascending order, but the problem is that Sheet1 is “missing” some data. I’ll explain what I mean by “missing” in a minute. The result of this is that your approximate match formula is returning some incorrect results (ie, projected points for some players are wrong). To demonstrate this, make the correction to the formula I suggested above and fill it down column H. If you look at Alex Burkes, you’ll see that his projected points returned by the VLOOKUP() formula are 3.1. But, if you look at Sheet1 you’ll see his projected points should be 17.925. The reason for the mismatch is on your NBA sheet, Alec Burks’ team is Uta but on Sheet1 it is Utah (with an “h”). Because you have inconsistencies in your data which are being masked by the approximate match, I think you should use an exact match instead. This will highlight the problems so you can fix them. Change the formula in H2 to this: and then fill it down the column. The FALSE at the end of the formula tells VLOOKUP() to only return projected points when it finds an exact match for the player. Formulas that show #N/A then need to be investigated. For example, Alex Burks and Andris Biedrins both have a team mismatch. Austin Daye and Chris Wilcox are entirely missing from Sheet1, etc. If you’re still not sure about what approximate are in VLOOKUP() formulas, you might want to have a read of my blog post on them: Hope that helps, LikeLike this 41. Colin, I tried using your tips and applying it to my take home test for my class, where we have to find the Standard Bonus and Performance Bonus using the VLookup formula, but I am so confused. I really don’t know what to refer to since in the first table, the first column contains the agent, then the annual commission, then the year with the company. Then theres another table which is the bonus schedule that contains the years of service, performance award, and standard bonus. The Standard Bonus and Performance Bonus are both going into the first table and in the formula, the table_array portion of it has to be an absolute value because I have to autofill it down the column. I just don’t know what to refer to. If you could help out, that’d be awesome! LikeLike this 42. Hi Kaitlyn, The first table contains the years of service, so it sounds like you need to add a new column of VLOOKUP formulas to the first table which will look up the performance award and standard bonus by using the years of service for each agent. LikeLike this 43. Hi: I currently am working on a spreadsheet that compiles a number of different reports into one larger aggregate report. Put another way, I have 8 buckets , all containing the same 3 columns of data, keyed to a date, filling one big bucket (the overall/total report). Hope this makes sense. Currently I have it setup so that each smaller bucket is assigned to its own worksheet, and the total value report in its own sheet gets its summarized aggregate totals by dragged down a column that uses chained vlookups referencing the dates in the table. So the total worksheet pulls total sales from Nov1 in sheet 1, Nov1 in sheet 2, Nov1 in sheet 3 etc using vlookup against the date…and this is dragged down to include all the dates in the tables. Problem is, some of the campaigns in the smaller buckets ended earlier or later than others, so it causes the result to reflect as a #N/A because it can’t find the values as the date doesn’t Aside from entering all 0′s and dummy rows to keep the dates the same among the tables, is there any way to just have it ignore missing values or stop once it reaches the end/last date of the table without actually taking the particular vlookup function that is hitting the missing date and causing the error out of chain? Thanks for the help! LikeLike this □ Hi Larry, There are always multiple ways one can skin a cat, so it’s hard for me to suggest the best way for you to achieve your goal (formulas, VBA, pivot tables etc) without spending some time looking at your workbook. You mention the word ‘aggregate’ in your question which makes me wonder whether or not you should be using SUMIF() formulas instead of VLOOKUP() formulas. I think the best you could do with VLOOKUP()s would be to wrap them with an error handling function such as: ☆ IFNA() (only available in Excel 2013 or later) ☆ IFERROR() (only available in Excel 2007 or later) ☆ IF(ISNA()) (pre 2007) That way you can have the formula return 0 instead of those unsightly #N/As. LikeLike this 44. Please help I’m trying to do the vlookup from Excel A AND B and my purpose is to dig out How many invoices are still outstanding. The excel A is mess up data and B is all completed and paid invoices. So The lookup value is the invoice doc. no. Say in coulmn H from Excel A. Also, the coulm H is being sorted by invoice no. Originally, it is mess up, but in order to delete the duplicate invoices, I sorted it by invoice number. The table array is in excel B( this excel shows all the completed invoices), from column A to I, column, colum A being the system generated invoice no. and column I being the invoice doc. no Myvlookup formula is below; =VLOOKUP(H2,’[Completed PR Raw data 2013.XLS]Sheet1′!$A:$I,1,0) It’s all return with #N/A, but I saw there are matched invoice doc. No. from EXCEL A and B can you help me ? thanks heaps!! LikeLike this 45. Hi Colin, I have two columns. I would like to compare two columns in excel and lookup value matches with the array should display “match” else “do not match”. How do you think this can be accomplished. Below is the data set: one seven #N/A two eight #N/A three one one four four four five #N/A six #N/A formula used: =VLOOKUP(B2,A$2:A$7,1,FALSE) With thousands of records, just wanted to verify, which lookup value do not exists in array of values. It seems below formula seems to be not working. Please suggest right approach with vlookup =IF(VLOOKUP(B4,A$2:A$7,1,FALSE)=”#N/A”,”Do Not Match”,”MATCH”) LikeLike this 46. I could able to solve the issue by using: IF(ISNA(VLOOKUP( LikeLike this 47. Hi Colin, My function uses the ‘TRUE’ range look up value, but when the Lookup value is exactly the same as a value, it still returns the next smallest value rather than the same value. I tried changing the range lookup of the same equation to FALSE but this returned a #N/A error. I have tried using the suggested methods to troubleshoot my VLookup function but they unfortunately haven’t solved my problem. I now know the problem has something to do with how the lookup value is being calculated as if I put the number directly into the vlookup it returns the correct answer. The lookup value: =E20-(2*E13) where each cell represents a diameter value. I also tried using the Match function but this resulted in the same issue, both when looking for an approximate or exact value. I have tried for a while to solve this but have been unsuccessful, so any help would be much appreciated. LikeLike this □ Hi Elliot, It sounds like a rounding or floating point issue. Both of these have been discussed in previous comments. As a test, try rounding your source data and lookup value to 2 decimal places (actually 2 decimal places and not formatting to only display 2 decimal places) and then try doing an exact match VLOOKUP. If you’re still having problems then email me a simple example workbook and I’ll take a look for you. LikeLike this ☆ Thanks for the reply. Managed to fix it by using =ROUND on the lookup value. LikeLike this 48. I realise post was written years ago but I just wanted to say thank you, thank you, thank you! Just wasted 2 hours of my life with the #N/A only to find some of my numbers were saved as text despite the fact I thought I had already converted them. You have saved me many more hours of frustration. LikeLike this 49. Hi, what u posted is a very precise info on what to do about the error.. However I am still having problem with it. comparing 2 cells (eg. =a20=a45) gives out true value. but still i get N/a error. I am my file to ur email id, if it is possible for you to look at it and find a solution. Thanx a lot. LikeLike this □ found out i had problem in another entry which somehow caused error in other data.. sorted it out and now all is ok.. thanx a lot for dis awesom post.. LikeLike this 50. I have the following formula: VLOOKUP(750,$F$3:$G$23,2,TRUE). Basically, I am looking through the range $F$3:$G$23 to find the numeric value 750. When this value is found, the formula should return the value in column G, which happens to be a date. The range $F$3:$G$23 includes twenty different numbers in column F (sorted in ascending order) and twenty different dates in column G also ascending). Again, no dupes. The number “750″ is not an exact match for any of the values in column F. However, there are values for 725 (this is row 10 with a corresponding date value of Jan 1, 2014) and 775 (this is a in row 11 with a corresponding date value of Feb 1, 2014). The last value in the range is 1500 with a corresponding date value of Dec 1, 2014. I would expect the VLOOKUP(750,$F$3:$G$23,2,TRUE) to return the date from the 10th or 11th row. However, it is returning the date from the 20th (last) row, as though the value were not found. LikeLike this □ One follow-up. In my data range ($F$3:$G$23), not every row has a value in the first column. The final five rows do not have numbers in column F. They do, however, have dates in column G. Interestingly, the problem I identified goes away if I put values in the last five rows of column F. However, it seems to me to be irrelevant whether I do this, since the value 750 falls between two rows which do already have values. LikeLike this ☆ Hi Shawn, It sounds like you’ve found the problem there. Those final five rows which are empty will cause the behaviour you describe. It’s important to understand that, when you’re doing an approximate match, VLOOKUP searches the column by jumping down and up it rather than just starting at the top and working down. I wrote about this in more detail here so have a read through and hopefully things will make more sense. LikeLike this 51. I have the following data on one worksheet: TABLE #1 START DATE | END DATE | RELEASE NAME 01/01 | 01/07 | missing formula here 01/08 | 01/14 | missing formula here 01/15 | 01/21 | missing formula here 01/22 | 01/29 | missing formula here I have the following data on another worksheet: TABLE #2 LAUNCH DATE | RELEASE NAME 01/04 | Apple 01/11 | Banana 01/17 | Orange 01/25 | Peach I want to search column A in TABLE #2 (Launch Date) and find our where within TABLE #1 the date appears. Then I want to return the “Release Name” as the formula result in TABLE #1. For example: The date 01/11 in TABLE #2 falls between 01/08 and 01/14 in TABLE #1. So, I would like the formula in column C row 2 in TABLE #1 to result in the value “Banana.” LikeLike this 52. VLOOKUP on text is not working for me. I have several sheets (sheet1, sheet2, sheet3) with data by countries and one sheet (sheet4) with the three-letter country codes defined in ISO 3166-1. I am trying to VLOOKUP the three-letter country code in sheet1 and only get #N/A errors. I have checked for trailing spaces and such things, data type… – checking with a formula returns TRUE, confirming that two values *are* the same. VLOOKUP however will return an #N/A error. See https://www.dropbox.com/s/o8fg387jf8a3wp8/sample.ods for a sample. LikeLike this □ Hi Mariano, When you do a VLOOKUP(), the first column in the table must contain the value you’re trying to find. Your table columns are: ISO | COUNTRY but you are trying to look up a country and return an ISO code – a ‘lookup to the left’. This means that your current table structure will not work for a straightforward VLOOKUP() formula. Here are 3 ways you can work around this limitation: Option 1 – Change your table structure If you swap around the ISO and Country columns then your VLOOKUP() formula should work (change the column index from 1 to 2). Option 2 – Use An Index / Match Formula Instead of using: try this: Option 3 – Use CHOOSE() embedded in the VLOOKUP() Like so: Personally, in this case, I think the best one for you is Option 2. LikeLike this 53. Hej Colin, I was not actively aware of the restriction that lookups could only be done “to the right”. Swapping ISO and Country columns was the easiest solution for me, but I will keep option 2 and 3 in mind in case I am stuck with the order of columns for any reason. Thank you very much for your help! PS: You don’t happen to know your way around data labels in a bubble chart as well? :-) LikeLike this 54. VERY helpful article, thank you! Particularly the systematic diagnosis ideas. The problem I resolved with some thought provocation from your post was I was getting a ‘False’ return to my vlookup formula. Turns out I had the formula correct but there were HIDDEN COLUMNS in my lookup range, one of which very coincidentally contained the actual text value FALSE. And of course that column was hidden and happened to be the column I THOUGHT I was referencing by counting the columns left to right in the data set. So, I was mighty confused to be getting a ‘False’ in the cell of the formula until I realized that ‘duh, hidden columns’ mean I’m getting a match but the return value isn’t in the column # I thought it was.’ Bottom line is your post helped to eliminate some possibilities and consider others, so again, thank you! LikeLike this This entry was posted in Microsoft Excel and tagged #N/A, Error, VLOOKUP. Bookmark the permalink.
{"url":"http://colinlegg.wordpress.com/2012/03/26/why-does-vlookup-return-an-na-error/","timestamp":"2014-04-21T12:15:07Z","content_type":null,"content_length":"273596","record_id":"<urn:uuid:950c9e4b-d9cf-4279-9ae7-36f7fae45d81>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
s maximum principle Hausdorff's maximum principle Hausdorff’s maximum principle Let $S$ be the set of all totally ordered subsets of $X$. $S$ is not empty, since the empty set is an element of $S$. Partial order $S$ by inclusion. Let $\tau$ be a chain (of elements) in $S$. Being each totally ordered, the union of all these elements of $\tau$ is again a totally ordered subset of $X$, and hence an element of $S$, as is easily verified. This shows that $S$, ordered by inclusion, is inductive. The result now follows from Zorn’s lemma. ∎ ZornsLemma,AxiomOfChoice,ZermelosWellOrderingTheorem, ZornsLemmaAndTheWellOrderingTheoremEquivalenceOfHaudorffsMaximumPrinciple, EveryVectorSpaceHasABasis, MaximalityPrinciple maximum principle, Hausdorff maximality theorem Mathematics Subject Classification no label found Added: 2002-09-29 - 23:48
{"url":"http://planetmath.org/hausdorffsmaximumprinciple","timestamp":"2014-04-18T23:18:23Z","content_type":null,"content_length":"36105","record_id":"<urn:uuid:c4fb0c7b-06c3-41f0-b734-fdb17505d2cf>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
Tannaka-Krein theorem Tannaka-Krein theorem Tannaka-Krein theorem (in the narrow sense) is a particular Tannaka duality theorem for compact topological groups. The original theorem is (in two independent and different formulations) stated and proved in • Tadao Tannaka, Über den Dualitätssatz der nichtkommutativen topologischen Gruppen, Tohoku Math. J. 45 (1938), n. 1, 1–12 (project euclid has only Tohoku new series!) • M.G. Krein, A principle of duality for bicompact groups and quadratic block algebras, Doklady AN SSSR 69 (1949), 725–728. For a textbook treatment see • E. Hewitt, K. A. Ross, Abstract harmonic analysis, 2 vols. See also Revised on June 5, 2011 17:21:47 by Urs Schreiber
{"url":"http://www.ncatlab.org/nlab/show/Tannaka-Krein+theorem","timestamp":"2014-04-19T12:00:37Z","content_type":null,"content_length":"12358","record_id":"<urn:uuid:1cccd340-a035-43a4-b8eb-93a29c875ece>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
If you were given a large data set such as the sales over the last... - (221764) | Transtutors describing data If you were given a large data set such as the sales over the last year of our top 1,000 customers, what might you be able to do with this data? Posted On: Sep 09 2012 06:46 AM Tags: Statistics, Central Tendency, Others, College Solution to be delivered in 24 hours after verification Solution to "describing data" Related Questions in Others Ask Your Question Now Copy and paste your question here... Have Files to Attach? Questions Asked Questions Answered Topics covered in Statistics
{"url":"http://www.transtutors.com/questions/describing-data-221764.htm","timestamp":"2014-04-20T00:49:43Z","content_type":null,"content_length":"65607","record_id":"<urn:uuid:1d306231-d6fd-4402-8d96-ddacd5f6c55a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Monkeys in the Jungle Date: 11/18/96 at 17:57:38 From: DR STEVEN R BERKMAN MD Subject: Monkeys in the jungle... A group of monkeys are hanging from some trees and see a pile of 3000 bananas (monkeys have a way of knowing exactly how many bananas are in a pile). They want to bring the bananas to their friends 1000 miles away on the other side of the jungle. So, they swing through the jungle carrying as many bananas as they can to the other side. The monkeys are indecisive, so they don't necessarily make one trip. Altogether, they can carry up to 1000 bananas at any given time, but every mile the group shares one banana. Since the monkeys are good friends, they always stick together. What is the MAXIMUM number of bananas the group of monkeys will be able to bring to their friends on the other side of the jungle? How do they do it? Date: 11/20/96 at 17:11:17 From: Doctor Lynn Subject: Re: Monkeys in the jungle... I ran this through a spreadsheet and eventually came up with this answer: (starting with 1000 bananas each time) They go 250 miles and drop 500 bananas and come back. They go 250 miles, pick up 250 bananas, go another 250 miles, drop 250 bananas and come back. They go 250 miles, pick up the 250 bananas, go 250 miles, pick up the other 250 bananas and reach their friends with 500 bananas. Not a particularly nice way of solving the problem and it may not be the MAXIMUM number, but I haven't been able to find a better one. Perhaps one of the other Doctors will be more help. -Doctor Lynn, The Math Forum Check out our web site! http://mathforum.org/dr.math/ Date: 11/21/96 at 03:14:34 From: DR STEVEN R BERKMAN MD Subject: Re: Monkeys in the jungle... Dr. Lynn: Thank you for your reply. But what if they go 333.3 miles and drop 666.7.... Go back and get another 1000...go 333.3 and drop another 666.7.. Go back and get the last 1000...go 333.3 and drop another 666.7. Now they have 2000 at the first point... Take 1000 and go another 333.3 mi and drop 666.7... Go back...get the last 1000...go the 333.3 and drop 666.7... Now they have 1333.333 two thirds of the way there... They pick up 1000...go the last 333.3 and drop 666.66666666 Go back....get the last 333.33 and then share the 666.666 with their In one of your archives is the problem using a camel...and they came up with 533.33 bananas... WE'RE SO CONFUSED! Any suggestions? Again, thanks for your help. Steven Berkman, M.D. Date: 11/27/96 at 16:13:10 From: Doctor Ceeks Subject: Re: Monkeys in the jungle... First of all, I'll assume that the bananas are actually eaten continuously, and not after each mile. After all, if the bananas are actually eaten only after each whole mile, then one could get all 3000 bananas any distance by carrying 1000 bananas half a mile at a time. Indeed, it may be conceptually better to think of this problem as monkeys carrying sacks of sand which leak at the rate of 1 pound per mile. This model more accurately describes the problem based on the kinds of statements you have made (such as that the monkeys can go back without using bananas and that there can be fractional amounts of bananas). So, I will phrase my answer in terms of sand, but you can replace sand with bananas if you want. (The final answer will be that 833 1/3 pounds of sand (bananas) can be brought to the destination.) Note that as long as there are more than 2000 pounds of sand, at least three trips are required to move all the sand some distance no matter how the sand is distributed along the path. We can therefore succeed in getting 2000 pounds of sand one third of the way to the destination by taking three trips one third of the way and back. This is what you do to start off your answer below. Now, as long as there are more than 1000 pounds of sand, at least two trips are required to move all the sand some distance no matter how the sand is distributed along the path. We can therefore succeeed in getting 1000 pounds of sand one third plus one half of the way to the destination by making two trips of 500 miles each. At this stage, there are 1000/6 miles to go, and this done, we reach the destination with 1000-1000/6=833 1/3 pounds of sand. Now we ask, why is this the best possible? Here, I will give a heuristic argument. I hope this argument will be convincing enough but I do want to point out that it is not a completely rigorous mathematical argument as it stands. However, it can be made rigorous with some extra effort (which I'm sorry for not wanting to do!). Imagine a very short segment of the path...say a foot long. For this one foot segment, we ask, how many trips are taken across this segment by sand bearing monkeys? The amount of sand lost to the earth on this segment is a multiple of the number of such trips. Now, we can imagine cutting the whole path into these 1 foot segments and placing a number over each segment indicating how many sand bearing trips are taken over that segment. For instance, a silly thing to do would be to carry 1000 pounds all the way in three separate trips resulting in no sand at the destination. Each segment would be assigned the number 3 because each 1 foot segment saw 3 sand bearing monkey trips crossing it. In the solution given above, there would be 3's over segments in the first third of the path, 2's over the next 500 miles, and 1's over the remaining sixth. Now, how ever the monkeys originally got the sand over to produce the assignment of numbers to the segments that we see, we can always make the monkeys move the sand over foot by foot (segment by segment) taking as many trips as indicated by the numbers to arrive at a solution which results in the same amount of sand at the destination, but with the property that the sand is moved one segment at a time. For instance, in the silly strategy, we took three trips of 1000 miles each. But we could also move 1000 pounds a foot, go back, move 1000 pounds a foot, go back, move 1000 pounds a foot. Then, divide what sand is left into thirds, and move each of the three piles of sand another foot, according to the prescription described in the last paragraph. The amount of sand arriving at the destination will be the same as the silly strategy, but the monkeys will behave very differently along the way. The key point of doing this is that we can now see that the furthest we can encounter a segment assigned a 2 to the starting point is 1000/3 miles into the path. This is because we want to minimize the losses for as long as possible, and this can be done only by assigning 3's as far as possible. Doing so, reduces the sand to 2000 pounds by the 1000/3 mile mark. Any other strategy will result in a loss of a thousand pounds at an earlier stage. Now observe that the best strategy must do this, because the best strategy will observe 2000 pounds of sand occuring closest to the destination. (If there were a strategy which observed 2000 pounds occuring earlier, then there would be less than 2000 pounds at the 1000/3 mile mark and so the strategy of getting 2000 pounds to the 1000/3 mile mark must do at least as good.) By similar reasoning, the furthest we can encounter a segment assigned a 1 is 500 miles further or at the 5/6 of 1000 mile mark. (And, of course, 1 is the minimum possible value a segment can be assigned unless the monkeys lose all the sand before getting to the It is true that there is the objection that using segments 1 foot long forbids strategies which have monkeys going distances that are not multiples of 1 foot. However, we can take the length of each segment to be arbitrarily small. If you know calculus, you can check that "in the limit", that is as the length of these segments tends to 0, you will achieve the same answer given above. -Doctor Ceeks, The Math Forum Check out our web site! http://mathforum.org/dr.math/ Date: 11/27/96 at 23:46:22 From: DR STEVEN R BERKMAN MD Subject: Re: Monkeys in the jungle... Doctor Ceeks: Thank you for your reply to my query. However, the monkeys eat one banana for every mile they travel... whether it be forward, toward their eventual goal, or backwards, to pick up more bananas. The consensus seems to be that 533 1/3 is the correct number. Steven Berkman, M.D. Date: 12/04/96 at 14:16:04 From: Doctor Ceeks Subject: Re: Monkeys in the jungle... According to your response earlier where your wrote But what if they go 333.3 miles and drop 666.7.... Go back and get another 1000...go 333.3 and drop another 666.7.. Go back and get the last 1000...go 333.3 and drop another 666.7. Now they have 2000 at the first point... it implies that the monkeys go backward without having to carry any bananas. If this is the case, then my answer is correct. If in fact the monkeys must eat bananas going backward too, then if they go 333.3 miles, they ate 333.3 bananas and they could only drop off 333.3 bananas since they would need 333.3 bananas in order to go back to the starting point. On the other hand, if you are saying that the monkeys would eat bananas going back IF they were carrying bananas, then that's ok, my solution accounts for this too. Let me say again my solution: Carry 1000 bananas 333 1/3 miles and drop off 666 2/3 bananas. Go back and get another 1000, go 333 1/3 and drop off another 666 2/3. Go back and get another 1000, go 333 1/3 and drop off another 666 2/3. Now they have 2000 at the first point (which is 333 1/3 miles from the starting point). This is exactly the same as the way you start in one of your Now, carry 1000 bananas another 500 miles further to the 833 1/3 mile mark and drop off 500 bananas. Go back and get the remaining 1000 bananas and travel 500 miles to the 833 1/3 mile mark and drop off 500 Now there are 1000 bananas at the 833 and 1/3 mile mark. Carry these 1000 bananas another 166 2/3 miles to the destination and drop off 833 and 1/3 bananas. (The answer in the archives is not an answer. It is just a suggestion at what the answer may be.) -Doctor Ceeks, The Math Forum Check out our web site! http://mathforum.org/dr.math/ Date: 12/04/96 at 14:54:24 From: Doctor Ceeks Subject: Re: Monkeys in the jungle... Above, you actually found a solution which deliver 666 2/3 bananas, so I'm confused why you wrote that you think the consensus is that 533 1/3 is correct? By the way, if the monkeys require a banana per mile in order to move anywhere, forward or back, then the answer is, indeed, 533 1/3. The proof can be modelled on similar lines as in the message I sent which gives a proof (assuming travel distances which are multiples of a foot) that 833 1/3 is the maximum value for monkeys which can travel back without carrying any bananas (as you have assumed in the method below). To overcome the "1 foot" problem, one can define a function F from points on the path to the integers by letting F(x) be the number of times monkeys cross x. Here, the condition that the monkeys be banana bearing is unnecessary because it is now required at the outset that monkeys must always carry bananas! But, also, it should be assumed that the monkeys only make a finite number of changes of direction, which is quite reasonable to assume. Then one can mark the points of change of direction and make the monkeys walk a new route (as in the other solution) but with the same effect in terms of bananas delivered. One then argues that the furthest along 2000 bananas can get is 200 miles into the path. Then one argues that the furthest 1000 bananas can get is 200+333 1/3 = 533 1/3 miles along the path. Then, they just carry the 1000 bananas to the destination and arrive with 533 1/3 Specifically, it can go like this: -Carry 1000 bananas 200 miles, drop off 600 bananas, return consuming all remaining 200 bananas to get back to the start. -Repeat the above again. -Now carry 1000 bananas 200 miles so at the 200 mile mark, there are 600+600+800 = 2000 bananas. -Now carry 1000 bananas 333 1/3 miles, drop off 333 1/3 bananas, go back 333 1/3 miles, pick up the 1000 bananas, go 333 1/3 miles so at the 533 1/3 mile mark, there are 333 1/3 + 666 2/3 = 1000 bananas. Now carry the 1000 bananas to the destination and arrive with 533 1/3 bananas. -Doctor Ceeks, The Math Forum Check out our web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/56751.html","timestamp":"2014-04-17T12:45:33Z","content_type":null,"content_length":"17645","record_id":"<urn:uuid:9bec6aec-2747-46e6-9ebe-bf0ebdf8d76c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
optimal way to grab closest sum Author optimal way to grab closest sum Ranch Hand Joined: Aug 13, i have a group of items, stored in an array, sorted according to the influence: 2004 say Posts: 177 class person name (string) influence (number) end class i have a number x, i need to get a group of items where the sum of their influence is the closest to the given number x s1 70 s2 60 s3 25 s4 11 and x = 90: so the result wil be s2 and s3 coz 70 + 25 = 95 is the closest sum to 90.. sorry for i dont know if such an algo has a name.. thanks for any help. Author and Joined: Jan 10, Moved to Java in General (intermediate) Posts: 60059 [Asking smart questions] [Bear's FrontMan] [About Bear] [Books by Bear] I like... Joined: Oct 14, Posts: 18141 If you want to find a subset of the numbers such that their sum is the largest possible value no greater than the given number X, that's called the backpack problem. I imagine your 8 problem is mathematically equivalent. I like... Ranch Hand Joined: Aug 13, is there a java implementation of this algorithm? 2004 thanks Posts: 177 Ranch Hand Joined: Jan 03, You can google for it, or just write your own. Posts: 490 "Computer science is no more about computers than astronomy is about telescopes" - Edsger Dijkstra subject: optimal way to grab closest sum
{"url":"http://www.coderanch.com/t/380721/java/java/optimal-grab-closest-sum","timestamp":"2014-04-21T12:25:19Z","content_type":null,"content_length":"27189","record_id":"<urn:uuid:4267eb27-a4ac-4072-b586-9b7180529cfd>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
fundamental vector field Fundamental vector fields and differential forms Let $G$ be a Lie group with Lie algebra $\mathfrak{g}\cong T_e G$, $M$ a $C^1$-differentiable manifold and $u\colon G \times M\to M$ a left $C^1$-differentiable action. For every $m \in M$, denoted by $u_m\colon G\to M$ the $C^1$-differentiable map $u_m\colon g\mapsto u(g,m)$. If $A \in \mathfrak{g}$, the $C^1$-differentiable vector field on $M$ given by $(\chi_A)_m = (T_{1_G}u_m)(A),\,\,\,\,m\in M,$ sometimes also denoted $A^\sharp$, is called the fundamental vector field corresponding to $A$. If $s\colon I\to G$ is a curve around $s(0) = 1_G$ representing $A$, then $(\chi_A)_m$ is represented by $t \mapsto s(t) m$. Analogously, one can define the fundamental vector field for the right actions. There is a dual notion as well. Given a right $C^1$-differentiable $G$-manifold $E$, a $C^1$-differentiable $1$-form $\omega$ with values in $\mathfrak{g}$, $\omega \in \Gamma (T^* E\otimes\mathfrak{g})$ is called the fundamental differential form corresponding to $A$, if for all $p\in E$, $\omega_p(\chi_A) = A$. The fundamental vector fields are important in study of differentiable actions and particularly useful in the basic study of Ehresmann connections. Indeed, the connection form is a fundamental form (“1st Ehresmann condition”), and there is a characterization of those fundamental forms which are connection forms (“2nd Ehresmann condition”).
{"url":"http://ncatlab.org/nlab/show/fundamental+vector+field","timestamp":"2014-04-19T12:00:41Z","content_type":null,"content_length":"21481","record_id":"<urn:uuid:8477378c-9a49-4264-9eac-c18ef7dca240>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
Garden Grove, CA Geometry Tutor Find a Garden Grove, CA Geometry Tutor ...My most frequently used words are "How" and "Why?" Therefore,I studied science in college. I wanted to know how things work. Why various things work in certain ways? 19 Subjects: including geometry, chemistry, reading, English ...Many teachers are used to giving lectures following the textbook which may not always be easy to understand and may leave a student feeling overwhelmed. My teaching methods ensure that my students feel confident in their Algebra 2 abilities by making the subject as simple as possible. I was nom... 22 Subjects: including geometry, English, Spanish, reading ...Pre-testing is necessary for proper entry to the school and will give the student a boost off the starting line to make for a smooth transition into the school and throughout the program. Working with a counselor before the enrollment process is mandatory to plan out each semester (quarter) to m... 20 Subjects: including geometry, reading, English, writing Dear Students and Parents! I am writing this letter to introduce myself as your potential tutor. I am a California credentialed secondary science teacher who is available to provide effective tutoring in science and math subjects. 21 Subjects: including geometry, reading, biology, algebra 1 ...Growing up I was always interesting in Math and science. I took as many classes as I could and ultimately ended up getting a degree in Biomedical Engineering from UCI. Along the way, I first started tutoring in Highschool working with students from grades 7th-12th. 14 Subjects: including geometry, chemistry, physics, algebra 1 Related Garden Grove, CA Tutors Garden Grove, CA Accounting Tutors Garden Grove, CA ACT Tutors Garden Grove, CA Algebra Tutors Garden Grove, CA Algebra 2 Tutors Garden Grove, CA Calculus Tutors Garden Grove, CA Geometry Tutors Garden Grove, CA Math Tutors Garden Grove, CA Prealgebra Tutors Garden Grove, CA Precalculus Tutors Garden Grove, CA SAT Tutors Garden Grove, CA SAT Math Tutors Garden Grove, CA Science Tutors Garden Grove, CA Statistics Tutors Garden Grove, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/Garden_Grove_CA_Geometry_tutors.php","timestamp":"2014-04-18T00:41:02Z","content_type":null,"content_length":"24044","record_id":"<urn:uuid:22e4343e-476d-4358-9d23-a8ab8bb29010>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Math 501. 1st Homework. Due Monday, September 10, 2007 Homework on "Chapter 1" 1. Let {xn} n=1 be a sequence of real numbers. Let l R. Show that lim sup xn = l if and only if: (i) for each > 0, there exists a positive integer n0 such that for each n n0 xn < l + . (ii) for each > 0 and each positive integer n0 there exists a positive integer n n0 such that xn > l + . 2. Prove that for a sequence of subsets {An} n=1 of the universal set , lim An exists if and only if for each , lim I( An) exists. Hint: Recall that I( An) is the indicator function of the set An. 3. Let A := {(x, y) R2
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/558/2157600.html","timestamp":"2014-04-20T23:32:46Z","content_type":null,"content_length":"7731","record_id":"<urn:uuid:11ad87f6-0f8c-4f41-91ce-9e58a8ad94fc>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus: The Root Test Video | MindBites About this Lesson • Type: Video Tutorial • Length: 13:29 • Media: Video/mp4 • Use: Watch Online & Download • Access Period: Unrestricted • Download: MP4 (iPod compatible) • Size: 145 MB • Posted: 06/26/2009 This lesson is part of the following series: Calculus (279 lessons, $198.00) Calculus: Sequences and Series (45 lessons, $69.30) Calculus: The Ratio and Root Tests (3 lessons, $5.94) Taught by Professor Edward Burger, this lesson comes from a comprehensive Calculus course. This course and others are available from Thinkwell, Inc. The full course can be found at http:// www.thinkwell.com/student/product/calculus. The full course covers limits, derivatives, implicit differentiation, integration or antidifferentiation, L'Hopital's Rule, functions and their inverses, improper integrals, integral calculus, differential calculus, sequences, series, differential equations, parametric equations, polar coordinates, vector calculus and a variety of other AP Calculus, College Calculus and Calculus II topics. Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of America. In 2006, Reader's Digest named him in the "100 Best of America". Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, "Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas" and of the textbook "The Heart of Mathematics: An Invitation to Effective Thinking". He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math journals, including The "Journal of Number Theory" and "American Mathematical Monthly". His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of numbers, and the theory of continued fractions. Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures. About this Author 2174 lessons Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/. Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through... Recent Reviews This lesson has not been reviewed. Please purchase the lesson to review. This lesson has not been reviewed. Please purchase the lesson to review. Sequences and Series The Ratio and Root Tests The Root Test Page [1 of 3] So the name of the game is to figure out whether an infinite series converges or diverges, and one powerful technique and one powerful test that we can use is the ratio test. Take the ratio of two consecutive terms in the infinite series, a term divided by the previous term, take the absolute value, take the limit. If that limit is small, namely less than one, the thing converges absolutely. If it is big, that limit is actually greater than one, it diverges. And if it equals one the test is inconclusive. There's another test that's very similar to the ratio test, but allows us to easily take a look at an infinite series and determine whether it converges absolutely or diverges similar to the ratio test. In fact, it's almost like a twin. It's a great idea whenever you see an infinite series that has powers within. Now, if you see a power, like 2^n or something like that, the ratio test will often work. But if you see a really complicated thing, which actually has and the exponent ends, then think something else. What you should think is, "Well, how do you undo if you have an n power?" Take the n^th root. Welcome to the n^th root test. So the n^th root test is a way of extracting n's that are powers. By taking n^th roots, they just go away. Now let me reason with you on this one so you can see the idea, the theme, just as we saw on the ratio theme. If this thing has to go to zero at very fast clip, one way to measure that is to take the n^th root and ask what happens to the n^th root. If that thing actually shrinks to zero really fast, then what that means is that the thing without the n^th root, in fact, must be really shrinking to zero and so this thing converges very quickly. Therefore, we think that the whole infinite series must, in fact, converge. So the idea is if you take the n^th root in some sense trying to actually prevent the thing from going to zero really fast, and it still goes to zero pretty fast, then it converges. So what exactly is the n^th root test? Well, the n^th root test says the following. Suppose you have this infinite series and we now construct a new limit. So I'm going to still call it limit, n going to infinity, and now what I'm going to do is instead of taking a ratio of terms, I'm just going to take one term, a[n], take the absolute value of it, so strip away negative signs, and then take the n^th root of that, and then take the limit. I'll call that limit rho. Rho for root this time. We call it rho, but now think about rho as a ratio, but as a root of the term. Well, the test actually looks the exact same, has the exact same feel to it. If, in fact, the rho is small, less than one, then I say that the infinite series converges. And not only does it converge, it converges absolutely. Then infinite series converges absolutely. And what if the ratio is large? If the n^th root is greater than one, then as we saw with the ratio test, this will diverge. This is not going to zero fast enough. The terms are not going to zero fast enough, so then the infinite series diverges. Leaving one case left, what if rho equals one? Well, then as always, we have a game over. This test was inconclusive. The thing might converge, the thing might diverge, the thing might converge absolutely, the thing might converge conditionally. We just don't know. This test fails. It can conclude nothing. So that is the n^th root test. You can use the n^th root test whenever you see things that are appearing with n powers. So let's take a look at some examples. Let's take a look at the infinite series, . Now again, I'm starting with a familiar one. This happens to be a geometric series where the r is 1/2, and so I know this actually converges, it converges absolutely since there's no negative signs. And in fact, moreover, I also know that this actually converges to a particular target that I can compute, because geometric series we actually know what the sum equals. So let's verify all that using the n^th root test. Let's just see the n^th root test in action. So I take the limit as n goes to infinity of the n^th root of the absolute value of the n^th term. What does that equal in this case? That equals the limit as n goes to infinity of the n^th root of - well, the absolute value of this is just itself. So I see here is just . Now happens if you take the n^ th root if this? Well, the n^th root of 1 is 1. And what's the n^th root of 2^n? Well, the n^th root and the n here, they annihilate themselves, and it's left with 2. And what's that limit? That limit is a half. Notice that that half, therefore, is the rho, is the root limit. That is less than one, and so therefore, we know by the root test that this thing converges absolutely, just as we already know and which was confirmed by the root test. This converges absolutely by the n^th root test. An easy example, but I want us to get in the habit of seeing how this limit looks. Well, let's try another one. . This is also another geometric series where now the r is actually -3, which you notice in absolute value is bigger than one. So we're already seen this should diverge. Let's verify that right now by using the root test. So if you use the root test we take the n^th root . What does that equal in this case? What's the limit, and then goes to infinity of the n^th root of what? Of the . And what does that equal? Well, the absolute equal of - and that's -3. It's all right at the end. So the (-1)^n and absolute value just becomes a 1, because negative and positive, you throw those away. So what I see here is just 3^n. So this is the limit as n goes to infinity of the n^th root of 3^n. And that equals just 3. So the limit is 3. That is the root limit, and that exceeds 1. So therefore, by the root test I conclude that this thing must diverge, which of course, is old news since we knew it diverged before. This is confirmed by this. This thing diverges by the n^th root test. Here you can see the root test in action. Let's try something a little bit more exciting, and where we don't know the answer in advance. So let's take the , so this is going to alternate a little bit, n + 1^n. Now what's going on here? Well, this is an alternating series. And what's happening with the denominator? This is a very fast growing denominator. I'm taking (n + 1)^n, so that's really growing fast, and it's growing faster than even a p-series we grow, because not only do I have a 2 in here, but I have a continually growing exponent. So my thinking is this is growing so fast to infinity, the whole thing must be shooting to zero at such a fast, rate this must converge. That's my guess. And I see n's raised to n powers. This is perfectly suited for the n^th root test. When you see n powers with n's here, the n^th root test is the way to go. So let's apply the n^th test and see what this says. So I take the limit, it then goes to infinity of the n^th root of the absolute value of the n^th term. So what does that equal? That equals the limit as n goes to infinity of the n^th root of...now I've got to put in the divided by (n + 1)^n. What does the absolute value do? It annihilates every other negative sign that I have, so that drops out. The bottom is already positive, so I'm just left with the limit as n goes to infinity of the n^th root of . So what's that? Well, the n^th root of 1 is just 1, and the n^th root of (n + 1)^n - well, the n^th root and the n power kill each other off, and I'm just left with n + 1. And what's that limit? Well, that limit is plainly zero, and so that is the root limit, which is plainly less than 1. It's as far from 1 as you can possibly get and still be positive. And so therefore, by the root test I can conclude that, in fact, this must converge. And in fact, it converges absolutely. So this thing converges absolutely by the root test. Let's try one last one together. This one is the . Now I've got to be completely honest with you, because I've never lied to you. First thing I do when I look at this is I think two things. First of all, when I see n's here with an n power here, I immediately think of the root test. So in the bottom of my mind I'm going, "root test." So the next thing I do when I see this, I say, "Hey. What about the quickie test?" In fact, do the terms shrink to zero? Answer: no. If you take the limit and let n go to infinity, you have to use a little L'Hôpital's rule in a very clever way, but we've already seen that this thing is semi-familiar; this actually converges to the . You can see that really fast if you want by remembering that we saw that this limit, limit as n goes to infinity . That actually works out to be e. If you work out that inside piece there, that's just , getting the common denominator, and this is the reciprocal of that. So this limit will actually be . So is not zero. These terms have to shrink to zero if there's any hope of this thing to converge. So already know that this diverges. But let's use the root test and see what happens. We'll give the root test; take the limit as n goes to infinity of the n^th root of the absolute value of the n^th term. That's the limit as n goes to infinity of the n^th root of what? Well, the absolute value of this is just itself, so I just see . The n^th root of an n power, they kill each other off, so I'm going to put the limit as n goes to infinity of . What's that limit? One application of L'Hôpital's rule, it equals one. So the root limit is one. And so rho equals one, which means what? It means, whoop, test inclusive. We don't know whether this converges or diverges. So lucky for us we had the foresight to even ask the quickie question, does this thing, in fact, shrink to zero? The answer is it absolutely does not, and so therefore, there's no hope of this converging. This diverges by the quickie test, because the terms themselves are not going to get small enough. They don't go to zero. They approach , and if you add up a lot of 's, that's going to be pretty big. So, in fact, here's an example where the n^th root test in fact is inconclusive, and yet we know the infinite series diverges, because the terms aren't getting small. So that was the n^th root test and you can enjoy taking roots all over the place now. Get it Now and Start Learning Embed this video on your site Copy and paste the following snippet: Link to this page Copy and paste the following snippet:
{"url":"http://www.mindbites.com/lesson/3633-calculus-the-root-test","timestamp":"2014-04-18T18:55:34Z","content_type":null,"content_length":"63371","record_id":"<urn:uuid:db5fd2cd-7e7a-4136-a659-0bf41c8f1431>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
Large cardinals and constructible universe up vote 6 down vote favorite We know that if $V=L$ holds, then $|\cal{P}(\omega)|=|\cal{P}(\omega)\cap \textrm{L}|=\aleph_1$ whereas, in the presence of a measurable cardinal (in fact, even Ramsey) $|\cal{P}(\omega)\cap \textrm {L}|=\aleph_0$. I remark that the cardinalities are of course computed in (the corresponding) $V$. The first is just the fact that the constructible universe satisfies CH, while the second has to do with the fact that in the presence of a measurable, $\omega_1^{L}<\omega_1$ i.e. the existence of large cardinals makes the relative $\omega_1^{L}$ "drop" below its "maximum possible" value (which is attained, if you want, in the "extreme case" when $V=L$). My question is, what can we say, in general, about the beaviour of $\omega_1^{L}$ given axioms of increasing strength above (or equal to, in strength) $V\neq L$? In particular, what happens if we just assume $V\neq L$? set-theory lo.logic large-cardinals add comment 1 Answer active oldest votes Each of the following implies that (the true) $\omega_1$ is inaccessible in $L$, and hence that there are only countably many constructible reals: • The proper forcing axiom • There is a Ramsey cardinal • $0^\#$ exists • All projective sets are Lebesgue measurable • All $\Sigma^1_3$-sets are Lebesgue measurable (EDIT: These are just some of the well-known examples that came to my mind. This list is neither exhaustive nor canonical.) up vote 11 down vote The mere existence of a nonconstructible set, or even a nonconstructible real, does not imply that $\omega_1^L$ is countable. There are many forcing notions in $L$ which do not accepted collapse $\omega_1$: adding one or many Cohen reals, destroying Souslin trees, etc. Each such forcing (over L) results in a model where $\omega_1=\omega_1^L$. In fact, "Martin's axiom plus continuum is arbitrarily large" is consistent with $\omega_1^L=\omega_1$. (But also with $\omega_1^L<\omega_1$.) ADDED: Preserving $\aleph_1$ of the ground model (which may or may not be the constructible universe $L$) is a key component in many independence proofs concerned with the theory of the reals. The "countable chain condition", which is enjoyed by all the forcings I mentioned above, is a property of forcing notions that guarantees preservation of $\aleph_1$; there are several other (weaker) properties which also suffice, most prominently (Baumgartner's) "Axiom A" and (Shelah's) "properness". 5 Also, my favorite, if every uncountable $\Pi^1_1$-set has a perfect subset then $\aleph_1$ is inaccessible in $L$. – Dave Marker Jul 12 '11 at 14:06 Thanks! This result (of Solovay? or is it older?) predates Shelah's theorem on $\Sigma^1_3$-measurability.Was this perhaps the first large cardinal whose existence follows from a statement in descriptive set theory? – Goldstern Jul 12 '11 at 21:29 1 According to Kanamori's book (p.135) Specker gets at least part of the credit for the result mentioned by Dave. – Ali Enayat Jul 12 '11 at 23:55 Thanks a lot everybody! – kvagk Jul 13 '11 at 10:40 add comment Not the answer you're looking for? Browse other questions tagged set-theory lo.logic large-cardinals or ask your own question.
{"url":"http://mathoverflow.net/questions/70097/large-cardinals-and-constructible-universe?sort=votes","timestamp":"2014-04-21T13:09:49Z","content_type":null,"content_length":"57061","record_id":"<urn:uuid:053f44a9-0986-4186-b79e-d35b7e007958>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
Ihara zeta function (graph theory) coefficients using a line graph up vote 0 down vote favorite I wish to take a simple undirected graph (i.e. the complete graph K_4) Arbitrarily direct said graph, and then create a line graph from the directed version of the graph. However, in Sage it appears to create a line graph that shows a connection between two edges (that are just inverses of each other), so what I really want is a line graph that doesn't give an edge connected to its own inverse. That's why I asked if we could remove cycles of length 2, but that doesn't seem to solve the problem. Here's what I am trying to work out: G = graphs.RandomGNP(4,1) GD = G.to_directed() #orients G m = GD.size() #number of edges of digraphG LG = GD.line_graph() #the line graph of the digraph IM = identity_matrix(QQ,GD.size()) T = LG.adjacency_matrix() #returns the adjacency matrix of the line graph var('u') #defines u as a variable X=IM-u*T #defines a new matrix X Z=X.det() #defines polynomial in u aka inverse of the Ihara zeta function Z #computes determinant of X Z.coefficients(u) #extracts coefficients considering my graph is a complete graph on 4 vertices - the coefficients should be as such: [coeff,degree of u] [1,0], [0,1], [0,2],[-8,3],[-2,4] im only interested in coefficients up to the order of n=#of nodes in the graph, so here for K_4 obviously n=4. where the coefficient of u^3 corresponds to the negative of twice the number of triangles in K_4 where the coefficient u^4 corresponds to the negative of twice the number of squares in K_4 1 Might there be a Sage forum for mailing list for Sage users? If so, that would be a better place to ask this question. – David Roberts Feb 23 '13 at 10:31 add comment 2 Answers active oldest votes Not sure if the question is well defined - you can remove 2-cycles in many ways, getting different digraphs. One possible approach is start with empty set of edges $E$. For all edges $(u,v) \in E(G)$ add $(u,v)$ to $E$ iff $(v,u) \not \in E$. Here is a sample sage implementation: def removedigons(G): up vote 1 down ed=[] vote for u,v in G.edges(labels=False): if (v,u) in ed: continue ed += [(u,v)] return g I suppose what I want to do is: Create a directed graph, but when I create a line graph from the directed graph - I don't want the line graph to give connections between edges and their own inverse. – jtaa Feb 23 '13 at 14:10 Note for that for the line graph you will get 2 vertices for the pair of reverse edges. – joro Feb 23 '13 at 14:45 yes, that's exactly what i want. what i don't want is two vertices (edges) connected to each other when one is the inverse of the other. is there a way to stop that happening? – jtaa Feb 23 '13 at 17:49 add comment I received an answer (from fidelbc on sagemath) elsewhere and this is what I had wanted: G=graphs.CompleteGraph(4) D=G.to_directed() L=D.line_graph() L.delete_edges([((x,y,None), up vote 0 down (y,x,None)) for x,y in G.edges( labels=None ) ]) L.delete_edges([((x,y,None), (y,x,None)) for y,x in G.edges( labels=None ) ]) vote accepted add comment Not the answer you're looking for? Browse other questions tagged sage graph-theory directed-graph cycles zeta-functions or ask your own question.
{"url":"http://mathoverflow.net/questions/122682/ihara-zeta-function-graph-theory-coefficients-using-a-line-graph?sort=oldest","timestamp":"2014-04-21T02:44:16Z","content_type":null,"content_length":"59572","record_id":"<urn:uuid:a0c869c2-3559-47dd-8835-bb63dd8801c8>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
An Introduction to the Bootstrap Results 1 - 10 of 2,049 - Machine Learning , 1996 "... Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making ..." Cited by 2479 (1 self) Add to MetaCart Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy. The vital element is the instability of the prediction method. If perturbing the learning set can cause significant changes in the predictor constructed, then bagging can improve accuracy. 1. Introduction A learning set of L consists of data f(y n ; x n ), n = 1; : : : ; Ng where the y's are either class labels or a numerical response. We have a procedure for using this learning set to form a predictor '(x; L) --- if the input is x we ... - Journal of the Royal Statistical Society, Series B , 1994 "... We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly zero and ..." Cited by 1828 (36 self) Add to MetaCart We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly zero and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described. Keywords: regression, subset selection, shrinkage, quadratic programming. 1 Introduction Consider the usual regression situation: we h... - INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE , 1995 "... We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), te ..." Cited by 749 (12 self) Add to MetaCart We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leaveone-out cross-validation. We report on a largescale experiment -- over half a million runs of C4.5 and a Naive-Bayes algorithm -- to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation, we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-word datasets similar to ours, the best method to use for model selection is ten-fold stratified cross validation, even if computation power allows using more folds. - Journal of Computational Biology , 2000 "... DNA hybridization arrays simultaneously measure the expression level for thousands of genes. These measurements provide a “snapshot ” of transcription levels within the cell. A major challenge in computational biology is to uncover, from such measurements, gene/protein interactions and key biologica ..." Cited by 731 (16 self) Add to MetaCart DNA hybridization arrays simultaneously measure the expression level for thousands of genes. These measurements provide a “snapshot ” of transcription levels within the cell. A major challenge in computational biology is to uncover, from such measurements, gene/protein interactions and key biological features of cellular systems. In this paper, we propose a new framework for discovering interactions between genes based on multiple expression measurements. This framework builds on the use of Bayesian networks for representing statistical dependencies. A Bayesian network is a graph-based model of joint multivariate probability distributions that captures properties of conditional independence between variables. Such models are attractive for their ability to describe complex stochastic processes and because they provide a clear methodology for learning from (noisy) observations. We start by showing how Bayesian networks can describe interactions between genes. We then describe a method for recovering gene interactions from microarray data using tools for learning Bayesian networks. Finally, we demonstrate this method on the S. cerevisiae cell-cycle measurements of Spellman et al. (1998). Key words: gene expression, microarrays, Bayesian methods. 1. - MACHINE LEARNING , 1999 "... Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several variants in co ..." Cited by 539 (2 self) Add to MetaCart Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several variants in conjunction with a decision tree inducer (three variants) and a Naive-Bayes inducer. The purpose of the study is to improve our understanding of why and when these algorithms, which use perturbation, reweighting, and combination techniques, affect classification error. We provide a bias and variance decomposition of the error to show how different methods and variants influence these two terms. This allowed us to determine that Bagging reduced variance of unstable methods, while boosting methods (AdaBoost and Arc-x4) reduced both the bias and variance of unstable methods but increased the variance for Naive-Bayes, which was very stable. We observed that Arc-x4 behaves differently than AdaBoost if reweighting is used instead of resampling, indicating a fundamental difference. Voting variants, some of which are introduced in this paper, include: pruning versus no pruning, use of probabilistic estimates, weight perturbations (Wagging), and backfitting of data. We found that Bagging improves when probabilistic estimates in conjunction with no-pruning are used, as well as when the data was backfit. We measure tree sizes and show an interesting positive correlation between the increase in the average tree size in AdaBoost trials and its success in reducing the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical problems that arise in implementing boosting algorithms are explored, including numerical instabilities and underflows. We use scatterplots that graphically show how AdaBoost reweights instances, emphasizing not only "hard" areas but also outliers and noise. , 1998 "... This article reviews five approximate statistical tests for determining whether one learning algorithm outperforms another on a particular learning task. These tests are compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (type I err ..." Cited by 531 (8 self) Add to MetaCart This article reviews five approximate statistical tests for determining whether one learning algorithm outperforms another on a particular learning task. These tests are compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (type I error). Two widely used statistical tests are shown to have high probability of type I error in certain situations and should never be used: a test for the difference of two proportions and a paired-differences t test based on taking several random train-test splits. A third test, a paired-differences t test based on 10-fold cross-validation, exhibits somewhat elevated probability of type I error. A fourth test, McNemar’s test, is shown to have low type I error. The fifth test is a new test, 5 × 2 cv, based on five iterations of twofold cross-validation. Experiments show that this test also has acceptable type I error. The article also measures the power (ability to detect algorithm differences when they do exist) of these tests. The cross-validated t test is the most powerful. The 5×2 cv test is shown to be slightly more powerful than McNemar’s test. The choice of the best test is determined by the computational cost of running the learning algorithm. For algorithms that can be executed only once, Mc-Nemar’s test is the only test with acceptable type I error. For algorithms that can be executed 10 times, the 5×2 cv test is recommended, because it is slightly more powerful and because it directly measures variation due to the choice of training set. , 1997 "... This paper analyses the recently suggested particle approach to filtering time series. We suggest that the algorithm is not robust to outliers for two reasons: the design of the simulators and the use of the discrete support to represent the sequentially updating prior distribution. Both problems ar ..." Cited by 519 (15 self) Add to MetaCart This paper analyses the recently suggested particle approach to filtering time series. We suggest that the algorithm is not robust to outliers for two reasons: the design of the simulators and the use of the discrete support to represent the sequentially updating prior distribution. Both problems are tackled in this paper. We believe we have largely solved the first problem and have reduced the order of magnitude of the second. In addition we introduce the idea of stratification into the particle filter which allows us to perform on-line Bayesian calculations about the parameters which index the models and maximum likelihood estimation. The new methods are illustrated by using a stochastic volatility model and a time series model of angles. Some key words: Filtering, Markov chain Monte Carlo, Particle filter, Simulation, SIR, State space. 1 1 , 2004 "... In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing ..." Cited by 473 (2 self) Add to MetaCart In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective. - IEEE TRANSACTIONS ON NEURAL NETWORKS , 2001 "... This paper provides an introduction to support vector machines (SVMs), kernel Fisher discriminant analysis, and ..." , 2002 "... Summary. Multiple-hypothesis testing involves guarding against much more complicated errors than single-hypothesis testing. Whereas we typically control the type I error rate for a single-hypothesis test, a compound error rate is controlled for multiple-hypothesis tests. For example, controlling the ..." Cited by 347 (6 self) Add to MetaCart Summary. Multiple-hypothesis testing involves guarding against much more complicated errors than single-hypothesis testing. Whereas we typically control the type I error rate for a single-hypothesis test, a compound error rate is controlled for multiple-hypothesis tests. For example, controlling the false discovery rate FDR traditionally involves intricate sequential p-value rejection methods based on the observed data. Whereas a sequential p-value method fixes the error rate and estimates its corresponding rejection region, we propose the opposite approach—we fix the rejection region and then estimate its corresponding error rate. This new approach offers increased applicability, accuracy and power. We apply the methodology to both the positive false discovery rate pFDR and FDR, and provide evidence for its benefits. It is shown that pFDR is probably the quantity of interest over FDR. Also discussed is the calculation of the q-value, the pFDR analogue of the p-value, which eliminates the need to set the error rate beforehand as is traditionally done. Some simple numerical examples are presented that show that this new approach can yield an increase of over eight times in power compared with the Benjamini–Hochberg FDR method.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=7696","timestamp":"2014-04-21T03:56:44Z","content_type":null,"content_length":"40673","record_id":"<urn:uuid:05baf736-0de8-4703-9719-8fc8f51b1f6c>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: If two guys walk into a store and one of the two guys get knock out how many are left? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/510937b7e4b0d9aa3c45f513","timestamp":"2014-04-19T01:56:44Z","content_type":null,"content_length":"56291","record_id":"<urn:uuid:dc8ceca2-67c5-4ab5-9f35-5bec154c43ed>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
search results Expand all Collapse all Results 1 - 2 of 2 1. CMB 2011 (vol 54 pp. 506) On the Canonical Solution of the Sturm-Liouville Problem with Singularity and Turning Point of Even Order In this paper, we are going to investigate the canonical property of solutions of systems of differential equations having a singularity and turning point of even order. First, by a replacement, we transform the system to the Sturm-Liouville equation with turning point. Using of the asymptotic estimates provided by Eberhard, Freiling, and Schneider for a special fundamental system of solutions of the Sturm-Liouville equation, we study the infinite product representation of solutions of the systems. Then we transform the Sturm-Liouville equation with turning point to the equation with singularity, then we study the asymptotic behavior of its solutions. Such representations are relevant to the inverse spectral problem. Keywords:turning point, singularity, Sturm-Liouville, infinite products, Hadamard's theorem, eigenvalues Categories:34B05, 34Lxx, 47E05 2. CMB 2009 (vol 52 pp. 481) Some Infinite Products of Ramanujan Type In his ``lost'' notebook, Ramanujan stated two results, which are equivalent to the identities \[ \prod_{n=1}^{\infty} \frac{(1-q^n)^5}{(1-q^{5n})} =1-5\sum_{n=1}^{\infty}\Big( \sum_{d \mid n} \qu {5}{d} d \Big) q^n \] and \[ q\prod_{n=1}^{\infty} \frac{(1-q^{5n})^5}{(1-q^{n})} =\sum_{n=1}^{\infty}\Big( \sum_{d \mid n} \qu{5}{n/d} d \Big) q^n. \] We give several more identities of this type. Keywords:Power series expansions of certain infinite products Categories:11E25, 11F11, 11F27, 30B10
{"url":"http://cms.math.ca/cmb/kw/infinite%20products","timestamp":"2014-04-17T15:37:22Z","content_type":null,"content_length":"28050","record_id":"<urn:uuid:154d07b3-98cd-48dc-9e57-c818e4ac0bee>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Items tagged with population Hello everyone, I have a problem with the Brownian Motion (I'm using Maple 15). Here is the code: X1:=SamplePath(X(t),t=0..T,timesteps =T/d): The last line give me a number different from zero, which is not true for a Brownian Motion as defined. Could you please tell me where is the problem? PS: I have the same problem with WienerProcess(sigma).
{"url":"http://www.mapleprimes.com/tags/population","timestamp":"2014-04-20T08:18:59Z","content_type":null,"content_length":"58541","record_id":"<urn:uuid:350dfd66-3b98-4ca5-80eb-58964e59e9c9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum: Teacher2Teacher - Q&A #918 View entire discussion [<< prev] [ next >>] From: A parent <cvt7@yahoo.com> To: Teacher2Teacher Public Discussion Date: 2001121520:37:40 Subject: A dissenting voice As a parent of a student who uses DG in the 9th grade, I am disappointed by the approach. There are two issues here, I believe. First, the traditional approach of using proofs was intended as an end in itself, as that portion of one's education that demonstrates the structure and meaning of mathematical reasoning. Assuming one dispenses with the relevance or usefulness of teaching the deductive nature of all mathematics, all that's left is the teaching of geometry as a practical skill -- the sort of thing a carpenter would need, for example, or a physicist. (That's not why mathematics has been considered a basic component of a liberal education, by the way). Secondly: How good is DG at what it attempts to accomplish? If the objective is to teach the facts, formulas, and methods of geometrical measurement and construction, one could teach it like the sciences -- physics or chemistry, for example. In that case, the group investigations of DG would be like labs, demonstrating the principles. But would anyone write a physics or chemistry book which expected the student to arrive at all physical laws and formulas on his/her own, from the labs? I don't think so. DG would be a wonderful book for a "math lab" or "fun with math" course, but, as a main course text, it leaves too much to the student. (There are no facts in DG. Only questions, and instructions on how to find the answers). Not only is the inductive approach to mathematics fundamentally invalid (math is deductive! science is inductive!), it can also be very misleading in practice. For example: Some triangle-related investigations in the book begin with "draw a scalene triangle, and ...". Well, drawing a scalene triangle is not as easy as it sounds. Most people, when asked to do so, will draw an isosceles triangle lying on one of its equal sides. Try to do an investigation on this triangle, and you will think that many of the properties of an isosceles triangle are those of the scalene. The devil is in the details. (And don't expect the text to correct you. All important statements in the book have blanks to be filled in by the student). Also, group investigations are always tricky, disregarding the fact that different students learn at different speeds, and *in different ways*. The brightest student always dominates the group, and the others are made to follow, without understanding. I can say a lot more. The author obviously loves geometry and his passion is welcome and it shows in the book. Again, there is a place for this approach on the side of a main course. But as a main approach to teaching geometry, I believe that this concept falls wide of the mark both in its objective and in its methodology. Post a reply to this message Post a related public discussion message Ask Teacher2Teacher a new question
{"url":"http://mathforum.org/t2t/discuss/message.taco?thread=918&n=3","timestamp":"2014-04-19T03:07:56Z","content_type":null,"content_length":"7166","record_id":"<urn:uuid:0439b5f9-bfe8-40dc-b96c-3c8f3a109521>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00529-ip-10-147-4-33.ec2.internal.warc.gz"}
A Political Redistricting Tool for the Rest of Us - The Population Density Function The Population Density Function All of the data collected by the United States during its decennial census is freely available from the Census Bureau. The data that we used throughout this paper comes from the 2010 data set. Our redistricting approach is driven primarily by the population density function, \(\rho\), of a particular state. The Census Bureau provides population data down to the resolution of a single census tract. The Bureau also provides the geographic shapes of each census tract as they were during the collection of the data. If we let \(T\) be a census tract, \(A(T)\) be the area of the census tract (as projected upon the GCS North American 1983 [arcGIS]), and \(p(T)\) be the population of the tract, we define the population density function as \[\rho(\phi,\theta) = p(T)/A(T),\qquad (\phi,\theta)\in T\] The density function, \(\rho\), is a piecewise constant function defined within the boundaries of a single state. For definiteness, we assume that \(\phi\in [0,\pi]\) is latitude (with \(0\) at the North Pole) and \(\theta \in (-\pi,\pi]\) is longitude (with \(0\) at the Prime Meridian and \(\theta \) increasing to the east). Though not a sphere, we will assume a spherical approximation of the earth with radius \(R=3958.755\) miles [Moritz]. To facilitate numerical algorithms, we further discretize \(\rho\) using a uniform grid in the latitude/longitude domain as in the figure below. For simplicity, we assume that the state that we are redistricting is bounded by a spherical patch (some rectangle in the \((\phi,\theta)\)-plane) that is completely contained in both the northern hemisphere and in the western hemisphere. Hence, the latitudes spanning the state are contained in the interval \([\phi_{\min},\phi_{\max}] \subset [0,\pi/2]\) and the longitudes are contained in the interval \([\theta_{\min},\theta_{\max}] \subset (-\ pi, 0]\). I.e. \(\phi_{\min}\) is the northern most latitude of the state and \(\theta_{\min}\) is the western most longitude of the state. We discretize the spherical patch as \begin{eqnarray*} \phi_i & = & \phi_{\min} + \frac{\phi_{\max}-\phi_{\min}}{M},~i=0,1,\ldots M \\ \theta_j & = & \theta_{\min} + \frac{\theta_{\max}-\theta_{\ min}}{N},~j=0,1,\ldots N. \end{eqnarray*} A uniform latitude/longitude grid on the surface of a sphere We then generate a discrete density function on the patch \(\rho_{ij} = \rho(x_i,y_j).\) In order to reconstitute the population of a particular region, it is necessary to know the area of of each grid square in the discretization. Recall that each grid square of latitude and longitude represents a small patch on the surface of the earth. Furthermore, though the grid squares are uniform in the latitude and longitude domain, the actual surface area represented by each square depends upon its latitude again see this figure and this figure. We approximate the area of the patch with upper-left hand corner at \((x_i, y_j)\) by assuming that the earth is a sphere and using a spherical surface integral. A flat projection of a uniform latitude longitude grid For later reference, the colored dots indicate a Moore-type neighborhood set. The red dot is the center of the neighborhood. The yellow dots are the neighbors of the center dot. Every dot is the center of an associated Moore neighborhood. First we convert \((x_i,y_j)\) to spherical coordinates. \begin{eqnarray*} \phi_i & = & (90^\circ - x_i)\cdot \frac{\pi}{180^\circ} \\ \theta_j & = & y_j \cdot \frac{\pi}{180^\circ}. \end{eqnarray*} Then, \begin{eqnarray*} A(x_i,y_j) & = & \int_{\theta_j}^{\theta_{j+1}} \int_{\phi_i}^{\phi_{i+1}} R^2 \sin\phi \, d\phi d\theta\\ & = & R^2(\theta_{j+1} - \theta_j) (\cos(\phi_{i+1}) - \cos(\ phi_i)). \end{eqnarray*} It follows immediately that the population of a particular grid square is well-approximated by \(\rho_{ij} A(x_i,y_j)\).
{"url":"http://www.maa.org/publications/periodicals/loci/a-political-redistricting-tool-for-the-rest-of-us-the-population-density-function-0","timestamp":"2014-04-16T11:15:48Z","content_type":null,"content_length":"101137","record_id":"<urn:uuid:6c1c1c93-3f77-4b68-bc3b-f782ff7981db>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
Can gravity cause an expansion of a sphere? Hi everybody, some time ago our teacher has shown us the following example from the theory of elasticity: Calculate how the gravity of the sphere changes its size. The sphere is made of ideal linear material (in practice, perhaps some metal) with Young modulus [itex]E[/itex] and Poisson ration [itex]\nu [/itex]. The amount of the material is such that if the gravity did not act, the radius of the sphere would be [itex]R_0[/itex]. Now imagine the gravity is "turned on". Do you think the sphere will shrink or expand? Teacher said (and the same can be found in Landau Lifgarbagez, Theory of elasticity, p. 21) that the sphere as a whole will actually expand due to gravity. Do you think such a strange conclusion can be correct?
{"url":"http://www.physicsforums.com/showthread.php?p=3907134","timestamp":"2014-04-19T09:36:40Z","content_type":null,"content_length":"38389","record_id":"<urn:uuid:146713a9-ee82-4ad3-9c0f-7451dd4f4b7e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
need help with calculus Hi, Thinkdesigns. Calculus is basically just the study of rates of change. If you always keep that basic definition in the back of your head you should never become overwhelmed by the breadth of the Anyway, you asked for some simple questions related to maximums and minimums. 1) If y = 4x - x^2 then find the maximum value of y and for what value of x does this value occur? 2) If y = x^2 - 6x + 10, determine if there is a local extrema and at what point it occurs. 3) If y = 10x + 1, determine if there is a local extrema and at what point it occurs. Those are a couple of the most basic types of minimum/maximum problems. If you want tougher examples or want a question answered go to the "Help Me!" thread. Last edited by irspow (2005-12-10 08:38:23)
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=2148","timestamp":"2014-04-16T10:49:51Z","content_type":null,"content_length":"12460","record_id":"<urn:uuid:80c7ef64-163d-4751-82e4-fa1c2a1688f1>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project Option Prices under the Fractional Black-Scholes Model This Demonstration shows the values of vanilla European options in a model based on fractional Brownian motion and on ordinary geometric Brownian motion (the Black–Scholes model). The strike price is fixed at 100. Options values in this model generally overprice Black–Scholes values. In spite of its having attractive properties as a model for the stock exchange, the suitability of fractional Brownian motion for option pricing is controversial. There is some evidence that certain stock returns may exhibit the phenomenon of "long memory" (slowly decreasing covariance between returns at different times) [2], though this seems to be fairly weak. It is also generally accepted that stock returns display the phenomenon of "clustering". None of these phenomena appear in semi-martingale models, such as the Black–Scholes model. They do appear, however, if we consider the analogue of the Black–Scholes model based on fractional Brownian motion with Hurst index , where . Since fractional Brownian motion is not a semi-martingale, the Itô theory of stochastic integrals cannot be directly applied to it. One can try to replace the Itô integral by a version of the pathwise Riemann–Stieltjes integral, but then, as has been shown by Rogers [6], the resulting model of option values admits arbitrage. As this is contrary to empirical evidence, it has been generally thought that models based on fractional Brownian are not usable for option pricing. Hu and Øksendal [4] defined a new stochastic integral based on the Skorokhod integral and the Wick product and showed that a model based on this integral does not admit arbitrage. Unfortunately, it was shown in [1] that this model does not make economic sense. However, recently Guasoni [3] showed that as soon as one introduces proportional transaction costs in the fractional Black-Scholes model, arbitrage opportunities vanish. In this Demonstration we adopt a different approach, based on the work of Norros, Valkeila, and Virtamo [3]. They have shown that one can define a centered Gaussian martingale (called "the fundamental martingale") that generates the same filtration as the fractional Brownian motion. Since it is the filtration rather than the stochastic process itself that represents information provided by the market, it seems reasonable to use this martingale for option pricing. It is then easy to obtain formulas analogous to the classical Black–Scholes formulas (and coinciding with them when the Hurst index is ). [1] T. Bjork and H. Hult, "A Note on Wick Products and the Fractional Black-Scholes Finance Stoch., 9 (2), 2005 pp. 197–209. [2] N. J. Cutland, P. E. Kopp, and W. Willinger, "Stock Price Returns and the Joseph Effect: A Fractional Version of the Black-Scholes Model," in Proceedings of the Former Ascona Conferences on Stochastic Analysis, Random Fields and Applications, Progress in Probability , Vol. 36, Basel: Birkhauser, 1995 pp. 327–351. [3] P. Guasoni, "No Arbitrage under Transaction Costs, with Fractional Brownian Motion and Beyond," Math. Finance, 16 (3), 2006 pp. 569–582. [4] Y. Hu and B. Øksendal, "Fractional White Noise Calculus and Applications to Finance," Inf. Dim. Anal. Quantum Probab. Rel. Top., 6 (1), 2003 pp. 1–32. [5] I. Norros, E. Valkeila, and J. Virtamo, "A Girsanov-Type Formula for the Fractional Brownian Motion," in Proceedings of the First Nordic-Russian Symposium on Stochastics , Helsinki, Finland, 1996. [6] L. C. G. Rogers, "Arbitrage with Fractional Brownian Motion," Math. Finance, 7 (1), 1997 pp. 95–105.
{"url":"http://demonstrations.wolfram.com/OptionPricesUnderTheFractionalBlackScholesModel/","timestamp":"2014-04-17T09:42:59Z","content_type":null,"content_length":"47130","record_id":"<urn:uuid:6f3af410-1f28-43c2-b3ec-460e4ffea236>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
5. Arc Length, Angles, and Rotations Theorems 5.1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 Theorem 5.1: If two distinct points on a circle are not diametrically opposed, then the tangent lines at those points meet a point outside the circle. Theorem 5.2: Let P be a partition of the arc between A and B. Then the inside approximation corresponding to P is shorter than the outside approximation. If Q is a refinement of P. Then the inside approximation corresponding to Q is larger than the inside approximation corresponding to P, and the outside approximation corresponding to Q is less than the outside approximation corresponding to P Theorem 5.3: Arc length is preserved by isometries.^1 Theorem 5.4: (The angle addition axiom) Let C be a point on the arc between A and B. Then the length of the arc from A to B is the sum of the lengths of the arcs from A to C and from C to B.^2 Theorem 5.5: If a circle is drawn at the vertex of an angle, then the arc length between the points on the circle where the arms of the angle meet the circle is proportional to the radius of the Theorem 5.6: The number of radians in an angle is preserved by isometries. Theorem 5.7: The number of radians in a straight angle is half of the number of radians in a full circle. Theorem 5.8: The number of radians in a right angle is half of the number of radians in a straight angle. Theorem 5.9: The length of the arc of the unit circle above the points a and c on the x-axis is given by^T3 Theorem 5.10: The number of radians in an angle is a continuous function of the x-coordinate of a point on the moveable arm of the angle kept at a fixed distance from the vertex. Theorem 5.11: Vertical angles are the same size. Theorem 5.12: A rotation is a composition of two reflections, and hence is an invertible isometry.^3 Theorem 5.13: (The Protractor Axiom) Let x be a positive real number less than pi. Let A and B be two points. Then there are exactly two lines going through point A which make an angle of x radians with the ray from A through B. In one, the angle would be measured clockwise from A, and in the other, the angle would be measured counterclockwise from A. The two angles are reflections of each other about the line determined by AB. 6. Parallel Lines and Similar and Congruent Triangles
{"url":"http://www.sonoma.edu/users/w/wilsonst/Papers/Geometry/angles/default.html","timestamp":"2014-04-16T13:06:37Z","content_type":null,"content_length":"5597","record_id":"<urn:uuid:3297f84a-0b03-4270-a5a9-cdb2e51ed5a9>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
Continued Fractions A. Ya. Khinchin Continued Fractions 1997; Dover Publications; 0486696308 Elementary-level text by noted Soviet mathematician offers superb introduction to positive-integral elements of theory of continued fractions. Clear, straightforward presentation of the properties of the apparatus, the representation of numbers by continued fractions and the measure theory of continued fractions. 1964 edition. Prefaces.
{"url":"http://www.researchbooks.org/0486696308/CONTINUED-FRACTIONS/","timestamp":"2014-04-18T23:17:26Z","content_type":null,"content_length":"2194","record_id":"<urn:uuid:0c1f38cc-908c-4581-aeb0-d75dd15f3deb>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
Quarks 'swing' to the tones of random numbers A matrix is a rectangular array of numbers. A random matrix can be compared to a Sudoku filled with random numbers. Matrices are part of the equations governing the movements of the particles. In a random matrix there are numbers that are entered randomly, while there are still certain symmetries, for example, you can require that the numbers in the lower left should be a copy of the numbers above the diagonal. This is called a symmetrical matrix. At the Large Hadron Collider at CERN protons crash into each other at incredibly high energies in order to 'smash' the protons and to study the elementary particles of nature - including quarks. Quarks are found in each proton and are bound together by forces which cause all other known forces of nature to fade. To understand the effects of these strong forces between the quarks is one of the greatest challenges in modern particle physics. New theoretical results from the Niels Bohr Institute show that enormous quantities of random numbers can describe the way in which quarks ’swing’ inside the protons. The results have been published in arXiv and will be published in the journal Physical Review Letters. Just as we must subject ourselves, for example, to the laws of gravity and not just float around weightless, quarks in protons are also subject to the laws of physics. Quarks are one of the universe’s smallest, known building blocks. Each proton inside the atomic nucleus is made up of three quarks and the forces between the quarks are so strong that they can never - under normal circumstances, escape the protons Left- and right-handed quarks The quarks combined charges give the proton its charge. But if you add up the masses of the quarks you do not get the mass of the proton. Instead, the mass of the proton is dependent on how the quarks swing. The oscillations of the quarks are also central for a variety of physical phenomena. That is why researchers have worked for years to find a theoretical method for describing the oscillations of quarks. The two lightest quarks, 'up' and 'down' quarks, are so light that they can be regarded as massless in practice. There are two types of such massless quarks, which might be called left-handed and right-handed. The mathematical equation governing quarks’ movements show that the left-handed quarks swing independently of the right-handed. But in spite of the equation being correct, the left-handed quarks love to ’swing’ with the right-handed. Spontaneous symmetry breaking "Even though this sounds like a contradiction, it is actually a cornerstone of theoretical physics. The phenomenon is called spontaneous symmetry breaking and it is quite easy to illustrate", explains Kim Splittorff, Associate Professor and theoretical particle physicist at the Niels Bohr Institute, and gives an example: A dance floor is filled with people dancing to rhythmic music. The male dancers represent the left-handed quarks and the female dancers the right-handed quarks. All dance without dance partners and therefore all can dance around freely. Now the DJ puts on a slow dance and the dancers pair off. Suddenly, they cannot spin around freely by themselves. The male (left-handed) and female (right-handed) dancers can only spin around in pairs by agreeing on it. We say that the symmetry ’each person swings around, independent of all others’ is broken into a different symmetry ’a pair can swing around, independent of other pairs’. Similarly for quarks, it is the simple solution that the left-handed do not swing with the right-handed. But a more stabile solution is that they hold onto each other. This is spontaneous symmetry Dance to random tones "Over several years it became increasingly clear that the way in which the left-handed and right-handed quarks come together can be described using a massive quantities of random numbers. These random numbers are elements in a matrix, which one may think of as a Soduko filled in at random. In technical jargon these are called Random Matrices", explains Kim Splittorff, who has developed the new theory together with Poul Henrik Damgaard, Niels Bohr International Academy and Discovery Center and Jac Verbaarschot, Stony Brook, New York. Even though random numbers are involved, what comes out is not entirely random. You could say that the equation that determines the oscillations of the quarks give rise to a dance determined by random notes. This description of quarks has proven to be extremely useful for researchers who are looking for a precise numerical description of the quarks inside a proton. It requires some of the most advanced supercomputers in the world to make calculations about the quarks in a proton. The central question that the supercomputers are chewing on is how closely the left-handed and right-handed quarks ’dance’ together. These calculations can also show why the quarks remain inside the protons. One problem up until now has been that these numerical descriptions have to use an approximation to the 'real' equation for the quarks. Now the three researchers have shown how to correct for this so that the quarks in the numerical calculations also 'swing' correctly to random numbers. New understanding of the data ”Using our results we can now describe the numerical calculations from large research groups at CERN and leading universities very accurately”, says Kim Splittorff. ”What is new about our work is that not only the exact equation for quarks, but also the approximation, which researchers who work numerically have to use, can be described using random matrices. It is already extremely surprising that the exact equation shows that the quarks swing by random numbers. It is even more exciting that the approximation used for the equation has a completely analogous description. Having an accurate analytical description available for the numerical simulations is a powerful tool that provides an entirely new understanding of the numerical data. In particular, we can now measure very precisely how closely the right-handed and left-handed quarks are dancing”, he says about the new perspectives in the world of particle physics. Facts on Quarks Quarks are one of the universe's smallest, known building blocks. Each proton inside the atomic nucleus is made up three quarks. There are six different kinds of quarks: Up, down, top, bottom, charm and strange and their anti-quarks. Ordinary matter consists only of up- and down-quarks and electrons. A particle made up of quarks is affected by the strong nuclear force, which is one of the four forces of nature. The force binds the quarks together and is approximately 10^33 times stronger than gravity and 100 times stronger than the electro-magnetic force. But the range is small, equivalent to an atomic diameter of approximately 10^-15 m. More information: Article in arXiv
{"url":"http://phys.org/news204809213.html","timestamp":"2014-04-19T20:40:18Z","content_type":null,"content_length":"72631","record_id":"<urn:uuid:a21b5875-f8e7-4146-bc31-f72fddfb9f3c>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by Emily on Friday, October 1, 2010 at 4:45pm. Ali did his homework at school with a graphing calculator. He determined that the equation of the line of best fit for some data was y=2.63x-1.29. Once he got home, he realized he had mixed up the independent and dependent variables. Write the correct equation for the relation in the form y=mx+b. • Math - Henry, Friday, October 1, 2010 at 5:10pm x = 2.63y - 1.29 • Math - Emily, Friday, October 1, 2010 at 5:12pm How did u get the answer? • Math - Henry, Friday, October 1, 2010 at 6:28pm IN the 1st Eq, y was the dependent variable; because its' value depended on the value assigned to x. So, if we graphed the 1st Eq, we would select values for x and the value we calculate for y would DEPEND on the value of x. • Math - Conal, Sunday, September 15, 2013 at 10:46pm (x-.1.29) divided by 2.63= Y Question asks for linear equation form, (divide by everything in bracket)1 over 2.63 is 0.38, 1.29 over 2.63 is 0.49 Ans: y=0.38x+0.49 • Math - Conal, Sunday, September 15, 2013 at 10:46pm (x-.1.29) divided by 2.63= Y Question asks for linear equation form, (divide by everything in bracket)1 over 2.63 is 0.38, 1.29 over 2.63 is 0.49 Ans: y=0.38x+0.49 Related Questions Chemistry - Given data collected for a liquid. Equilibrium vapor pressure in ... math - 1. The table shows the number of wild strawberry plants in a particular ... math - I don't understand this problem at all. It involves finding the line of ... math - The blue line of best fit for boys intersects the vertical axis at 20 ... advanced math - I'm not sure how to find the line of best fit. I have two ... math - Use the following table to answer the questions. (Give your answers ... statistics(best fit line) - I'm trying to graph a set of points with data, and I... Algebra - These data represent the population of a certain city. Year Population... Math - graphing - i am having trouble with a graphing calculator please help ... TI-86 calculator - How do you afte entering data into the table find that ...
{"url":"http://www.jiskha.com/display.cgi?id=1285965942","timestamp":"2014-04-17T22:08:41Z","content_type":null,"content_length":"9613","record_id":"<urn:uuid:92bca126-69e7-4bed-ab97-c5dd15bdfbb5>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
Center of Mass, Man, Woman, and Frictionless Ice 1. The problem statement, all variables and given/known data A 58 kg woman and an 78 kg man stand 8.00 m apart on frictionless ice. How far from the woman is their CM? 4.6 m If each holds one end of a rope, and the man pulls on the rope so that he moves 2.6 m, how far from the woman will he be now? Use two significant figures in answer. 1.9 m How far will the man have moved when he collides with the woman? 2. Relevant equations mass man * Δx man = mass woman * -Δx woman 3. The attempt at a solution This can't be right because it's too close to letter B's answer. I'm assuming because the ice is frictionless that they'll just keep going until the hit each other. (78 kg)(x) = -(58 kg)(4.6 m) x = -3.42 m 8.00 m - 2.6 m - 3.42 m = 1.98 m
{"url":"http://www.physicsforums.com/showthread.php?p=3804537","timestamp":"2014-04-17T18:24:22Z","content_type":null,"content_length":"29125","record_id":"<urn:uuid:a298d8ff-dc2b-49a5-a1f0-c561dd4a39ff>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Trooper, PA Math Tutor Find a Trooper, PA Math Tutor ...I value and respect the incredible variety of personalities that I encounter while teaching, and I greatly enjoy meeting students from various backgrounds. I understand that each student deserves specialized consideration in order to succeed, and I happily adapt my teaching methods to best meet ... 17 Subjects: including ACT Math, SAT math, English, reading ...I taught for Johns Hopkins Center for Gifted and Talented Youth, Kennedy Krieger High School, and Towson University. I have also worked as a private tutor. I have excellent references available upon request and would be happy to email my Curriculum Vita to interested parties. 55 Subjects: including prealgebra, linear algebra, grammar, geometry ...As an experienced SAT tutor, I spend much of my time advising students about college decisions. I have helped students work on college applications and prepare for college work. I have also advised students on college majors and career goals. 10 Subjects: including algebra 1, algebra 2, vocabulary, grammar ...I had a lot of success with students considered to be at the lowest level of the class by their teachers. It turned out that a little one on one attention, as well as a lot of encouragement throughout the process gave unexpected results, and all of my students started growing before my eyes, and... 7 Subjects: including geometry, prealgebra, precalculus, trigonometry I'm a retired college instructor and software developer and live in Philadelphia. I have tutored SAT math and reading for The Princeton Review, tutored K-12 math and reading and SAT for Huntington Learning Centers for over ten years, and developed award-winning math tutorials. 14 Subjects: including algebra 1, algebra 2, geometry, precalculus Related Trooper, PA Tutors Trooper, PA Accounting Tutors Trooper, PA ACT Tutors Trooper, PA Algebra Tutors Trooper, PA Algebra 2 Tutors Trooper, PA Calculus Tutors Trooper, PA Geometry Tutors Trooper, PA Math Tutors Trooper, PA Prealgebra Tutors Trooper, PA Precalculus Tutors Trooper, PA SAT Tutors Trooper, PA SAT Math Tutors Trooper, PA Science Tutors Trooper, PA Statistics Tutors Trooper, PA Trigonometry Tutors Nearby Cities With Math Tutor Audubon, PA Math Tutors Center Square, PA Math Tutors Chesterbrook, PA Math Tutors Eagleville, PA Math Tutors East Norriton, PA Math Tutors Fairview Village Math Tutors Graterford, PA Math Tutors Gulph Mills, PA Math Tutors Jeffersonville, PA Math Tutors Plymouth Valley, PA Math Tutors Sanatoga, PA Math Tutors Strafford, PA Math Tutors Tredyffrin, PA Math Tutors Valley Forge Math Tutors Wyndmoor, PA Math Tutors
{"url":"http://www.purplemath.com/Trooper_PA_Math_tutors.php","timestamp":"2014-04-17T16:03:14Z","content_type":null,"content_length":"23897","record_id":"<urn:uuid:82289f9c-553c-45eb-87c6-c30afb520aa1>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
Find largest square number - without using calculator March 31st 2011, 11:30 PM Find largest square number - without using calculator Hey All, This problem must be done without the use of a calculator. I am able to guesstimate my way to a close enough answer but need some help to improve this. If n is the largest square number such that $s \leq n$, find s when (i) $n = 6.4 \times 10^{3}$ (ii) $n = 6.4 \times 10^{6}$ I got (i) $s = 6400$ easily since its a perfect square, For (ii) I wrote the number as $64 \times 10^{4} \times 10$ Then nearest perfect square close to 10 is 9, so $s \approx 8^{2} \times 100^{2} \times 3^{2}$ From there I figured that the number is between $2400^{2}$ and further got to $s = 2500^{2} = 6250000$ The problem is that the required answer is $6395841$ and the weight-age is only 1 mark. Using the calculator suggests this is the exactly correct answer. But since this is a non-calculator use problem, leads me to believe I am missing an obvious/simpler/faster way of getting there. What am I missing? Thanks again for all your help. April 1st 2011, 02:31 AM Hey All, This problem must be done without the use of a calculator. I am able to guesstimate my way to a close enough answer but need some help to improve this. If n <-- typo? Shouldn't that be s? is the largest square number such that $s \leq n$, find s when (i) $n = 6.4 \times 10^{3}$ (ii) $n = 6.4 \times 10^{6}$ I got (i) $s = 6400$ easily since its a perfect square, For (ii) I wrote the number as $64 \times 10^{4} \times 10$ Then nearest perfect square close to 10 is 9, so $s \approx 8^{2} \times 100^{2} \times 3^{2}$ From there I figured that the number is between $2400^{2}$ and further got to $s = 2500^{2} = 6250000$ The problem is that the required answer is $6395841$ and the weight-age is only 1 mark. Using the calculator suggests this is the exactly correct answer. But since this is a non-calculator use problem, leads me to believe I am missing an obvious/simpler/faster way of getting there. What am I missing? Thanks again for all your help. I'm only guessing: $6.4 \cdot 10^6 = 640 \cdot 100^2$ So $25 < a < 26$ (see attachment) Use linear interpolation. According to my sketch $a \approx 25 + \frac{15}{51}$ Since you multiply the apprximate value of a by 100 to get s you only have to calculate the first two digits of $\frac{15}{51} = 0.29$ Thus $a = 25.29~\implies~s=2529$ April 1st 2011, 07:12 AM Awesome @earboth! A much clearer and very elegant solution. Learning something new everyday here. Thanks again!
{"url":"http://mathhelpforum.com/algebra/176499-find-largest-square-number-without-using-calculator-print.html","timestamp":"2014-04-20T02:23:06Z","content_type":null,"content_length":"11015","record_id":"<urn:uuid:5956950f-1a01-4642-a9c5-8947c00b50b6>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Uniqueness Theorem Yes, of course if you change the initial conditions, you will have different solutions! What the "existance and uniqueness" theorem for initial value problems says is that a given (well behaved) differential equation, with specific initial conditions will have a unique solution. Specifically, the basic "existance and uniqueness theorem" for first order equations, as given in most introductory texts, says "If f(x, y) is continuous in x and y and "Lipschitz" in y in some neighborhood of [itex](x_0, y_0)[/itex] then the differential equation dy/dx= f(x,y) with initial value [itex]y(x_0)= y_0[/itex] has a unique solution in some neighborhood of [itex]x_0[/itex]". (A function, f(x), is said to be "Lipschitz" in x on a neighborhood if there exists some constant C so that |f(x)- f(y)|< C|x- y| for all x and y in that neighborhood. One can show that all functions that are differentiable in a given neighborhood are Lipschitz there so many introductory texts use "differentiable" as a sufficient but not necessary condition.) We can extend that to higher order equations, for example [itex]d^2y/dx^2= f(x, y, dy/dx)[/itex] by letting u= dy/dx and writing the single equation as two first order equations, dy/dx= u and du/dx= f(x, y, u). We can then represent those equations as a single first order vector equation by taking [itex]V= <y, u>[/itex] so that [itex]dV/dx= <dy/dx, du/dx>= <u, f(x,y,u)>[/itex]. Of course, we now need a condition of the form [itex]V(x_0)= <y(x_0), u(x_0)>[itex] is given which means that we must be given values of y and its derivative at the same value of [itex]x_0[/itex], not two different For example, the very simple equation [itex]d^2y/dx^2+ y= 0[/itex] with the boundary values y(0)= 0, [itex]y'(\pi/2)= 0[/itex] does NOT have a unique solution. Again, the basic "existance and uniqueness theorem" for intial value problems does NOT say that there exist a unique solution to a differential equation that will work for any initial conditions. It says that there exists a unique solution that will match specific given initial conditions.
{"url":"http://www.physicsforums.com/showthread.php?p=4208142","timestamp":"2014-04-16T10:37:33Z","content_type":null,"content_length":"24388","record_id":"<urn:uuid:8196d92e-c054-4b86-b2f4-318e6242ff2e>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
A question on the use of the rational zero theorem February 20th 2013, 06:46 AM A question on the use of the rational zero theorem I graphed the function $x^3-2x+3$, and it appears that there is one real root. I wanted to use the rational zero theorem to determine the root but it doesn't agree with my geometric picture. According to Wolfram, if the coefficients of the polynomial are specified to be integers then we can form rational zeros, $\pm{p/q}$, from the leading and terminal coefficients. From this, I got $\pm3$, $\pm1$. But this is incorrect as none of the possible rational zeros check out. February 20th 2013, 09:02 AM Re: A question on the use of the rational zero theorem The only real root of this equation is irrational. February 20th 2013, 09:21 AM Re: A question on the use of the rational zero theorem If an integer is a root of the function you mentioned then this root would be a factor of the constant term .However not all the factors of the constant term would be a root of f(x)=0 .I agree with Emakarov the only root of your function is irrational and lies between -2 and -1. February 20th 2013, 10:39 AM Re: A question on the use of the rational zero theorem Okay, so because the only real root of this equation is irrational, then the rational zero theorem does not apply? I'm confused because the rational zero theorem doesn't mention that it has these February 20th 2013, 12:23 PM Re: A question on the use of the rational zero theorem The rational root theorem applied to a polynomial f(x) with integer coefficients says the following. For every rational number x written as a fraction p / q in lowest terms, if f(x) = 0, then p divides the constant term of f and q divides the leading coefficient of f. This statement is true for the polynomial from the OP, i.e., $f(x) = x^3-2x+3$. Indeed, take any rational number x. Then f(x) ≠ 0. Therefore, the premise of the implication "if f(x) = 0, then..." is false, which means that the whole implication is true. In other words, the theorem says something only about rational roots of a polynomial. It says nothing about irrational roots, and it does not claim that there have to be rational roots.
{"url":"http://mathhelpforum.com/algebra/213469-question-use-rational-zero-theorem-print.html","timestamp":"2014-04-20T19:12:41Z","content_type":null,"content_length":"6993","record_id":"<urn:uuid:379c1338-35f9-4a50-b308-ccbc0cbdac4c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
is this a way to solve a radical equation? April 15th 2011, 08:25 PM #1 Junior Member Mar 2011 is this a way to solve a radical equation? so FOIL the left side, square the right side for, 2x^2-7=x^2-6x+9. Add 7: 2x^2=x^2-6x+16 but then divide by 2 instead of adding 2x^2: x^2=(x^2-6x+16)(1/2). Then subtract x^2 and get .5x^2-3x+8=0. Then use the quadratic formula to solve for x. Is this a licit way to solve this? If not, then why not? No. You can't FOIL the left hand side because it's not the product of two binomials... What you actually have is Sqrt(2x^2 - 7) = 3 - x. Start by squaring both sides to undo the square root, then expand the RHS and solve the resulting quadratic. Don't you have to foil the left side then, since you are squaring both sides? That's what I meant in any case. You do not FOIL the LHS. That is what Prove It is trying to say. sqrt(2x^2 - 7) = 3 - x [ sqrt(2x^2 - 7) ]^2 = (3 - x)^2 2x^2 - 7 = (3 - x)^2 Now FOIL the RHS... Also, when you are done check your solutions to make sure they work. Squaring both sides of an equation often introduces extra solutions. April 15th 2011, 08:29 PM #2 April 16th 2011, 08:10 AM #3 Junior Member Mar 2011 April 16th 2011, 08:14 AM #4
{"url":"http://mathhelpforum.com/algebra/177799-way-solve-radical-equation.html","timestamp":"2014-04-19T05:59:48Z","content_type":null,"content_length":"40200","record_id":"<urn:uuid:08932eaf-2214-4a0b-aca3-ae4bc091a11b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
The Glass-Bottom Blog If you haven't read it yet, I recommend Batterman's article on the philosophical connection between emergent phenomena and singularities. It is nice to have philosophers taking the renormalization-group idea seriously, as this idea has had an enormous impact on how physics is done and interpreted by physicists -- at least by theorists -- but hasn't made it to the pop physics books or the undergraduate curriculum. Batterman correctly observes that physicists understand emergent phenomena in terms of the renormalization group, that the renormalization group concept needs limits (like that of infinite system size) to be made precise, and that the limits lead to singularities; he goes on to make what I think are some misleading statements about the interpretation of singularities. In this post I'll try to run through the usual argument and explain how I think the singularities ought to be I understand emergent phenomena in terms of the following analogy. Suppose you drop a ball onto a hilly landscape with friction, and ask where it will end up a very long time later. The answer is evidently one of the equilibrium points, i.e., a summit, a saddle point, or (most likely) a valley. Two further points to be made here: (1) It does not matter on the hillside the ball started out; it'll roll to the bottom of the hill. In other words, very different initial conditions often lead to the same long-time behavior. (2) It matters very much which side of the summit the ball started out on; small differences in initial conditions can lead to very different long-time behavior. So what constitutes an "explanation" of the properties of the ball (say its response to being poked) a long time after its release? One possible answer is that, because mechanics is deterministic, once you've described the initial position and velocity you've "explained" everything about the long-time behavior. However, this is unsatisfactory because point (1) implies that most of this "explanation" would be irrelevant, and point (2) implies that the inevitable fuzziness of one's knowledge of initial conditions could lead to radically indeterminate answers. A better answer would be that the explanation naturally divides into two parts: (a) a description of the properties (curvature etc.) of the equilibrium points, and (b) the (generally intractable) question of which basin-of-attraction the ball started out in. In particular, part (a) on its own suffices to classify all possible long-time behaviors; it reduces a very large number of questions (what does the ball smell like? at what speed would it oscillate or roll off if gently poked?) to a single question -- where is it? (Approximate position typically implies exact position in the long-time limit, except if there are flat valleys.) "Emergent" (or "universal") phenomena are descriptions of equilibrium points, i.e., answers to part (a) of the question. The renormalization group concept is the notion that the large-scale behavior of a many-body system is like the long-time behavior of a ball in a frictional landscape, in the sense that it is governed by certain "fixed points," which can be classified, and that theories of these fixed points suffice to describe the large-scale properties of anything. So, for instance, there are three states of matter rather than infinitely many. The analogue of time is the length-scale on which you investigate the properties of the system -- as you go from a description in terms of interacting atoms to one in terms of interacting blobs and so on -- and the analogue of the "loss of information" via friction is the fact that you're averaging over larger and larger agglomerations of stuff. (All of this is quite closely related to the central limit theorem.) The role of infinite limits in the former case is obvious: if you start the ball very close to the top of the hill (where, let's say, the slope is vanishingly small), it'll take a very long time to roll off. So the fixed-point idea only really works if you wait infinitely long. However, it's also obvious that if you wait a really really long time and the ball hasn't reached its equilibrium, this is because it is near another equilibrium; so the equilibrium description becomes arbitrarily good at arbitrarily long times. (This is of course just the usual real-analysis way of talking about infinities.) The infinite-system-size limit is precisely analogous: while it only strictly works in the infinite-size limit, this "infinity" is not a pathology but is to be interpreted in the usual finitist way -- given epsilon > 0 etc. Epsilon-delta statements are true regardless of how far the series is from convergence, but they grow increasingly vacuous and useless as epsilon increases; something similar is true with dynamical systems and the renormalization group. I should explain what this has to do with fractals, by the way. In the case of the ball, a fixed point is defined as a configuration that is invariant under the equations of motion; in the case of the many-body system, a fixed point is a configuration that is invariant under a change of scale, i.e., a fractal. A continuum object is, of course, a trivial kind of fractal; you can't see the graininess of it without a microscope, and it doesn't seem to have any other scale than the size of its container. Systems near phase transitions are sometimes nontrivial fractals -- e.g., helium at the superfluid transition is a fractal network of droplets of superfluid in a bath of normal fluid, or vice versa. Phase transition points, btw, correspond to ridges; if you move slightly away from them, you "flow" into one phase or the other. The association between unstable equilibria and nontrivial fractals is not an accident. Any departure from the nontrivial fractal (say in the helium case) leads to either superfluid or normal fluid preponderating at large scales; if you average on a sufficiently large scale the density of droplets of the minority phase goes to zero, and you end up in one trivial phase or the other.
{"url":"http://glassbottomblog.blogspot.com/2010/09/emergence-and-limits.html","timestamp":"2014-04-17T04:26:16Z","content_type":null,"content_length":"93998","record_id":"<urn:uuid:cc695632-eb03-4cde-91a2-602f2decf69b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Chaotic spatial patterns described by the extended Fisher-Kolmogorov equation. (English) Zbl 0862.34012 The authors study spatial patterns of the bounded solutions of $\gamma {u}^{"\text{'}\text{'}\text{'}}={u}^{\text{'}\text{'}}+u-{u}^{3}\phantom{\rule{4.pt}{0ex}}\text{on}\phantom{\rule{4.pt}{0ex}}ℝ·\phantom{\rule{2.em}{0ex}}\left(*\right)$ The solutions of (*) exhibit a rich structure which depends crucially on the parameter $\gamma$. In fact one can have chaotic behavior in some sense which is defined in the paper (arbitrarily many maxima and minima). The proofs do not rely on abstract theorems but on clever constructive methods. 34A34 Nonlinear ODE and systems, general 34A45 Theoretical approximation of solutions of ODE 92D25 Population dynamics (general) 35Q80 Appl. of PDE in areas other than physics (MSC2000)
{"url":"http://zbmath.org/?q=an:0862.34012","timestamp":"2014-04-21T04:44:26Z","content_type":null,"content_length":"21501","record_id":"<urn:uuid:49ae4bd5-d512-469b-8f56-e02e202a6032>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
Basis for space which contains the image of a matrix October 7th 2010, 11:14 PM #1 Senior Member Feb 2008 Basis for space which contains the image of a matrix Find a basis for $\mathbb{R}^3$ which contains a basis of $im(A)$, where $A=\begin{pmatrix}1&2&3&4\\ 2&-4&6&-2\\-1 & 2& -3 & 1 \end{pmatrix}$ After row reduction $U\sim\begin{pmatrix}1&0&3&1.5\\ 0&1&0&1.25\\0 & 0& 0 & 0 \end{pmatrix}$ So I've found the basis for the image of A $\textrm{basis(im(A))}=\left\{ \begin{pmatrix}1\\2\\-1\end{pmatrix}, \begin{pmatrix}2\\-4\\2\end{pmatrix}\right\}$ How would I find the basis for $\mathbb{R}^3$ which contains the image of A EDIT: Ok so I've just seen the way my textbook does it, and they row reduce the matrix A alongside the identity matrix and then choose the columns from A and the identity matrix (which are the pivot columns in the reduced form). Why does this method work? $(A|I)=\begin{pmatrix}1&2&3&4&1&0&0\\ 2&-4&6&-2&0&1&0\\-1 & 2& -3 & 1&0&0&1 \end{pmatrix}$ Last edited by acevipa; October 8th 2010 at 03:23 AM. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/advanced-algebra/158785-basis-space-contains-image-matrix.html","timestamp":"2014-04-19T15:02:39Z","content_type":null,"content_length":"31676","record_id":"<urn:uuid:4410d851-11f3-474c-816c-9583c4672b1a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: 271 mL of dilute stearic acid solution was used to make a circular monolayer of stearic acid molecules on the surface of water. Calculate the mole of stearic acid in 271 µL of a 7.00 x 10 ^(-4) M stearic acid solution. • 10 months ago • 10 months ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/51b7737be4b00fef3d0a98ea","timestamp":"2014-04-18T18:51:32Z","content_type":null,"content_length":"53251","record_id":"<urn:uuid:edda4558-eeff-41aa-9b27-b6e05a398835>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
Bayesian Hierarchical Poisson Regression Model for Overdispersed Count Data This example uses the RANDOM statement in MCMC procedure to fit a Bayesian hierarchical Poisson regression model to overdispersed count data. The RANDOM statement, available in SAS/STAT 9.3 and later, provides a convenient way to specify random effects with substantionally improved performance. Overdispersion occurs when count data appear more dispersed than expected under a reference model. Overdispersion can be caused by positive correlation among the observations, an incorrect model, an incorrect distributional specification, or incorrect variance functions. The example displays how Bayesian hierarchical Poisson regression models are effective in capturing overdispersion and providing a better fit. Count data frequently display overdispersion (more variation than expected from a standard parametric model). Breslow (1984) discusses these types of models and suggests several different ways to model them. Hierarchical Poisson models have been found effective in capturing the overdispersion in data sets with extra Poisson variation. Hierarchical Poisson regression models are expressed as Poisson models with a log link and a normal variance on the mean parameter. More formally, a hierarchical Poisson regression model is written Consider data collected by Margolin, Kaplan, and Zeiger (1981) and studied in Breslow (1984) from an Ames salmonella mutagenicity assay. Table 1 reports the number of revertant colonies of TA98 salmonella (SALM) tested at six dose levels of quinoline (DOSE) with three replicate plates. Table 1 Salmonella Data Set Doses of Quinoline ( The following statements read the data into SAS and create the ASSAY data set. The variable data assay; input dose salm @@; l_dose10 = log(dose+10); plate = _n_; The data appear to be a likely candidate for a Bayesian hierarchical Poisson regression model with replicates. However, you usually begin with a standard analysis of a Poisson regression and evaluate any evidence for overdispersion. Bayesian Poisson Regression Model Suppose you want to fit a Bayesian Poisson regression model for the frequency of revertant colonies of TA98 Salmonella with density for the The likelihood function for each of the revertant colonies of salmonella and corresponding covariates is Suppose the following prior distributions are placed on the parameters, where The diffuse Using Bayes’ theorem, the likelihood function and prior distributions determine the posterior distribution of The goodness-of-fit Pearson chi-square statistic McCullagh and Nelder (1989) is calculated to assess model fit: First, let 1. If there is no overdispersion, the Pearson statistic approximately equals the number of observations in the data set minus the number of parameters in the model. The following SAS statements use the likelihood function and diffuse prior distributions to fit the Bayesian Poisson regression model and calculate the Pearson chi-square statistic: ods graphics on; proc mcmc data=assay outpost=postout thin=5 seed=1181 nbi=10000 nmc=10000 propcov=quanew monitor=(_parms_ Pearson) dic; parms beta1 2 beta2 0 beta3 0; prior beta: ~ normal(0,var=1000); lambda = exp(beta1 + beta2*dose + beta3*l_dose10); model salm ~ poisson(lambda); if plate eq 1 then Pearson = 0; Pearson = Pearson + ((salm - lambda)**2/lambda); ods graphics off; The PROC MCMC statement invokes the procedure and specifies the input data set. The OUTPOST= option specifies the output data set for posterior samples of parameters. The THIN= option controls the thinning of the Markov chain and specifies that one of every 5 samples is kept. Thinning is often used to reduce the correlations among posterior sample draws.The SEED= option specifies a seed for the random number generator (the seed guarantees the reproducibility of the random stream). The NBI= option specifies the number of burn-in iterations. The NMC= option specifies the number of posterior simulation iterations. The PROPCOV=QUANEW option uses the estimated inverse Hessian matrix as the initial proposal covariance matrix. The MONITOR=(_PARMS_ PEARSON) option outputs analysis on the _PARMS_ (which is shorthand for all model parameters) and the Pearson chi-square statistic. The DIC option specifies that the deviance information criterion (DIC) is requested. The PARMS statement specifies the parameters in the model and assigns initial values to each of them. The PRIOR statements specify priors for all the parameters. The notation beta: in the PRIOR statement is shorthand for all variables that start with beta. The shorthand notation is not necessary, but it keeps your code succinct. The LAMBDA assignment statement calculates 1. The MODEL statement specifies the Poisson function for SALM. The next two lines of statements use SALM and LAMBDA (which are the expected value and the variance of the Poisson distribution, repectively) to calculate the Pearson chi-square statistic. The IF statement sets the value of PEARSON to 0 at the top of the data set (that is, when the value of the data set variable PLATE is 1). As PROC MCMC cycles through the data set at each iteration, the procedure cumulatively adds the Pearson chi-square statistic over each value of SALM. By the end of the data set, you obtain the Pearson chi-square statistic, as defined in Equation 3. Figure 1 displays diagnostic plots to assess whether the Markov chains have converged. The trace plot in Figure 1 indicates that the chain appears to have reached a stationary distribution. It also has good mixing and is dense. The autocorrelation plot indicates low autocorrelation and efficient sampling. Finally, the kernel density plot shows close to a unimodal shape of posterior marginal distribution for PROC MCMC produces formal diagnostic tests by default. They are omitted here since informal checks on the chains, autocorrelation, and posterior density plots display stabilization and convergence. Figure 2 reports summary and interval statistics for each parameter’s posterior distribution. PROC MCMC calculates the sampled value of the Pearson chi-square statistic at each iteration and produces its corresponding posterior summary statistics. The MCMC Procedure 2000 2.1768 0.2110 2.0391 2.1834 2.3142 2000 -0.00101 0.000233 -0.00117 -0.00101 -0.00085 2000 0.3184 0.0549 0.2817 0.3191 0.3549 2000 49.3974 3.4673 46.8851 48.4551 50.7996 0.050 1.7421 2.5790 1.7467 2.5798 0.050 -0.00148 -0.00056 -0.00146 -0.00054 0.050 0.2136 0.4288 0.2169 0.4305 0.050 45.5715 59.2921 45.3427 56.2309 Figure 3 reports the calculated DIC (Spiegelhalter et al.; 2002) for the Bayesian Poisson regression model. The DIC is a model assessment tool and a Bayesian alternative to Akaike’s or Bayesian information criterion. The DIC can be applied to non-nested models and models that have data which are not independent an didentically distributed. When models with different DIC values are compared, a smaller DIC indicates a better fit to the data set. The DIC for this model is 141.882. Dbar (posterior mean of deviance) 139.070 Dmean (deviance evaluated at posterior mean) 136.258 pD (effective number of parameters) 2.812 DIC (smaller is better) 141.882 Bayesian Hierarchical Poisson Regression Model In overdispersed Poisson regression, the parameter estimates do not vary much from the Poisson model, but the estimated variance is inflated. Draper (1996) considers Bayesian hierarchical Poisson regression models for this type of data with density for the 2, but now there are additional parameters in the model. You add random effects Suppose the priors are placed on the regression and variance parameters as follows: The mean and variance, The following SAS statements use the likelihood function and prior distributions to fit the Bayesian Poisson hierarchical regression model and calculate the Pearson chi-square statistic. ods graphics on; proc mcmc data=assay outpost=postout thin=5 seed=248601 nbi=10000 nmc=100000 monitor=(beta1-beta3 Pearson) dic; parms beta1 2 beta2 0.01 beta3 0.5; parms s2 1; prior beta: ~ normal(0,var=1000); prior s2 ~ igamma(0.01, s=0.01); w = beta1 + beta2*dose + beta3*l_dose10; random gamma~ normal(0,var=s2) subject=plate; lambda = exp(w+gamma); model salm ~ poisson(lambda); mu = exp(w + s2/2); sigma2 = mu + (exp(s2) - 1)*(mu**2); if plate eq 1 then Pearson = 0; Pearson = Pearson + ((salm - mu)**2/(sigma2)); ods graphics off; The first PARMS statement places all regression parameters in a single block and assigns them initial values. The second PARMS statement places the variance parameter in a separate block and assigns it an initial value of 1. The PRIOR statement on the The W assignment statement calculates The RANDOM statement specifies the random effect, The LAMBDA assignment statement calculates 4. The next four lines of statements use SALM and the sampled values of 3. The moments are evaluated in the MU and SIGMA2 assignment variables according to Equation 5 and 6, respectively. The diagnostic plot for Figure 4. It displays the desired convergence, low autocorrelation, and smooth unimodal marginal posterior density for the parameter. The remaining diagnostic plots (not shown here) similarly indicate good convergence in the other parameters. Figure 5 reports summary and interval statistics for the parameters and for the Pearson chi-square statistic. The fit statistic in the Bayesian hierarchical Poisson regression model is greatly reduced with a value of 19.4658, which suggests a much better fit compared to its value of 49.3974 in the Poisson regression model. The value of the fit statistic is now much closer to the desired number of observations minus the number of parameters. Compared to the results in Figure 2, the parameter estimates for intercept and treatment change a small amount, their standard errors are inflated, and the confidence intervals are wider. Thus the treatment effect of quinoline does not decrease with the Bayesian hierarchical Poisson regression model. The MCMC Procedure 20000 2.1710 0.3635 1.9361 2.1770 2.4082 20000 -0.00097 0.000442 -0.00125 -0.00097 -0.00069 20000 0.3117 0.0994 0.2462 0.3105 0.3754 20000 19.4658 7.1714 14.2401 18.6810 23.8964 0.050 1.4646 2.8591 1.4874 2.8794 0.050 -0.00185 -0.00012 -0.00182 -0.00010 0.050 0.1235 0.5078 0.1155 0.4976 0.050 7.8557 35.4789 6.7314 33.5488 Figure 3 reports the computed DIC for the Bayesian hierarchical Poisson regression model. The DIC for the hierarchical model is 123.870 and is smaller than the DIC for the Poisson regression model shown in Figure 3. A smaller value of DIC suggests a better fit; you see that the hierarchical model provides a better fit to the data. Dbar (posterior mean of deviance) 110.899 Dmean (deviance evaluated at posterior mean) 97.929 pD (effective number of parameters) 12.971 DIC (smaller is better) 123.870 In order to examine the convergence diagnostics of some of the random-effects parameters, you can use the MONITOR= option in the RANDOM statement. Then PROC MCMC produces the visual display of the posterior samples. If you forget to include the MONITOR= option in the initial program that you run, you can use gamma_10, which is also shown in Figure 7: ods graphics on; %tadplot(data=postout, var=gamma_10); ods graphics off; Figure 7 indicates good mixing for the random-effects parameter gammma_10. The following statement shows how to create convergence diagnostics for all the random-effects parameters: ods graphics on; %tadplot(data=postout, var=gamma_:); ods graphics off; gamma_: is the shorthand for all variables that start with gamma_. You can also use the ods graphics on; %CATER(data=postout, var=gamma_:); ods graphics off; Figure 8 is a caterpillar plot of the random-effects parameters gamma_1–gamma_18. The dots on the caterpillar plots are the posterior mean estimates for each of the parameters, and the vertical line in the middle is the overall mean, which is very close to 0. Varying gamma indicates nonconstant dispersion in the Poisson model.
{"url":"https://support.sas.com/rnd/app/examples/stat/BayesSalm93/new_example/index.html","timestamp":"2014-04-17T07:09:44Z","content_type":null,"content_length":"100667","record_id":"<urn:uuid:05a97ab6-650c-41b6-96f2-b2d973ef490c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US5218625 - Method for the automatic determination of the exposure time of a radiographic film and system of implementation thereof 1. Field of the Invention The invention relates to radiology systems that have a radiological film and are used to examine objects and, more particularly in such systems, it relates to a method that enables the estimation, while the object is being examined, of the "lumination" or "luminous exposure" (i.e. the quantity of light received multiplied by the exposure time) to which the radiological film is subjected, and enables the stopping of the exposure when the film has reached a given level of blackening or optical density. 2. Description of the Prior Art A radiology system essentially comprises an X-ray tube and a receiver of such radiation, between which the object to be examined, for example a part of a patient's body, is interposed. The image receiver which is, for example, a film/screen couple, gives an image of the object after an appropriate exposure time and the development of the film. For the image of the object to be used as efficiently as possible, the different dots that constitute it should have sufficient contrast with respect to one another, namely, the blackening of the radiographic film should be appropriate from one X-ray image to the next one, despite the possible differences in opacity of the radiographed object. The blackening of the film is related to the quantity of energy of the radiation incident to the film/screen couple, namely, the product of the intensity of the radiation to which the radiographic film is subjected, or "film" dose rate, by the time during which the film is exposed to this radiation. Consequently, to obtain a constant blackening of the film from one radiography to another, there is a known way of making measurements, during the examination, of the incident energy on the film by means of a detection cell, generally placed before the receiver, that is sensitive to X-radiation and gives a current proportional to the "film" dose rate. This current is integrated, from the start of the exposure, in an integrator circuit that gives an increasing value during the exposure. This increasing value is compared, during the exposure time, with a fixed reference value, established beforehand as a function of the characteristics of the film. The end of the exposure time is determined by the instant at which the comparison indicates that the value representing the incident energy on the film is equal to the reference value. Should the radiographic film be directly subjected to X-radiation, and should the variation in the exposure times from one examination to another be small enough, a constant blackening of the film is obtained from one exposure to the next one, independently of the duration of the exposure time S, provided that the product of the exposure time S by the dose rate F is constant, i.e. the value resulting from the integration should remain constant. This is true only if the characteristics of the film obey the law of reciprocity which indicates that the optical density of the film is proportional to the product F independent of the quality of the incident X-ray beam. This law of reciprocity is no longer met when the variation in the exposure times is great. Besides, should the radiographic film be associated with an intensifying screen, the blackening of the film depends on the quality of the spectrum. For, the response of the screen depends on the energy distribution of the spectrum of the radiation received, which means that it is sensitive to the hardening of the spectrum and to the change in voltage of the X-ray tube. Finally, there are certain applications wherein it is costly for the detection cell to be placed before the film (for example in mammography) for the radiation energy is such that the detection cell would then be visible on the film. In this case, it is placed behind the image receiver but this creates an additional difficulty for the signal perceived by the detector cell is the one that has not contributed to the blackening of the film. The result thereof is that the measurement made by the detection cell does not generally represent the incident lumination on the radiographic film. The deviation from the law of reciprocity, which varies according to the type of film, represents the relative variation of the lumination needed to obtain a constant optical density when the exposure time S varies while the spectrum of the X-radiation is constant. This is expressed by the fact that, to obtain a same optical density of the film, the lumination should be, for example 1 for an exposure time S=0.1 second, 1.3 for S =1 second and 2 for S=4 seconds. This deviation from the law of reciprocity is due to the phenomenon known as the Schwarzschild effect. This effect is described notably in the work by Pierre GLAFKIDES, CHIMIE ET PHYSIQUE PHOTOGRAPHIQUES, 4th edition, pages 234 to 238, PUBLICATIONS PHOTO-CINEMA Paul MONTEL. To account for this deviation from the law of reciprocity, various approaches have been proposed, and one of them has been described in the French patent No. 2 584 504. This patent proposes the comparison of the integrated value of the signal given by the detection cell with a reference value that varies during the exposure according to a determined relationship. More precisely, from the start of each exposure period, an additional value is added to the difference between the values of the integrated signal and of the reference value. This additional value increases as a function of time according to a previously determined relationship, for example an exponential relationship. This previously determined relationship, whether it is exponential or otherwise, takes account of the deviation from the law of reciprocity only imperfectly. In particular, it does not take account of the variations in the luminous intensity effectively received by the film. Furthermore, this correction does not take account of the effects of other phenomena such as the hardening of the X-radiation due to the thickness of the object crossed and the modification of the spectrum due to the voltage of the X-ray tube. Furthermore, in this method, the detection cell is placed before the image receiver. An object of the present invention, therefore, is to implement a method for the automatic determination, during the time of exposure, of the instant when the exposure is stopped, taking account of the different effects that come into play, notably the variations in the tube current, the hardening of the spectrum due to the thickness of the object crossed, the modification of the spectrum due to the voltage of the tube and, when an intensifier screen is present, the absorption response of said screen. The invention relates to a method for automatically determining the exposure time of a radiographic film in a radiology system designed, to examine an object that includes an X-ray tube having a supply voltage V which may assume various values, V.sub.m, with continuous or discrete variation. The X-ray tube emits an X-ray beam in the form of pulses of variable duration S towards the object to be examined. A receiver detects the X-radiation that has crossed the object, to form an image of said object. The receiver is constituted by at least one intensifier screen and a film sensitive to the light emitted by this screen. A cell detects the X-rays that have crossed the object, to be examined and is placed behind the image receiver to enable the conversion of a physical variable, characterizing the X-ray beam, into a measurement signal L. An integrator circuit integrates the measurement signal L for, the duration S of the exposure and produces a signal M a device computes the yield D given by determining the ratio of M to the product I the anode current I of the tube by the duration S of the exposure. The method includes the following operations. (a) A first calibration of the radiology system by means of objects with a thickness E.sub.p by using a receiver without the intensifier screen or screens so as to determine the function: D.sub.se =f' (V.sub.m, E.sub.p) (4) and the inverse function: E.sub.p =g' (V.sub.m, D.sub.se) (5) (b) A second calibration of the radiology system by means of the objects with a thickness E.sub.p by using a receiver with intensifier screen so as to determine the function: D.sub.c =f" (V.sub.m, E.sub.p) (6) the inverse function E.sub.p =g" (V.sub.m, D.sub.c) (7) and the function D.sub.f =f' (V.sub.m, E.sub.p)-f"(V.sub.m,E.sub.p) (8) (c) A third calibration to determine the reference lumination L.sub.ref that must be received by the film, under fixed reference conditions, to achieve the blackening (or optical density) chosen as a reference value by the practitioner. When these calibration operations have been performed, it is possible to go on to the radiological examination of the object which consists of the following steps of (or operations for) : (e1) positioning the object to be radiographed; (e2) triggering the start of the exposure by the practitioner; (e3) measuring the yield D.sub.c1 at a certain time t' after the start of the exposure; (e4) calculating the equivalent thickness E.sub.1 by the equation (7); (e5) calculating the yield D.sub.f1 at the film for the thickness E.sub.1 by the equation (8); (e6) calculating the lumination L.sub.f received by the film according to the equation: L.sub.f =L.sub.am +D.sub.f1 (e7) calculating the lumination L.sub.ra remaining to be acquired to obtain the blackening (or optical density) chosen by the equation: L.sub.ra =L.sub.ref -L.sub.f ( 10) (e8) calculating the estimated mA mAs.sub.r to obtain the blackening (or optical density) chosen by the equation : mAs.sub.r =L.sub.ra /D.sub.f1 ( 11) (e9) measuring the mA the operation (e3); (e10) stopping the exposure when the mA e9) are equal to or greater than mAs.sub.r, --or returning to the step (e3) when the mA To take account of the time t.sub.o of the operations (e4) to (e8), the step (e8) further includes the calculating of the mA (mAs.sub.c) during the steps (e4) to (e8) defined by the equation: mAs.sub.c =I which makes it possible to determine the real value of the mA remaining to be acquired (mAs.sub.ra) by the equation: mAs.sub.ra =mAs.sub.r -mAs.sub.c ( 12) In a first variant, the step (e10) further includes a step of computing the remaining exposure time, such that ##EQU1## so as to end the exposure in an open loop if t.sub.rc is smaller than a value t" corresponding to the interval of time between two successive operations (e3). In a second variant, the steps (e3) to (e10) are replaced by: a task of estimation (T.E.) of the mA constituted by the steps (e4) to (e8) and a step of converting the mA CE.sub.target =mAs.sub.ra a task of interrupting (T.C.) the exposure which consists in decrementing the target value CE.sub.target by the signals received by the cell (12) and in terminating the exposure when the decremented value becomes smaller than or equal to a value Val.sub.o (Val.sub.o is equal to zero for example). The task of estimation (T.E.) is renewed periodically during the exposure at the instants t.sub.1, t.sub.2. . . t.sub.n separated by a period that is at least equal to the computation time t.sub.c. In another variant, the step (e10) is replaced by the step of computing the remaining exposure time t.sub.rc so as to end the exposure in an open loop. To take account of the effect of non-reciprocity of the film, the steps (e6) and (e8) are modified to introduce a coefficient CNRD (film dose rate) of non-reciprocity of the film into the equations (9) and (11) which become: ##EQU2## These are formulae in which CNRD (film dose rate) is the coefficient of non-reciprocity indexed as a function of the film dose rate of the receiver such that: film dose rate=D.sub.f1 The coefficient CNRD (film dose rate) is obtained by performing the following steps of : measuring the coefficients of non-reciprocity CNRT (t.sub.i) of the film/screen couple as a function of the exposure time (t.sub.i), measuring for each exposure time (t.sub.i) the film dose rate d.sub.i, determining the function of modelization of the coefficients CNRD (d.sub.i) such that: CNRD (d)=A'.sub.O +A'.sub.1 log 1/d+A'.sub.2 [log 1/d].sup.2( 20) which makes it possible to determine the coefficient corresponding to a given film dose rate. The film dose rate d.sub.i is given, for example, by the formula: ##EQU3## The reference lumination L.sub.ref is determined by a calibration method that includes the following steps of (or operations for) : taking a shot under determined radiological conditions for a reference optical density DO.sub.refo, a thickness standard E.sub.o, a supply voltage V.sub.o, an exposure time t.sub.o a value of the product I.sub.o measuring the yield D.sub.o; calculating the equivalent thickness E.sub.po by the formula: E.sub.po =g" (V.sub.o, D.sub.o) (7) calculating the yield D.sub.fo on the film by the formula: D.sub.fo =f'(V.sub.o, E.sub.o)-f" (V.sub.o, E.sub.o) (8) calculating the luminance L.sub.film on the film by the formula: ##EQU4## calculating the illumination step Ech.sub.ref corresponding to the reference optical density DO.sub.refo by means of the sensitometric curve; measuring the optical density DO.sub.m of the shot obtained and calculating the illumination step Ech.sub.m by means of the sensitometric curve; calculating the reference lumination L.sub.ref by the formula : ##EQU5## The coefficients of non-reciprocity CNRD(d) as a function of the film dose rate are obtained by r performing the following measuring the coefficients of non-reciprocity CNRT (t.sub.i) of the film/screen couple as a function of the exposure time (t.sub.i), measuring, for each exposure time (t.sub.i), the film dose rate d.sub.i, determining the function of modelization of the coefficients CNRD (di) such that: CNRD (d)=A'.sub.0 +A'.sub.1 log 1/d+A'.sub.2 [log 1/d].sup.2( 20) which makes it possible to determine the coefficient corresponding to a given film dose rate. The coefficients of non-reciprocity CNRT (t.sub.i) as a function of the exposure time (t.sub.i) may be obtained in different ways, for example by performing the following steps of : (a1) modifying the tube heating current so as to obtain different values of said current, (a2) reading the values M (t.sub.i) given by the integrator circuit for different exposure times so as to obtain an optical density DO.sub.1 of the film (a3) calculating the ratio ##EQU6## which gives the coefficient CNRT (t.sub.i) with M (t.sub.ref) the value M (t.sub.i) for t.sub.i =t.sub.ref The coefficients CNRT (t.sub.i) may be modelized by the function : CNRT (t)=A.sub.0 +A.sub.1 log t+A.sub.2 [log t].sup.2 ( 18) Should the image receiver of the radiology apparatus be of the film type, in which the detection cell is placed before or after the image receiver, the operations (a),(b),(c), and steps (el) to (e10) described here above are reduced to the following steps of : (a') calibrating the radiology system so as to determine the analytical model : D'.sub.f =f"' (V.sub.m, E.sub.p) (30) (c') with the film as an image receiver, determining by calibration the reference lumination L'.sub.ref that should be received by the film, under fixed reference conditions, to achieve the blackening (or optical density) chosen as a reference value by the practitioner; (e'1) positioning the object to be radiographed; (e'2) triggering the start of the exposure by the practitioner; (e'3) measuring the yield D'.sub.f1 a certain time t' after the start of the exposure; (e'6) calculating the lumination L'.sub.f received by the film by the equation : L'.sub.f =L.sub.am +D'.sub.f1 rate) (9") (e'7) calculating the lumination L'.sub.ra remaining to be acquired to obtain the blackening (or optical density) chosen by the equation : L'.sub.ra =L'.sub.ref -L'.sub.f ( 10") (e'8) calculating the estimated mA mAs'.sub.ra to obtain the blackening (or optical density) by the equation: mAs'.sub.ra =L'.sub.ra /D'.sub.f1 the other steps (e9) and (e10) that follow being unchanged. Other objects, features and advantages of the present invention shall appear from the following description of the method according to the invention and from a particular exemplary embodiment of the radiology system used to implement it, said description being made with reference to the appended drawings, of which : FIG. 1 is a block diagram of a radiology system enabling the implementation of the method according to the invention, FIG. 2 is a graph showing curves obtained by implementing a method of calibration used in the method according to the invention; FIG. 3 is a graph showing a curve of variation of the coefficients of non-reciprocity CNRT as a function FIG. 4 is a graph showing curves of variation of the coefficients of non-reciprocity CNRD as a function of the inverse of the film dose rate d, FIG. 5 is a graph showing curves of variation of the optical density of a radiographic film as a function of the lumination, and FIG. 6 is a block diagram of a radiology system, similar to the one of FIG. 1 but in which the detection cell is incorporated in the receiver and is subject to light emitted by the screen. A radiology system to which the method, according to the invention, for the automatic determination of the exposure time of an object 13 to be radiographed, can be applied, comprises an X-ray source 11 such as an X-ray tube that gives an X-ray beam illuminating this object 13 and an image receiver 17 such as a film/intensifier screen couple that is positioned so as receive the X-rays having crossed said object and that gives an image of the object 13 after an appropriate exposure time S and development of the film. To implement the method of the invention, the system further includes a detection cell 12, that is placed behind the image receiver 17 in the case of a radiographic film with an intensifier screen. This cell may be placed in front of the receiver in the case of a film without an intensifier screen. The detection cell 12 enables the conversion of a physical variable characteristic of the X-radiation that has crossed the object and the image receiver, such as the KERMA or the energy fluence into a measurement signal L, for example of the electrical type. The signal L, produced by the detection cell 12, is applied to a circuit 16 that carries out an integration of the electrical signal during the duration S of the exposure. The signal M that results from the integration is a measurement of the radiation that has crossed the object 13 during the duration S of the exposure. The X-radiation source 11 is associated with a power supply 15 that produces a variable high voltage V.sub.m for the X-ray tube and includes an instrument for the measurement of the anode current I of said tube. In order to modify the duration of the exposure time S, the power supply device 15 and the X-ray tube include means to start the X-ray emission at a precise,. instant and to stop it after a variable time S. Time s is determined, in accordance with the invention, as a function of the signal M produced by the circuit 16 and of the values of I, S and V.sub.m and, more precisely, of the ratio M/I is computed by the device 18. The values of the yield D are processed by a computer or microprocessor 19 in accordance with the method of the invention so as to give an end-of-exposure The first operation of the method consists of performing a calibration of the radiology system of FIG. 1 that leads to a function of estimation of the lumination received by the radiographic film. This calibration and the function of estimation are described in the French patent application filed on the same date and entitled: METHOD FOR THE ESTIMATION AND CALIBRATION OF THE LUMINATION RECEIVED BY A RADIOGRAPHIC FILM corresponding to U.S. Pat. application Ser. No. 07/726,204, filed Jul. 5, 1991. For an understanding of the remaining part of the description, it shall be recalled that the method for estimating the lumination received by a radiographic film is based on calibration operations that result in the definition of a function that is proportional to the dose rate of photons on to the film, called the film dose rate, and on a calibration that can be used to establish the relationship between the film dose rate function and the lumination received by the film under fixed reference conditions and results in a given blackening of the film. This latter calibration shall be described in fuller detail hereinafter in the description. The calibrations that enable a definition of a film dose rate function are derived from a calibration method described in U.S. patent application Ser. No. 07/535 520 filed on Jun. 8, 1990 and entitled: METHOD FOR THE CALIBRATION OF A RADIOLOGICAL SYSTEM AND FOR THE MEASUREMENT OF THE EQUIVALENT THICKNESS OF AN OBJECT. This method consists of measuring the yield D of the cell for each standard at the chosen supply voltages V.sub.m. More precisely, with a first thickness standard E.sub.1, a measurement of yield D.sub.1m is made for each value V.sub.m constituting a determined set. These values D.sub.1m as a function of the voltage V.sub.m may be entered in a graph to obtain the points 21' of FIG. 2. The measurements of yield D are made for another thickness standard E.sub.2 and the values D.sub.2m, corresponding to the points 22' of FIG. 2, are obtained, and the operation continues thus successively to obtain the other series of points 23', 24' and 25' corresponding respectively to the yields D.sub.3m D.sub.4m and D.sub.5m and to the thicknesses E.sub.3, E.sub.4 and E.sub.5. It must be noted that, in FIG. 2, the yields D.sub.pm have been entered as logarithmic y-axis values while the supply voltages have been entered as x-axis values from 20 kilovolts to 44 kilovolts. These series of points 21' to 25' are used to define the parameters of an analytical model that describes the behavior of the yield D as a function of the parameters V.sub.m and E.sub.p for a given configuration of the radiological system. This analytical model shall be written as: D=f (V.sub.m, E.sub.p) (1) The parameters of the analytical model may be adjusted by means of standard estimation tools such as the minimal mean square error method. The curves 21 to 25 represent the value of the yield D given by the analytical model represented by the expression : D=f (V.sub.m,E.sub.p)=exp [f.sub.1 (V.sub.m)+E.sub.p (V.sub.m)] (2) in which f.sub.1 (V.sub.m) and f.sub.2 (V.sub.m) are second-degree polynomials, the expression of which is given by: f.sub.1 (V.sub.m)=A.sub.o +A.sub.1 V.sub.m +A.sub.2 V.sup.2.sub.m f.sub.2 (V.sub.m)=B.sub.o +B.sub.1 V.sub.m +B.sub.2 V.sup.2.sub.m The inverse function of that expressed by the formula (2) enables E.sub.p to be computed, if D and V.sub.m are known, by using the following formula (3): ##EQU7## it being known that f.sub.2 (V.sub.m) cannot get cancelled for the current values of V.sub.m because the yield D always depends on the thickness E.sub.p at the voltages V.sub.m considered. In other words, to a couple of values (E.sub.p, V.sub.m) there corresponds a measurement of yield D, which makes it possible to determine E.sub.p as a function of V.sub.m and D. During a radiological examination, a measurement of yield D, which is done with a given supply voltage V.sub.m, makes it possible to determine an equivalent thickness expressed in the units used for E.sub.p. This calibration is performed twice with configurations of the radiology system that differ as regards the receiver 17. The first of these calibration operations is done with the receiver 17 without an intensifier screen. By the equation 1, a function f'is determined, giving rise to yield values of the cell 12 referenced D.sub.se such that: D.sub.se =f' (V.sub.m, E.sub.p) (4) and the inverse function: E.sub.p =g' (V.sub.m, D.sub.se) (5) The second operation of the method consists of performing a second calibration provided with a receiver 17 including an intensifier screen and then a series of yield values D.sub.c is obtained and, as above, the function f" is determined such that: D.sub.c =f" (V.sub.m, E.sub.p) (6) and the inverse function E.sub.p =g" (V.sub.m, D.sub.c) (7) From the above two calibration operations, a function D.sub.f is deduced representing the yield on the film such that: D.sub.f =D.sub.se -D.sub.c that is D.sub.f =f'(V.sub.m, E.sub.p)-f"(V.sub.m,E.sub.p) (8) This function D.sub.f does not take account of the modification of the spectrum of the X-radiation due to the additional filtration between the intensifier screen and the detection cell 12 that comes, for example, from the output face of the cartridge containing the film/screen couple. To take account of it, E.sub.p in the equation (8) is replaced by (E.sub.p -sup.filter) where sup.filter is the thickness equivalent to the radiographed object corresponding to this filtration. This equivalent thickness is obtained by placing, for example, in the beam 14, an object equivalent to this filtration and by using the calibrated function determining the equivalent thickness g' or g" according to the configuration of the machine. Since the product D.sub.f absorbed in the intensifier screen during a duration t and for an anode current I, the quantity D.sub.f proportional to the dose rate of incident photons on the film and is expressed in the units of measurement of the signal of the detector cell 12. This law of proportionality is verified all the more efficiently as the number of light photons emitted by the intensifier screen is itself proportional to the energy absorbed. If the number of light photons emitted by the screen meets another relationship as a function of the energy absorbed, this other relationship must be applied to D.sub.f A final calibration consists of linking the above-described electrical functions to a value of the blackening of the film, namely to an optical density, that is to be obtained at the end of the exposure. This value is chosen by the practitioner as a function of the film/screen couple, the type of diagnosis, the part of the patient's body to be examined and his usual practices in examining radiographs. This choice makes it possible to determine the reference lumination, referenced L.sub.ref, namely the lumination that must be received by the film, under fixed reference conditions, to arrive at a degree of blackening such as this. The method used to determine L.sub.ref shall be described here below. These calibration operations are not performed at each radiological examination of an object or a patient, but from time to time to take account of the variations in the characteristics of the radiology system in the course of time, notably variations such as the ageing of the X-ray tube. The results of these operations are recorded in the memory of the microprocessor 19 in the form of functions represented by the equations 4 to 8. This means that the microprocessor 19 is capable of computing E.sub.p if it knows D.sub.c and can then compute D.sub.f. During the radiological examination of the patient, the method according to the invention further consists of the performance of the following main steps of (or operations for) : (e1) positioning the object or patient to be radiographed, (e2) triggering the start of the exposure by the practitioner, (e3) measuring the yield D.sub.c a certain time t' after the start of the exposure, (e4) calculating the equivalent thickness from the measurement of yield D.sub.c, (e5) calculating the yield D.sub.f at the film, (e6) estimating the lumination received by the film since the start of the exposure, (e7) calculating the lumination remaining to be acquired to obtain the chosen blackening, (e8) calculating the estimated mA the X-ray tube to obtain the chosen blackening, (e9) measuring the mA case may be since the start of the exposure or since the preceding measurement, (e10) stopping the X-radiation when the mAs.sub.mes are greater than or equal to the mA (e3). It must be noted that the term "lumination" is defined as the product of the quantity of light received, for example the illumination EC of the sensitive surface, by the duration of exposure. The step (e3) consists of measuring the integrated value D given by the device 18 at a certain time t' after the start of the exposure, it being known that the integrator circuit 16 has been reset at zero either, as the case may be, at the start of the exposure, or after the last measurement. The integration time t' corresponds, as the case may be, to the time that has elapsed since the start of the exposure or to the time that has elapsed since the last measurement. The step (e4) is performed by the microprocessor 19 from the first calibration of the radiology system as described here above: it is governed by the equation (7); a value E.sub.1 of the equivalent thickness is then obtained. It must be observed that, for the second iteration of the method and for the following ones, it is not necessary to perform the step (e4) to the extent that the estimation of the equivalent thickness has been sufficiently precise during the first iteration. The step (e5) consists of computing the yield of the film D.sub.f1 corresponding to the thickness E.sub.1 in using the function defined by the equation (8), which makes it possible to take account, notably, of the influence of the screen of the receiver. This operation has been described briefly here above. The step (e6) consists of estimating the lumination L.sub.f received by the film from the start of the exposure in applying the following equation: L.sub.f =L.sub.am +D.sub.f1 This is an equation in which L.sub.am is the lumination received by the film before the step (e3) and δmA mA the product of the tube current I by the integration time S. The step (e7) consists of calculating the lumination remaining to be acquired L.sub.ra to obtain the determined blackening; it is given by the equation: L.sub.ra =L.sub.ref -L.sub.f (10) The step (e8) consists of calculating the mA delivered to obtain the chosen blackening which is given by the equation : mAs.sub.r =L.sub.ra /D.sub.f1 (11) It is then possible to deduce the number of mA the calculations, referenced mAs.sub.c that actually remain to be acquired, referenced mAs.sub.ra, are defined by : mAs.sub.ra =mAs.sub.r -mAs.sub.c (12) mAs.sub.c =I with t.sub.c being the time taken for the calculations The step (e10) consists of making a choice: either to stop the exposure or to continue it according to the value of the mAs remaining to be delivered or, again, the exposure time still to elapse, or to recompute the estimation of the estimated value of the end-of-exposure time. The end-of-exposure criterion could be the following: If the value : Dif (mA is nil or below a fixed value Val.sub.0, the microprocessor 19 stops the X-radiation by action on the power supply device 15. If not, the step (e3) is returned to. It is possible to envisage an additional test on the value of the exposure time still to elapse t.sub.rc defined by the relationship : ##EQU8## This additional test consists of not modifying the value of the estimation mAs.sub.ra should t.sub.rc be smaller than a value t.sub.o. Then the end of exposure terminates in an open loop through the continuance of only the end-of-exposure operations, namely the decrementation of the number of mA becomes smaller than or equal to zero. A possible value of t.sub.o is a value substantially equal to the time interval between two measurements corresponding to the step (e3). Thus, in this case, the operation (e10) comprises two tests : a first test on mAs.sub.ra to decide whether or not the exposure must be stopped, then a test on t.sub.rc to decide whether to undertake a new estimation of the mAs remaining to be delivered or whether the value mAs.sub.ra will remain fixed until the end of the exposure. In the latter case, the end-of-exposure test will be done periodically with the value mAs.sub.ra. Besides, the operations for estimating the time still to elapse and that of the interruption of exposure may be separated in order to further refine the precision of the exposer. Thus, the method may be split up as follows: a task T.E. designed to estimate the mA delivered before the end of the exposure and a task T.C. for interrupting the exposure. These are two independent tasks that occur in The task T.E. for estimating the mA constituted by the operations (e3) to (e8) to which there is added an operation (e'9) of conversion of the mA units of the cell 12 such that : CE.sub.target =mAs.sub.ra This task of estimation T.E. is renewed periodically during the exposure, for example at the instants t.sub.1, t.sub.2,. . . t.sub.n which are instants of measurement separated by a period that is at least equal to the computation time t.sub.c. At the end of the task of estimation T.E., the target value CE.sub.target is updated. This updating should take account of the signal received by the detector cell 12 between the instant of measurement at the start of the operation (e3) and the instant when the value CE.sub.target is updated at the end of the operation T.E. The task (T.C.) of interrupting the exposure is one that consists of decrementing a given value (or target) as a function of the signal actually received by the cell 12. This task interrupts the exposure as soon as the value CE.sub.target becomes smaller than or equal to Val.sub.o, equal to zero for example. Thus, the working of the task T.C. can be summarized in the following steps of (or operations for) : (f1) measuring the integrated signal M.sub.m by the cell 12 after a certain time t.sub.TC; (f2) decrementing this value to the target value : (CE.sub.target -M.sub.m) (f3) stopping the exposure when (CE.sub.target -M.sub.m) is lower than Val.sub.o, if not return to(f1). The method that has just been described works accurately to the extent that there is no deviation from the law of reciprocity for the receiver 17 and the detection cell 12. If this is not the case, the operations (e6) and (e8) must be supplemented to take account of it and a coefficient of correction has to be determined by particular measurements and computations. This coefficient of correction is introduced into the equations (9) and (11) where the lumination and yield of the film come into play. It is thus that the equation (9) and (11) become : ##EQU9## with film dose rate=D.sub.f1 CNRD is the function representing the effect of non-reciprocity expressed as a function of the dose rate of photons on the film. The function CNRD is obtained by a method of calibration that is described in the patent application filed on this date and entitled : METHOD FOR THE DETERMINATION OF THE FUNCTION REPRESENTING THE EFFECT OF NON-RECIPROCITY OF A RADIOGRAPHIC FILM, Ser. No. 07/726,175. For an understanding of the remaining part of the description, it may be recalled that this calibration method consists, first of all, in determining the coefficients of non-reciprocity of the film as a function of the period of exposure t.sub.i, said coefficients being referenced CNRT (t.sub.i). This function CNRT is determined experimentally and may be represented by an analytical function. More precisely, the method consists in the determination, for various values IR.sub.i of the intensity of the radiation, of the value t.sub.i of the time of exposure needed to obtain a fixed optical density DO.sub.refo of the film, for example DO.sub.refo= 1, and in the reading of the values given by the integrator circuit 16 for the different exposure times t.sub.i, namely values that shall be called M (t.sub.i). These values are compared with a reference value M (t.sub.ref), which is, for example, the value corresponding to an exposure time of one second, in computing the ratio ##EQU10## It is this ratio that determines the coefficient of non-reciprocity in time CNRT (t.sub.i) for the exposure time t.sub.i. Another way to determine the coefficients CNRT (t.sub.i) shall be described further below. These coefficients CNRT (t.sub.i) are related to one another as a function of the exposure time by the curve of FIG. 3 in the case, for example of an optical density DO.sub.refo= 1 and a reference exposure time t.sub.ref= 1 second. This curve shows that the lumination needed to achieve the desired optical density increases with the exposure time. It is thus that, in this example, the ratio between the energies for the two exposure times of 50 ms and 6.5 s is of the order of 1.6. The curve of FIG. 3 may be modelized by means of a function having the form: CNRT (t)=A.sub.o +A.sub.1 log t+A.sub.2 [log t].sup.2 (18) the parameters A.sub.0, A.sub.1 and A.sub.2 of which are estimated from the measurement points by a least square error method of estimation. In principle, the Schwarzschild effect that is taken into account in the equations (9') and (11') could be modelized by the function CNRT. The interest of using the function CNRD indexed in film dose rate is that it is possible to take account of the variations of the anode current. Hence, an automatic exposer that uses the function CNRD according to the equations (9') and (11') has, for example, the advantage wherein the tube can work in decreasing load. To go from the time-indexed coefficients CNRT (t) to the rate-indexed coefficients CNRD (d), it is necessary to take account of the fact that the coefficients CNRT (t) have been determined by measurements with variable exposure times under conditions where the values of the photon dose rate on the film are not necessarily known. If the film dose rate d.sub.i is measured for each exposure time t.sub.i, the value of the coefficient CNRD (di) for di will be equal to that of the coefficient CNRT (t.sub.i) for the corresponding exposure time t.sub.i according to the relationship : CNRD (d.sub.i)=CNRT (t.sub.i) (19) These different values of CNRD (d.sub.i) are related to one another by a curve (FIG. 4) as a function of the reciprocal 1/d of the dose rate. This curve may be modelized by means of a function having the form : CNRD (d)=A'.sub.o +A'.sub.1 log 1/d+A'.sub.2 [log 1/d].sup.(20) It may be the case that the values d.sub.i are not given by the calibration, especially because they are expressed in the measurement unit of the cell 12 which is not necessarily the one used in the calibration. Thus, in general, the values d.sub.i must be linked to the known values t.sub.i by the relationship: or again : ##EQU11## It is recalled here that L.sub.ref is the lumination received by the film under fixed and known radiological conditions when the film attains a given blackening and when the non-reciprocity effect is To finalize the definition of the function CNRD as well as to explain the last calibration of the method, there remains to be explained the method used to assess the reference lumination. This method is described in the above-mentioned patent application, entitled : METHOD FOR THE ESTIMATION AND CALIBRATION OF THE LUMINATION RECEIVED BY A RADIOGRAPHIC FILM. The reference lumination depends on the optical density to be obtained on the film. To determine this lumination, the first step is to make a sensitogram of the type of film used, then a shot must be taken under determined radiological conditions with a known thickness standard. These determined radiological conditions are, for example, a reference optical density DO.sub.refo chosen as a function of the practitioner's usual practices, for example DO.sub.refo= 1 a thickness standard E.sub.o, a supply voltage V.sub.o, a value of the exposure time t.sub.o, a value of the product I.sub.o For this shot, the optical density DO.sub.m as well as the values M.sub.o, I.sub.o, to are measured. This makes it possible to compute the equivalent thickness E.sub.p by means of the equation (7). The yield D.sub.f on the film is then computed by means of the equation (6): this makes it possible to compute the lumination received by the film L.sub.film by the formula: L.sub.film=D.sub.f xI.sub.o xt.sub.o (23) The reference optical density DO.sub.refo makes it possible to compute the illumination step corresponding to DO.sub.refo on the sensitometric curve of the film used, (FIG. 5), this curve having been plotted by means of a sensitograph and a densitometer. This makes it possible to take account of the characteristics of the developing machine used. The curve is recorded, for example, in the form of a function in the microprocessor 19 (FIG. 1). The optical density measured DO.sub.m enables the computation of the measurement step Ech.sub.m which is the value of the illumination step corresponding to DO.sub.m on the sensitometric curve (FIG. With the values L.sub.film of the lumination on the film, the reference step Ech.sub.ref and the measurement step Ech.sub.m, it is possible to compute the reference lumination L.sub.ref to obtain the optical density DO.sub.refo by using the equation that defines the change in scale between the lumination and the illumination step of the x-axis of the sensitometric curve (FIG. 5), that is : ## EQU12## From this equation (24), we derive : ##EQU13## The sensitometric constant K corresponds to the scale chosen for the illumination steps. The value L.sub.ref depends on t.sub.o through L.sub.film by the equations (23) and (25). Thus, the value L.sub.ref is sensitive to the non-reciprocity effects of the film. To correct the influence of non-reciprocity on the value of L.sub.ref, it is enough to use, in the equation (23), the value L.sub.film defined by : ##EQU14## This reference lumination L.sub.ref is the one that must be used in the equation (10) to obtain the reference optical density DO.sub.refo and the formula (25) shows that it depends, notably, on the difference between the reference step and the measurement step. The knowledge of the lumination received by the film provides for knowing d.sub.i by the application of the formula (22) and for deducing CNRD (d.sub.i) therefrom by the formula (20). For an optical density of the radiographic film other than DO.sub.refo= 1, the above-described operations have to be repeated so as to determine the new values of CNRT (t.sub.i) and of L.sub.ref. In order to simplify these operations, the coefficients CNRT (t.sub.i) may be obtained by performing the following steps of: (g1) making, by means of a variable time sensitograph, a first sensitogram S.sub.refo (FIG. 5) when the exposure time is set for a reference time t.sub.refo; (g2) making, by means of a variable time sensitograph, q sensitograms S.sub.1 to S.sub.q (FIG. 5) for different exposure times t.sub.i; (g3) choosing a reference optical density DO.sub.refo, for example DO.sub.refo= 1 (g4) measuring, on each sensitogram, the illumination step Ech.sub.refo, Ech.sub.1 . . . Ech.sub.i . . . Ech.sub.1 corresponding to the optical density DO.sub.refo= 1 (g5) calculating the coefficient CNRT (t.sub.i) by the equation: ##EQU15## If the practitioner decides to work at a different optical density, it is proposed, in order to avoid the above-described calibration, to use the optical density deliberately corrected for the blackening DO.sub.cvn. Then, the reference lumination L.sub.ref, used in the equation (10) should be replaced by the corrected lumination L.sub.cvn which is expressed by : L.sub.cvn =L.sub.ref CVN is the deliberate correction of blackening expressed by a whole number from -10 to +10 for example, P is the elementary step in optical density, for example 0,1, Γ is the slope of the linear part of the sensitometric curve (FIG. 5). The method that has just been described shows that its implementation calls for a certain number of calibrations that are, briefly, the following : (a) the calibration of the radiological system so as to determine the analytical models D.sub.se =f' (V.sub.m, E.sub.p) (4) with cartridge without screen and D.sub.c =f" (V.sub.m, E.sub.p) (6) E.sub.p =g" (V.sub.m, D.sub.c) (7) with cartridge and screen; The difference D.sub.f =(D.sub.se -D.sub.c) (equation (8)) will make it possible to deduce the yield absorbed by the screen; (b) the calibration of the film so as to determine the law of non-reciprocity CNRT (t) expressed as a function of time; this law will be used to determine the law of non-reciprocity CNRD (d) expressed as a function of the dose rate; (c) the calibration of the reference lumination L.sub.ref. When these different calibrations have been performed, the method consists of the following steps of: (d) choosing, by the practitioner, the blackening value or of the value of the deliberate correction of blackening so as to determine the target lumination L.sub.cvn that should be received by the film under fixed reference conditions to arrive at the chosen blackening (or optical density). The lumination L.sub.cvn is computed from the equation (27) where the lumination L.sub.ref is determined by the calibration (c) and the equations (25) and (26); (e1) positioning the object to the radiographed, (e2) triggering the start of the exposure by the practitioner (e3) measuring after a time t' the yield D.sub.c1 at the cell 12; (e4) measuring the equivalent thickness E.sub.1 by the equation (7); (e5) calculating the yield D.sub.f1 at the film for the thickness E.sub.1 by the equation (8); (e6) calculating the lumination L.sub.f, received by the film, by the equation; L.sub.f =L.sub.am +D.sub.f1 rate) (9') (e7) calculating the lumination L.sub.ra remaining to be acquired to obtain the blackening (or optical density) chosen by the equation L.sub.ra =L.sub.cvn -L.sub.f (10) (e8) calculating the estimated mA mAs.sub.ra to obtain the blackening (or optical density) by the equation: mAs.sub.ra =L.sub.ra /D.sub.f1 (e9) calculating the mA operation (e3); (e10) stopping the exposure when the mA equal to or greater than mAs.sub.ra or returning to the step (e3) when the mA less than mAs.sub.ra. The description of the method that has just been given corresponds to a certain configuration of the radiology system. Should it be possible for this system to assume several configurations involving, for example, the choice of : the material of the anode the dimensions of the focus, the spectrum modifying filter, the collimation, the presence or absence of a diffusion-preventing screen, the type of image receiver, the type of detection cell, it is necessary to perform calibrations (a), (b) and (c) for each of these configurations. The number of these calibrations may be reduced by taking account of the similarities of behavior from one configuration to another, as described for the calibration (a) in U.S. Pat. application Ser. No. 07/535 520 filed on the Jun. 8, 1990. When the practitioner implements the method, he defines the configuration, and the characteristics of this configuration are transmitted to the microprocessor 19 so that the latter uses the corresponding models. The method according to the invention has been described in its application to a receiver 17 of the film/screen couple type. It can also be implemented in the case of a receiver 17 having only a film sensitive to X-radiation. With such a film, the calibrations of the operations (a) and (b) become : (a') the calibration of the radiological system so as to determine the analytical model D'.sub.f =f'" (V.sub.m, E.sub.p) (30) with the film as the image receiver. In the unfolding of the method, the modifications are as follows : (e3) becomes (e'3) : measuring after a time t' of the yield D'.sub.f1 at the cell 12; (e4) and (e5) are eliminated and the steps (e6) to (e8) are modified in the following way : (e'6) computing the lumination L'.sub.f received by the film by the equation : L'.sub.f =L.sub.am +D'.sub.f1 rate) (9") (e'7) calculating the lumination L'.sub.ra remaining to be acquired to obtain the blackening (or optical density) chosen by the equation L'.sub.ra =L.sub.ref -L'.sub.f (10") (e'8) calculating the estimated mA mAs'.sub.ra to obtain the blackening (or optical density) by the equation: mAs'.sub.ra =L'.sub.ra /D'.sub.f1 The other steps (e9) and the ones that follow remain unchanged. Besides, it must be noted that the sensitograph may, in this case, be of the X-ray emission type. Furthermore, with a receiver such as this, having no intensifier screen, the detection cell 12 may be placed either behind the receiver 17, as in the case of the film/screen type receiver, or before the receiver 17 if the energy of the radiation allows it. The method according to the invention has been described in an application to a radiology system (FIG. 1) in which the X-ray detection cell 12 is disposed outside the receiver but said method may be applied to a radiology system (FIG. 6) in which said detection cell is incorporated inside the receiver 17 as the element bearing the reference numeral 4. Then, the receiver 17 comprises a film 3, an intensifying screen below the film 3 and said new detection cell 4 below the screen 2. Such a new detection cell 4 is of the type described in French patent application 89 05668 filed on the Apr. 28, 1989 and entitled : "An X-ray cassette incorporating an automatic exposure detector This new detection cell detects and measures the light emitted by the screen 2 as compared to the detection cell 12 which detects and measures the X-radiation behind the receiver. As a result, there is no need to perform the first and second calibrations of the method described above, which are directed to take account of the attenuations of the X-rays by the film and the screen. Moreover, corresponding steps (e4) and (e5) are no longer needed. This leads to a modified method which comprises the following steps of (or operations for) : (a) determining by calibration the reference lumination L.sub.ref which must be received by the film, under fixed reference conditions, to achieve the blackening (or optical density) chosen as a reference value by the practitioner ; (b1) positioning the object to be radiographed ; (b2) triggering the start of the exposure by the practitioner ; (b3) measuring yield D.sub.f1 at a certain time t' after the start of the exposure ; (b4) calculating the lumination L.sub.f received by the film according to the equation : L.sub.f =L.sub.am +D.sub.f1 (b5) calculating the lumination L.sub.ra remaining to be acquired to obtain the blackening for optical density determined by the equation : L.sub.ra =L.sub.ref -L.sub.f (10) (b6) calculating the estimated mA mAs.sub.r to obtain the blackening (or optical density) determined by the equation : mAs.sub.r =L.sub.ra /D.sub.f1 (11) (b7) measuring the mA step (b3) ; (b8) --stopping the exposure when the mA step (b7) are equal to or greater than mAs.sub.r, --or returning to step (b3) when the mA are smaller than mAs.sub.r. The use of such a detection cell 4 inside the receiver 17 makes the method simpler to implement. It must be noted that this simpler method, which can be implemented when a light detector cell 4 inside the receiver 17 is used, can make use of all features related to the first method described above inasmuch as they are related to steps (a"), (b1) to (b8).
{"url":"http://www.google.com/patents/US5218625?dq=6,163,776","timestamp":"2014-04-24T04:21:53Z","content_type":null,"content_length":"144684","record_id":"<urn:uuid:6f6b1ada-993b-4d6c-9554-88e242d479ab>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 43 , 1999 "... Abstract. In this survey the following model is considered. We assume that an instance I of a computationally hard optimization problem has been solved and that we know the optimum solution of such instance. Then a new instance I ′ is proposed, obtained by means of a slight perturbation of instance ..." Cited by 176 (1 self) Add to MetaCart Abstract. In this survey the following model is considered. We assume that an instance I of a computationally hard optimization problem has been solved and that we know the optimum solution of such instance. Then a new instance I ′ is proposed, obtained by means of a slight perturbation of instance I. How can we exploit the knowledge we have on the solution of instance I to compute a (approximate) solution of instance I ′ in an efficient way? This computation model is called reoptimization and is of practical interest in various circumstances. In this article we first discuss what kind of performance we can expect for specific classes of problems and then we present some classical optimization problems (i.e. Max Knapsack, Min Steiner Tree, Scheduling) in which this approach has been fruitfully applied. Subsequently, we address vehicle routing problems and we show how the reoptimization approach can be used to obtain good approximate solution in an efficient way for some of these problems. 1 , 1992 "... The grammar problem, a generalization of the single-source shortest-path problem introduced by Knuth, is to compute the minimum-cost derivation of a terminal string from each non-terminal of a given context-free grammar, with the cost of a derivation being suitably defined. This problem also subsume ..." Cited by 116 (1 self) Add to MetaCart The grammar problem, a generalization of the single-source shortest-path problem introduced by Knuth, is to compute the minimum-cost derivation of a terminal string from each non-terminal of a given context-free grammar, with the cost of a derivation being suitably defined. This problem also subsumes the problem of finding optimal hyperpaths in directed hypergraphs (under varying optimization criteria) that has received attention recently. In this paper we present an incremental algorithm for a version of the grammar problem. As a special case of this algorithm we obtain an efficient incremental algorithm for the single-source shortest-path problem with positive edge lengths. The aspect of our work that distinguishes it from other work on the dynamic shortest-path problem is its ability to handle "multiple heterogeneous modifications": between updates, the input graph is allowed to be restructured by an arbitrary mixture of edge insertions, edge deletions, and edge-length - Journal of the ACM , 1990 "... We consider in this paper the shortest-path problem in networks in which the delay (or weight) of the edges changes with time according to arbitrary functions. We present algorithms for finding the shortest-path and minimum-delay under various waiting constraints and investigate the properties of th ..." Cited by 92 (6 self) Add to MetaCart We consider in this paper the shortest-path problem in networks in which the delay (or weight) of the edges changes with time according to arbitrary functions. We present algorithms for finding the shortest-path and minimum-delay under various waiting constraints and investigate the properties of the derived path. We show that if departure time from the source node is unrestricted then a shortest path can be found that is simple and achieves a delay as short as the most unrestricted path. In the case of restricted transit, it is shown that there exist cases where the minimum delay is finite but the path that achieves it is infinite. , 2002 "... We study novel combinatorial properties of graphs that allow us to devise a completely new approach to dynamic all pairs shortest paths problems. Our approach yields a fully dynamic algorithm for general directed graphs with non-negative realvalued edge weights that supports any sequence of operatio ..." Cited by 73 (9 self) Add to MetaCart We study novel combinatorial properties of graphs that allow us to devise a completely new approach to dynamic all pairs shortest paths problems. Our approach yields a fully dynamic algorithm for general directed graphs with non-negative realvalued edge weights that supports any sequence of operations in e O(n amortized time per update and unit worst-case time per distance query, where n is the number of vertices. We can also report shortest paths in optimal worst-case time. These bounds improve substantially over previous results and solve a long-standing open problem. Our algorithm is deterministic and uses simple data structures. , 1992 "... We give an efficient algorithm for maintaining a minimum spanning forest of a plane graph subject to on-line modifications. The modifications supported include changes in the edge weights, and insertion and deletion of edges and vertices which are consistent with the given embedding. Our algorithm r ..." Cited by 68 (26 self) Add to MetaCart We give an efficient algorithm for maintaining a minimum spanning forest of a plane graph subject to on-line modifications. The modifications supported include changes in the edge weights, and insertion and deletion of edges and vertices which are consistent with the given embedding. Our algorithm runs in O(log n) time per operation and O(n) space. , 1999 "... Introduction In many applications of graph algorithms, including communication networks, graphics, assembly planning, and VLSI design, graphs are subject to discrete changes, such as additions or deletions of edges or vertices. In the last decade there has been a growing interest in such dynamicall ..." Cited by 55 (0 self) Add to MetaCart Introduction In many applications of graph algorithms, including communication networks, graphics, assembly planning, and VLSI design, graphs are subject to discrete changes, such as additions or deletions of edges or vertices. In the last decade there has been a growing interest in such dynamically changing graphs, and a whole body of algorithms and data structures for dynamic graphs has been discovered. This chapter is intended as an overview of this field. In a typical dynamic graph problem one would like to answer queries on graphs that are undergoing a sequence of updates, for instance, insertions and deletions of edges and vertices. The goal of a dynamic graph algorithm is to update efficiently the solution of a problem after dynamic changes, rather than having to recompute it from scratch each time. Given their powerful versatility, it is not surprising that dynamic algorithms and dynamic data structures are often more difficult to design and analyze than their static c - THEORETICAL COMPUTER SCIENCE , 1996 "... ..." , 1994 "... We present a new complexity theoretic approach to incremental computation. We define complexity classes that capture the intuitive notion of incremental efficiency and study their relation to existing complexity classes. We show that problems that have small sequential space complexity also have sma ..." Cited by 43 (4 self) Add to MetaCart We present a new complexity theoretic approach to incremental computation. We define complexity classes that capture the intuitive notion of incremental efficiency and study their relation to existing complexity classes. We show that problems that have small sequential space complexity also have small incremental time complexity. We show that all common LOGSPACE-complete problems for P are also incr-POLYLOGTIME-complete for P. We introduce a restricted notion of completeness called NRP-completeness and show that problems which are NRP-complete for P are also incr-POLYLOGTIME-complete for P. We also give incrementally complete problems for NLOGSPACE, LOGSPACE, and non-uniform NC¹. We show that under certain restrictions problems which have efficient dynamic solutions also have efficient parallel solutions. We also consider a non-uniform model of incremental computation and show that in this model most problems have almost linear complexity. In addition, we present some techniques f... - In IEEE Symposium on Foundations of Computer Science , 2001 "... We present the first fully dynamic algorithm for maintaining all pairs shortest paths in directed graphs with real-valued edge weights. Given a dynamic directed graph G such that each edge can assume at most S di#erent real values, we show how to support updates in O(n amortized time and que ..." Cited by 35 (10 self) Add to MetaCart We present the first fully dynamic algorithm for maintaining all pairs shortest paths in directed graphs with real-valued edge weights. Given a dynamic directed graph G such that each edge can assume at most S di#erent real values, we show how to support updates in O(n amortized time and queries in optimal worst-case time. No previous fully dynamic algorithm was known for this problem. In the special case where edge weights can only be increased, we give a randomized algorithm with one-sided error which supports updates faster in O(S We also show how to obtain query/update trade-o#s for this problem, by introducing two new families of algorithms. Algorithms in the first family achieve an update bound of O(n/k), and improve over the best known update bounds for k in the . Algorithms in the second family achieve an update bound of ), and are competitive with the best known update bounds (first family included) for k in the range (n/S) # Work partially supported by the IST Programme of the EU under contract n. IST-199914. 186 (ALCOM-FT) and by CNR, the Italian National Research Council, under contract n. 01.00690.CT26. Portions of this work have been presented at the 42nd Annual Symp. on Foundations of Computer Science (FOCS 2001) [8] and at the 29th International Colloquium on Automata, Languages, and Programming (ICALP'02) [9]. , 2005 "... Heuristic search methods promise to find shortest paths for path-planning problems faster than uninformed search methods. Incremental search methods, on the other hand, promise to find shortest paths for series of similar path-planning problems faster than is possible by solving each path-planning p ..." Cited by 28 (3 self) Add to MetaCart Heuristic search methods promise to find shortest paths for path-planning problems faster than uninformed search methods. Incremental search methods, on the other hand, promise to find shortest paths for series of similar path-planning problems faster than is possible by solving each path-planning problem from scratch. In this article, we develop Lifelong Planning A * (LPA*), an incremental version of A * that combines ideas from the artificial intelligence and the algorithms literature. It repeatedly finds shortest paths from a given start vertex to a given goal vertex while the edge costs of a graph change or vertices are added or deleted. Its first search is the same as that of a version of A * that breaks ties in favor of vertices with smaller g-values but many of the subsequent searches are potentially faster because it reuses those parts of the previous search tree that are identical to the new one. We present analytical results that demonstrate its similarity to A * and experimental results that demonstrate its potential advantage in two different domains if the path-planning problems change only slightly and the changes are close to the goal.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4407","timestamp":"2014-04-16T08:35:55Z","content_type":null,"content_length":"37898","record_id":"<urn:uuid:8f6e51fd-076f-4c20-838b-06c45ba7b9f4>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
ombination of Results 1 - 10 of 12 - Frontiers of Combining Systems, volume 3 of Applied Logic Series , 1996 "... The Nelson-Oppen combination procedure, which combines satisfiability procedures for a class of first-order theories by propagation of equalities between variables, is one of the most general combination methods in the field of theory combination. We describe a new non-deterministic version of the p ..." Cited by 74 (4 self) Add to MetaCart The Nelson-Oppen combination procedure, which combines satisfiability procedures for a class of first-order theories by propagation of equalities between variables, is one of the most general combination methods in the field of theory combination. We describe a new non-deterministic version of the procedure that has been used to extend the Constraint Logic Programming Scheme to unions of constraint theories. The correctness proof of the procedure that we give in this paper not only constitutes a novel and easier proof of Nelson and Oppen's original results, but also shows that equality sharing between the satisfiability procedures of the component theories, the main idea of the method, can be confined to a restricted set of variables. - Journal of Symbolic Computation , 1994 "... ion An atomic constraint p ? (t 1 ; : : : ; t m ) is decomposed into a conjunction of pure atomic constraints by introducing new equations of the form (x = ? t), where t is an alien subterm in the constraint and x is a variable that does not appear in p ? (t 1 ; : : : ; t m ). This is formalized tha ..." Cited by 28 (7 self) Add to MetaCart ion An atomic constraint p ? (t 1 ; : : : ; t m ) is decomposed into a conjunction of pure atomic constraints by introducing new equations of the form (x = ? t), where t is an alien subterm in the constraint and x is a variable that does not appear in p ? (t 1 ; : : : ; t m ). This is formalized thanks to the notion of abstraction. Definition 4.2. Let T be a set of terms such that 8t 2 T ; 8u 2 X [ SC; t 6= E1[E2 u: A variable abstraction of the set of terms T is a surjective mapping \Pi from T to a set of variables included in X such that 8s; t 2 T ; \Pi(s) = \Pi(t) if and only if s =E1 [E2 t: \Pi \Gamma1 denotes any substitution (with possibly infinite domain) such that \Pi(\Pi \Gamma1 (x)) = x for any variable x in the range of \Pi. It is important to note that building a variable abstraction relies on the decidability of E 1 [ E 2 -equality in order to abstract equal alien subterms by the same variable. Let T = fu #R j u 2 T (F [ X ) and u #R2 T (F [ X )n(X [ SC)g... - In Proceedings of the First International Conference on Principles and Practice of Constraint Programming "... When combining languages for symbolic constraints, one is typically faced with the problem of how to treat "mixed" constraints. The two main problems are (1) how to define a combined solution structure over which these constraints are to be solved, and (2) how to combine the constraint solving metho ..." Cited by 26 (3 self) Add to MetaCart When combining languages for symbolic constraints, one is typically faced with the problem of how to treat "mixed" constraints. The two main problems are (1) how to define a combined solution structure over which these constraints are to be solved, and (2) how to combine the constraint solving methods for pure constraints into one for mixed constraints. The paper introduces the notion of a "free amalgamated product" as a possible solution to the first problem. Subsequently, we define so-called simply-combinable structures (SC-structures). For SC-structures over disjoint signatures, a canonical amalgamation construction exists, which for the subclass of strong SC-structures yields the free amalgamated product. The combination technique of [BS92, BaS94a] can be used to combine constraint solvers for (strong) SC-structures over disjoint signatures into a solver for their (free) amalgamated product. In addition to term algebras modulo equational theories, the class of - In Proceedings of the 6th International Conference on Rewriting Techniques and Applications, volume 914 of Lecture Notes in Computer Science "... . In a previous paper we have introduced a method that allows one to combine decision procedures for unifiability in disjoint equational theories. Lately, it has turned out that the prerequisite for this method to apply---namely that unification with so-called linear constant restrictions is dec ..." Cited by 16 (7 self) Add to MetaCart . In a previous paper we have introduced a method that allows one to combine decision procedures for unifiability in disjoint equational theories. Lately, it has turned out that the prerequisite for this method to apply---namely that unification with so-called linear constant restrictions is decidable in the single theories---is equivalent to requiring decidability of the positive fragment of the first order theory of the equational theories. Thus, the combination method can also be seen as a tool for combining decision procedures for positive theories of free algebras defined by equational theories. Complementing this logical point of view, the present paper isolates an abstract algebraic property of free algebras--- called combinability---that clarifies why our combination method applies to such algebras. We use this algebraic point of view to introduce a new proof method that depends on abstract notions and results from universal algebra, as opposed to technical - In Proceedings of the 1st International Conference on Logic Programming and Automated Reasoning, St. Petersburg (Russia), volume 624 of Lecture Notes in Artificial Intelligence , 1992 "... . We extend the results on combination of disjoint equational theories to combination of equational theories where the only function symbols shared are constants. This is possible because there exist finitely many proper shared terms (the constants) which can be assumed irreducible in any equational ..." Cited by 15 (3 self) Add to MetaCart . We extend the results on combination of disjoint equational theories to combination of equational theories where the only function symbols shared are constants. This is possible because there exist finitely many proper shared terms (the constants) which can be assumed irreducible in any equational proof of the combined theory. We establish a connection between the equational combination framework and a more algebraic one. A unification algorithm provides a symbolic constraint solver in the combination of algebraic structures whose finite domains of values are non disjoint and correspond to constants. Primal algebras are particular finite algebras of practical relevance for manipulating hardware descriptions. 1 Introduction The combination problem for unification can be stated as follows: given two unification algorithms in two (consistent) equational theories E 1 on T (F 1 ; X) and E 2 on T (F 2 ; X), how to design a unification algorithm for E 1 [ E 2 on T (F 1 [ F 2 ; X)? The ge... - in Proceedings of the Eleventh International Conference on Automated Deduction, Springer-Verlag LNAI 607 , 1992 "... Let E be a first-order equational theory. A translation of typed higher-order E-unification problems into a typed combinatory logic framework is presented and justified. The case in which E admits presentation as a convergent term rewriting system is treated in detail: in this situation, a modifi ..." Cited by 9 (3 self) Add to MetaCart Let E be a first-order equational theory. A translation of typed higher-order E-unification problems into a typed combinatory logic framework is presented and justified. The case in which E admits presentation as a convergent term rewriting system is treated in detail: in this situation, a modification of ordinary narrowing is shown to be a complete method for enumerating higher-order E-unifiers. In fact, we treat a more general problem, in which the types of terms contain type variables. 1 Introduction Investigation of the interaction between first-order and higher-order equational reasoning has emerged as an active line of research. The collective import of a recent series of papers, originating with [Bre88] and including (among others) [Bar90], [BG91a], [BG91b], [Dou92], [JO91] and [Oka89], is that when various typed -calculi are enriched by first-order equational theories, the validity problem is well-behaved, and furthermore that the respective computational approaches to ... - In Tenth Annual IEEE Symposium on Logic in Computer Science , 1995 "... We design combination techniques for symbolic constraint solving in the presence of associative and commutative (AC) function symbols. This yields an algorithm for solving AC-RPO constraints (where ACRPO is the AC-compatible total reduction ordering of [16]), which was a missing ingredient for autom ..." Cited by 7 (4 self) Add to MetaCart We design combination techniques for symbolic constraint solving in the presence of associative and commutative (AC) function symbols. This yields an algorithm for solving AC-RPO constraints (where ACRPO is the AC-compatible total reduction ordering of [16]), which was a missing ingredient for automated deduction strategies with AC-constraint inheritance [15, 19]. As in the AC-unification case (actually the AC-unification algorithm of [9] is an instance of ours), for this purpose we first study the pure case, i.e. we show how to solve AC-ordering constraints built over a single AC function symbol and variables. Since AC-RPO is an interpretation-based ordering, our algorithm also requires the combination of algorithms for solving interpreted constraints and non-interpreted constraints. , 1992 "... In the context of constraint logic programming and theorem proving, the development of constraint solvers on algebraic domains and their combination is of prime interest. A constraint solver in finite algebras is presented for a constraint language including equations, disequations and inequations o ..." Cited by 7 (2 self) Add to MetaCart In the context of constraint logic programming and theorem proving, the development of constraint solvers on algebraic domains and their combination is of prime interest. A constraint solver in finite algebras is presented for a constraint language including equations, disequations and inequations on finite domains. The method takes advantage of the embedding of a finite algebra in a primal algebra that can be presented, up to an isomorphism, by an equational presentation. We also show how to combine this constraint solver in finite algebras with other unification algorithms, by extending the techniques used for the combination of unification. 1 Introduction Finite algebras provide valuable domains for constraint logic programming. Unification in this context has attracted considerable interest for its applications: it is of practical relevance for manipulating hardware descriptions and solving formulas of propositional calculus; its implementation in constraint logic programming lan... - Proceedings 11th Annual Symposium on Theoretical Aspects of Computer Science, Caen (France), volume 775 of Lecture Notes in Computer Science , 1994 "... . This paper addresses the problem of systematically building a matching algorithm for the union of two disjoint equational theories. The question is under which conditions matching algorithms in the single theories are sufficient to obtain a matching algorithm in the combination? In general, the bl ..." Cited by 5 (0 self) Add to MetaCart . This paper addresses the problem of systematically building a matching algorithm for the union of two disjoint equational theories. The question is under which conditions matching algorithms in the single theories are sufficient to obtain a matching algorithm in the combination? In general, the blind use of combination techniques introduces unification. Two different restrictions are considered in order to reduce this unification to matching. First, we show that combining matching algorithms (with linear constant restriction) is always sufficient for solving a pure fragment of combined matching problems. Second, we present a combined matching algorithm which is complete for the largest class of theories where unification is not needed, including collapse-free regular theories and linear theories. 1 Introduction The process of matching is crucial in term rewriting, from automated deduction involving simplification rules to the implementation of operational semantics for programming l... - Communications of the ACM , 1998 "... In a recent paper, Baader and Schulz presented a general method for the combination of constraint systems for purely positive constraints. But negation plays an important role in constraint solving. E.g., it is vital for constraint entailment. Therefore it is of interest to extend their results to t ..." Cited by 3 (0 self) Add to MetaCart In a recent paper, Baader and Schulz presented a general method for the combination of constraint systems for purely positive constraints. But negation plays an important role in constraint solving. E.g., it is vital for constraint entailment. Therefore it is of interest to extend their results to the combination of constraint problems containing negative constraints. We show that the combined solution domain introduced by Baader and Schulz is a domain in which one can solve positive and negative "mixed" constraints by presenting an algorithm that reduces solvability of positive and negative "mixed" constraints to solvability of pure constraints in the components. The existential theory in the combined solution domain is decidable if solvability of literals with so-called linear constant restrictions is decidable in the components. We also give a criterion for ground solvability of mixed constraints in the combined solution domain. The handling of negative constraints can be
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=399908","timestamp":"2014-04-23T13:34:58Z","content_type":null,"content_length":"41730","record_id":"<urn:uuid:c44b4b95-de7b-4ebf-a668-1b697967766f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tools Discussion: All in Calculus for Functions, Graphs, and Limits, Using this in my calculus course Discussion: All in Calculus for Functions, Graphs, and Limits Topic: Using this in my calculus course Related Item: http://mathforum.org/mathtools/tool/1133/ << see all messages in this Subject: Using this in my calculus course Author: Gene Date: Sep 6 2003 This is a Flash Animation that goes through some examples and the basic ideas of a limit step-by-step. My impression is that it should be very good for students struggling with the concept, since they can go forward or backward one step at a (My course is not so theoretical that I do much with limits, but I at least want to give students a shot at seeing what one of the most important fundamental underpinnings is Larry Husch' "Visual Calculus Project" has a number of other similar units linked to by Math Tools. Reply to this message Quote this message when replying? yes no Post a new topic to the All in Calculus for Functions, Graphs, and Limits discussion Visit related discussions: What is a Limit? tool Functions, Graphs, and Limits Discussion Help
{"url":"http://mathforum.org/mathtools/discuss.html?co=c&context=cell&do=r&msg=c_oa__disc_1133_g-1","timestamp":"2014-04-19T15:26:50Z","content_type":null,"content_length":"16318","record_id":"<urn:uuid:ffba47e0-60b9-4cd4-994b-5589ed460db9>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00137-ip-10-147-4-33.ec2.internal.warc.gz"}
Is every manifold triangulable? In Lee's Intro to topological manifolds, p.105, it is written that every manifold of dimension 3 or below is triangulable. But in dimension 4, threre are known examples of non triangulable manifolds. In dimensions greater than four, the answer is unknown. But in Bott-Tu p.190, it is written that every manifold admits a triangulation. Which is right? R. ]Kirby and L. C. Siebenmann, On the triangulation of manifolds and the hauptvermutung, Bull. Amer. Math. Soc., 75 (1969), 742-749. This paper is said to have an example of a non-triangulable 6 manifold
{"url":"http://www.physicsforums.com/showthread.php?t=431232","timestamp":"2014-04-16T04:19:06Z","content_type":null,"content_length":"37019","record_id":"<urn:uuid:7cec71e8-d991-4ea1-87e7-55755a5a6b7a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Nonexistence of boundary between convergent and divergent series? up vote 45 down vote favorite The following is a FAQ that I sometimes get asked, and it occurred to me that I do not have an answer that I am completely satisfied with. In Rudin's Principles of Mathematical Analysis, following Theorem 3.29, he writes: One might thus be led to conjecture that there is a limiting situation of some sort, a “boundary” with all convergent series on one side, all divergent series on the other side—at least as far as series with monotonic coefficients are concerned. This notion of “boundary” is of course quite vague. The point we wish to make is this: No matter how we make this notion precise, the conjecture is false. Exercises 11(b) and 12(b) may serve as illustrations. Exercise 11(b) states that if $\sum_n a_n$ is a divergent series of positive reals, then $\sum_n a_n/s_n$ also diverges, where $s_n = \sum_{i=1}^n a_n$. Exercise 12(b) states that if $\sum_n a_n$ is a convergent series of positive reals, then $\sum_n a_n/\sqrt{r_n}$ converges, where $r_n = \sum_{i\ge n} a_i$. Although these two exercises are suggestive, they are not enough to convince me of Rudin’s strong claim that no matter how we make this notion precise, the conjecture is false. Are there any stronger theorems in this direction? Edit. For example, are there any theorems about the topology/geometry of the spaces of all convergent/divergent series, where a series is viewed as a point in $\mathbb{R}^\infty$ or $(\mathbb{R}^+)^\ infty$ in the obvious way? ca.analysis-and-odes sequences-and-series 5 Nice question; that claim has always bothered me. – Andres Caicedo Dec 14 '10 at 19:05 1 Perhaps a partial answer could be found within the theory of Abel's summations and Tauberian theorem. But my knowledge of these topics is not accurate enough. – Denis Serre Dec 15 '10 at 7:34 We could define the topology to consist of the set of all sequences, the empty set, the set of such that the associated series converges, and the series where the set the associated series diverge. With this topology, their is a well defined boundary which is the empty set. Not that this helps. – Spice the Bird Jun 4 '13 at 4:25 add comment 4 Answers active oldest votes A rather detailed discussion of the subject can be found in Knopp's Theory and Application of Infinite Series (see § 41, pp. 298-305). He mentiones that the idea of a possible boundary between convergent and divergent series was suggested by du Bois-Reymond. There are many negative (and mostly elementary) results showing that no such boundary, in whatever sense it might be defined, can exist. Stieltjes observed that for an arbitary monotone decreasing sequence $(\epsilon_n)$ with the limit $0$, there exist a convergent series $\sum c_n$ and a divergent series $\sum d_n$ such that $c_n=\epsilon_nd_n$. (This can be easily deduced from the Abel-Dini theorem). Pringsheim remarked that, for a convergent and a divergent series with positive terms, the ratio $c_n/d_n$ can assume all possible values, since one may have simultaneously $$\liminf\ up vote 22 down frac{c_n}{d_n}=0\qquad\mbox {and}\qquad\limsup\frac{c_n}{d_n}=\infty.$$ vote accepted I like the following geometric interpretation. Given a (convergent or divergent) series $\sum a_n$, let's mark the sequence of points $(n,a_n)\in\mathbb R^2$ and join the consecutive points by straight segments. Then there is a convergent series $\sum c_n$ and a divergent series $\sum d_n$ (both with positive and monotonically decreasing terms) such that the corresponding polygonal graphs can intersect in an indefinite number of points. The results remain essentially unaltered even if one requires that both sequences $(c_n)$ and $(d_n)$ are fully monotone, which is a very strong monotonicity assumption. This was shown by Hahn ("Über Reihen mit monoton abnehmenden Gliedern", Monatsh. für Math., Vol. 33 (1923), pp. 121-134). 1 Thanks. Rudin actually mentions Knopp, but for some reason I always thought Rudin was giving Knopp as a general reference and not as a specific reference for this particular question. I should have at least checked out the library's copy of Knopp first! – Timothy Chow Dec 14 '10 at 21:37 2 You are welcome. The book is a gem. – Andrey Rekalo Dec 14 '10 at 21:46 add comment The following exercise appears in Bruce Driver's analysis lecture notes (Exercise 24.23). Proposition. There does not exist a sequence $\{a_n\}$ such that, for all sequences $\{\lambda_n\}$, $\sum_n |\lambda_n| < \infty$ iff $\sup_n |a_n^{-1} \lambda_n| < \infty$. up vote 10 down vote The proof goes by showing that $\{\lambda_n \} \mapsto \{a_n \lambda_n\}$ would give a bijective bounded linear operator from $\ell^\infty$ to $\ell^1$. By the open mapping theorem this would be a homeomorphism, which is absurd. 3 This can exercise can also be found in Folland's Real Analysis book in the section on the Baire category theorem. Great question and great answer! – Dylan Wilson Dec 14 '10 at 6 Suppose there is such a sequence ${a_n}$. It follows that $\sum_n |a_n|<\infty$, contradicting the above mentioned exercise 12(b). So in what sense is this an answer? – Guntram Dec 14 '10 at 19:22 @Guntram: Hmm, good point. I read this proposition as "there is no slowest rate of decay for summable sequences", but you are quite right that it follows from 12(b), which I had not noticed. SO it is not in fact stronger. – Nate Eldredge Dec 14 '10 at 19:49 add comment An amusing exercise I like to set from time to time is this. As is well known, the sum of $1/n$ diverges, as does the sum of $1/n\log n$ and the sum of $1/n\log n\log\log n$, and so on. But what happens if we define $f_k(n)$ to be $1/n\log n\dots\log_kn$, where $\log_kn$ stands for the k-fold iterated logarithm, and $k$ is maximal such that $f_k(n)\geq 1$? I'm not asking for up vote the answer -- just drawing attention to this function that's close to the non-existent boundary. (A follow-up question might be to find a reasonably natural function that is even closer. For 7 down instance, can one define $f_\alpha(n)$ for some very large countable ordinals $\alpha$?) 3 I would imagine that a natural way of making rigorous the notion of "boundary" would be by introducing some kind of (ordinal) rank on series, and then a boundary would be a natural gap in the resulting pre-ordering. – Andres Caicedo Dec 14 '10 at 22:16 3 I am a little confused by the statement of this excercise. Are we considering a single function $f(n):=1/n\log{n}\ldots\log_k{n}$, where $k$ is maximal such that $\log_k{n}\geq 1$? This question (in a slightly different form) appeared as A4 on the 2008 Putnam. – Julian Rosen Dec 15 '10 at 7:49 This was also Problem E3381 in the American Mathematical Monthly. The solution (AMM 1992, page 165-166) is followed by an editorial note mentioning a paper by Reingold and Shen (More 2 nearly optimal algorithms for unbounded searching. II. The transfinite case. MR1082143)) discussing a hierarchy of slowly converging series of height $\epsilon_0$, apparently with motiviation from theoretical Computer Science. – Goldstern May 7 '11 at 17:33 add comment As Dylan Wilson pointed out, the following question appears in Folland's real analysis book: up vote 2 down (second edition, pg. 164) (33) There is no slowest rate of decay of the terms of an absolutely convergent sequence; that is, there is no sequence $\{ a_n \}$ of positive numbers such vote that $\sum a_n|c_n| < \infty$ iff $\{ c_n \}$ is bounded. add comment Not the answer you're looking for? Browse other questions tagged ca.analysis-and-odes sequences-and-series or ask your own question.
{"url":"http://mathoverflow.net/questions/49415/nonexistence-of-boundary-between-convergent-and-divergent-series/132681","timestamp":"2014-04-18T05:41:39Z","content_type":null,"content_length":"82531","record_id":"<urn:uuid:4f388f4a-b6c1-42bc-b69a-f9f87803afd8>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Entire function interpolation with control over multiplicities/derivatives up vote 0 down vote favorite Let's say I have a multiset of complex numbers $\lbrace a_1,\cdots,a_n\rbrace$ (so some of the elements may be repeated) and I would like to construct an entire function $p(z)$ with those numbers as zeroes. However, I also have a multiset of complex numbers $B = \lbrace b_1,\cdots,b_n \rbrace$ such that I wish $p(b_i) = 1$ - p is only 1 on the $b_i$'s. It seems like trying to use Lagrange's polynomial interpolation formula gives you a polynomial with too high a degree (greater than $n$ and less than or equal to $2n$), and then there's the possibility that $p^{-1}(1) \nsubseteq B$. I've been thinking about doing the following: Let $g(z) = (x-a_1) \cdots (x - a_n)$, and then via Weierstrass construct an entire function $h(z)$ such that $e^{h(b_i)} = 1/g(b_i)$. Then it seems like the entire function $e^{h(z)}g(z)$ is getting somewhat closer to what I want - but then again I don't know if there are any other $\alpha$'s such that $e^{h(\alpha)}g(\alpha) = 1$ where $\alpha \notin B$. The problem of polynomial interpolation and fitting seems very well studied; however, I can't seem to find a reference for this particular puzzle. Thanks in advance! You're imposing too many conditions. The space of polynomials of degree at most $n$ has dimension $n+1$. You are trying to impose $2n$ linear conditions on that space, which when $n > 1$ is more conditions than the dimension of your space. So there will be no solution in general. – Pete L. Clark May 19 '10 at 8:14 Based on a closer reading of your question, it sounds like you are aware of what I said in my previous comment. But then I can't figure out what you're asking: of course you can interpolate by an entire function, but not by a polynomial in general. – Pete L. Clark May 19 '10 at 8:16 Ah, I guess I was not clear at all. I'm not looking for a polynomial (because of what you just said), but rather an entire function with 0's at only those places (the $a_i$'s), and 1's at those places (the $b_i$'s). I know I can construct a Weierstrass entire function with the specified zeros, but can I force the entire function to have 1's at only those places? – Henry Yuen May 19 '10 at add comment 3 Answers active oldest votes If I read you right, you want an entire function that takes the values $0$ and $1$ at only finitely many (specified) points. This implies that the function must be a polynomial, by up vote 8 down Picard's great theorem, since there will be deleted neighbourhoods of infinity where the function misses two values. vote accepted Then, based on Pete Clark's comment above, I'm imposing too many conditions on the polynomial for it to exist (in general)? – Henry Yuen May 19 '10 at 9:08 1 Henry, yes you are imposing too stringent conditions. – Robin Chapman May 19 '10 at 9:16 add comment In your statement, you do not say explicitly, whether $p$ is aloowed to have other zeros, except those in the set $A$. If you want to construct an entire function with zeros and ones exactly prescribed, this is clearly impossible when your sets $A$ and $B$ are both finite. For the reason explained by Robin Chapman. up vote 1 down vote If you want ones to be exactly prescribed, and function having zeros on the set $A$, and perhaps other zeros, then this is possible: take $p(z)=1+(z-b_1)...(z-b_n)\exp g(z)$ and use interpolation for $g$. add comment Some very nice instances of your problem (but of course not all) are solved by so-called Shabbat polynomials, ie. by polyomials such that $p^{-1}[0,1]$ is a tree and such that $\{0,1,\ infty\}$ are the only critical values. Every planar tree can be realized by an (essentially unique up to affine transformation) Shabbat polynomial. You have thus a polynomial solution if your points $a_i$ and $b_i$ form a bipartition of the vertices of such a "Shabbat-tree". up vote 0 down vote Let me add that Shabbat polynomials are the simplest instances of "dessins d'enfants" defined by Grothendieck in the hope of understanding the absolute Galois group. (Suitably normalized Shabbat polynomials have algebraic coefficients and the action of the absolute Galois group preserves them and acts thus on the corresponding trees by permuting them.) add comment Not the answer you're looking for? Browse other questions tagged polynomials or ask your own question.
{"url":"http://mathoverflow.net/questions/25211/entire-function-interpolation-with-control-over-multiplicities-derivatives/104554","timestamp":"2014-04-21T13:09:04Z","content_type":null,"content_length":"63005","record_id":"<urn:uuid:ed66d08b-add4-4955-8e13-0c11cdac83aa>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
[OpenGL ES 2.0] Point Lighting gone haywire [Archive] - OpenGL Discussion and Help Forums 03-21-2010, 04:32 PM I'm trying to implement the fixed function pipeline from OpenGL in OpenGL ES 2. At the present moment i'm working on the point light per vertex. However, it appears that something is not quite right. I took the shader code from the Orange Book. As you can see in the images below, the light appears to be coming from more than one place, when in fact it should be in the center between the 3 cubes. The top cube is at (0.0,2.0,0.0). The bottom left cube is at (-2.0,-2.0,0.0). The bottom right cube is at (2.0,-2.0,0.0). The point light is at (0.0,0.0,0.0). The Camera is at (0.0,0.0,10.0). Vertex Shader precision highp float; attribute vec4 zecubevertex; attribute vec3 zecubenormal; attribute vec4 zecubecolor; varying vec4 colorVarying; attribute vec2 zetexcoord; varying vec2 texcoord; struct ZECamera { mat4 projection; mat4 view; mat4 mv; //ModelView mat4 mvp; mat4 mv_inv; //ModelView Inverse mat4 projection_inv; mat3 normal; mat4 texture; struct ZEMaterial{ vec4 ambient; vec4 diffuse; vec4 specular; vec4 emissive; float shininess; uniform ZECamera ZEMatrixData; uniform ZEMaterial TMaterial; const vec4 lightPos = vec4(0.0,0.0,0.0,1.0); void PointLight(const in vec3 normal, const in vec3 cam_pos, const in vec3 vertex, inout vec4 color){ float nDotVP; //normal . light direction float nDotHV; // normal . light halfvector float pf; // power factor float attenuation; float d; //distance from surface to light source vec3 VP; //direction from surface to light position vec3 halfVector; // direction of maximum highlights // Compute vector from surface to light position VP = (ZEMatrixData.view*lightPos).xyz - vertex; // Compute distance between surface and light position d = length(VP); // Normalize the vector from surface to light position VP = normalize(VP); // Compute attenuation attenuation = 1.0 / (d*1.0); // linear attenuation of 0.5 halfVector = normalize(VP + cam_pos); nDotVP = max(0.0, dot(normal, VP)); nDotHV = max(0.0, dot(normal, halfVector)); if (nDotVP == 0.0) pf = 0.0; pf = pow(nDotHV, TMaterial.shininess); color =TMaterial.ambient * attenuation; color += TMaterial.diffuse * nDotVP * attenuation; color += TMaterial.specular * pf * attenuation; void main() gl_Position = ZEMatrixData.mvp * zecubevertex; texcoord = zetexcoord; vec4 ecPos = ZEMatrixData.mv * zecubevertex; vec3 normal = normalize(zecubenormal*ZEMatrixData.normal); PointLight(normal,-normalize(ecPos.xyz),ecPos.xyz,colorVarying ); Fragment Shader precision highp float; varying highp vec4 colorVarying; varying highp vec2 texcoord; uniform sampler2D zetexcoord; void main() highp vec4 txcl = vec4(texture2D(zetexcoord,texcoord)); gl_FragColor = txcl * colorVarying; I'm clueless as to what i'm doing wrong. I've checked all the math calculations for the matrices, and believe these are correct. Perhaps i'm miscalculating something in the shaders? Thanks in advance. P.S.: I'm not sure which section is the most appropriate for this topic :whistle:
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-170392.html","timestamp":"2014-04-20T00:44:18Z","content_type":null,"content_length":"7698","record_id":"<urn:uuid:ed1c057e-bc8a-4263-9758-19c776409663>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Servicios Personalizados Links relacionados versión On-line ISSN 0717-9707 J. Chil. Chem. Soc. v.48 n.3 Concepción sep. 2003 J. Chil. Chem. Soc., 48, N 3 (2003) ISSN 0717-9324 DETERMINATION OF THE HENRY'S CONSTANT OF VOLATILE AND SEMI-VOLATILE ORGANIC COMPONUDS OF ENVIRONMENTAL CONCERN BY THE BAS (BATCH AIR STRIPPING) TECHNIQUE: A NEW MATHEMATICAL APPROACH. Roberto Bobadilla^1, Tom Huybrechts^2, Jo Dewulf^2 & Herman Van Langenhove^2. ^1 Departamento de Prevención de Riesgos y Medio Ambiente. Facultad de Ciencias de la Construcción y Ordenamiento Territorial. Universidad Tecnológica Metropolitana. Chile. ^2 Research Group of Environmental Organic Chemistry and Technology. Department of Organic Chemistry. Faculty of Agricultural and Applied Biological Sciences. University of Ghent. Belgium. (Received: May 15, 2002 - Accepted: January 2,2003) Chemical transfer between environmental compartments plays a key role in an adequate command and control of pollutants. The gas-liquid partitioning equilibrium constant, better known as the Henry's constant (K[H]) represents a crucial parameter in order to determine the environmental fate of chemicals and therefore, an accurate determination at ambient conditions is extremely important to assess the mentioned process. Within the experimental dynamic methods, the batch air stripping technique (BAS) has as major drawback the equilibrium condition among phases, which is hardly achieved in open natural systems. The present work, based on previously published mathematical models (4, 22), centers in the development of a new mathematical approach to determine K[H] by means of the BAS in non-equilibrium conditions through experimental and theoretical determinations of volumetric mass transfer coefficients (K[L]a) for volatile (1,1-DCE, ethylbenzene, p-xylene and toluene) and semivolatile (1,1,2-TCE, 1,2-DCP, penhylmethylether (anisole) and naphthalene) organic compounds of environmental concern. In order to validate the approach, values obtained were compared to the K[H] determined through the static EPICS (Equilibrium Partitioning in Closed Systems) method, confirming the calculated K[H] and ratifying the new approach. The determination of the fate and distribution of polluting chemical compounds in the different environmental compartments is an area of tremendous importance in the development of successful strategies for the solution of the problematic that entails environmental contamination. The organic compound transfer between the atmosphere and water bodies and vice versa, constitute important routes in the dispersion of polluting agents. The balance of distribution of a compound between a liquid and a gas phase is a process governed first by the affinities of the substance for both phases and by factors such as concentration, temperature, pH, reactivity and solubility, being this process characterized by the Henry's constant (K[H]). The determination of K[H] has been made in theoretical and experimental forms through diverse approaches. Between the methods used to determine K[H] a widely employed is the calculation through the ratio between the vapor pressure and the solubility based on the vapor pressure of the pure compound and on the activity coefficient, respectively, values that evidently vary in solution and which therefore transform the calculation into a rough approach. On the other hand, predictive theoretical methods developed with the use of data bases as well as semi-empirical relations like the UNIFAC method (8) or the use of continuum-solvation models (13, 20), presents the disadvantage to associate compound with different properties, generating a low correlation between the theoretical value and the experimental one and, in the specific case of UNIFAC, overestimating the dependency of the activity coefficient (g) with respect to temperature (14). Between experimental determination procedures, there are the static and dynamic methods. Static techniques are based on the determination of K[H] in closed systems. Diverse methodologies such as the method of Drozd & Novak (7) or the technique of multiple balance of Macaulife (17) have laid the foundation for the development of the more used static method, the EPICS (9). In general, static methods lose utility in the case of less volatile compounds due to detection limits. In the case of EPICS, studies have demonstrated a tendency to the overestimate K[H] (3). On the other hand, dynamic methods use open-flow systems, promoting phase exchange until equilibrium. The most used dynamic method is the Batch Air Stripping (BAS) (15). This technique requires as a fundamental condition, equilibrium between the interacting phases. Nevertheless, equilibrium is hardly achieved due to the heterogeneity of the liquid phase and the short time of exchange at the interphase, being this dependent on the type of compound used. Different authors have simplified the complex interactions that happen in two-phase systems through mathematical models such as the two-film model of Whitman (22) and the surface renewal model of Danckwerts (4). Both mentioned models gather the idea that the interaction of the phases happens through the creation of an interphase where concentration gradients are observed (Figure 1). In this interphase, a dynamic equilibrium establishes with a mass transfer proportional to the concentration gradient between the bulk phase and the interphase, giving place to mass transfer coefficients for each phase (k[G] and k[L] for the gaseous and liquid phases, respectively), proportional to molecular diffusion coefficients, D[i] (4, 22) and dependent of the dimensions of the formed interphase, specifically the length of the interphase z[i] and the exchange surface between phases, a. The main difference between both models lies in the definition of the mass transfer coefficients and particularly in the proportionality of these with the respective molecular diffusion coefficients. In the case of the two-film model, a direct proportionality between k[i] and D[i] exists. On the other hand, in the model of surface renewal is indicated that k[i] is proportional to the square root of D[i]. Recent studies indicate that depending on the limiting stage in the process of mass transfer, the relation varies between both models (16). Fig. 1. Schematic representation of a compound transfer between a gaseous and a liquid phase, through a double interphase. Dotted lines show the concentration gradient between the interphase and the bulk phase, process governed by the molecular diffusion phenomenon. C[G] : gaseous bulk phase concentration, C[GI] : gaseous interphase concentration, C[LI] : liquid interphase concentration, C[L] : liquid bulk phase concentration, K'[H] : dimensionless Henry's constant, relating C[G] and C[L]. k[G] and k[L] : mass transfer coefficients at the gaseous and liquid interphase, respectively, D[i]: molecular diffusion coefficient, z[i] : interphase length. Since it is experimentally impossible to determine the surface of exchange between phases in turbulent conditions, one resorts to volumetric mass transfer coefficients (k[i]a). As well, the experimental determination makes very difficult the calculation of the individual mass transfer coefficients for each phase, reason why the determination of a total volumetric mass transfer coefficient is made (K[L]a). Nevertheless, depending on the characteristics of the assayed compound (volatility and solubility, mainly) normally one of the processes of mass transfer, either the transfer at level of the gaseous or at the liquid interphase, constitutes the limiting critical stage in the process of mass transfer and therefore one of the mass transfer coefficients will dominate the whole transfer (k[G] > > k[L] or k[L] > > k[G]), giving rise to compounds whose transference is under gaseous or liquid control, directly associating the determination of K[L]a, in many cases, with the limiting mass transfer coefficient. It is evident that in the case of some compounds, the two phases can have an important incidence in the transfer process being both stages important. In the present work, a mathematical approach for the determination of K[H] through the incorporation of volumetric mass transfer coefficients is presented, quantifying the variation in concentration of volatile and semi-volatile organic compounds of environmental interest in a system of dynamic exchange between a liquid and a gaseous phase, by means of the BAS method. Materials and Methods 1,1-dichloroethane, 1,2-dichloropropane, p-dimethylbenzene (p-xylene) (Aldrich), methylphenylether (anisole), ethylbenzene, methylbenzene (toluene) (Fluka), 1,1,2-trichloroethane and naphtalene (Janssen) were investigated. Stock solution were prepared in methanol (purge and trap grade, Aldrich) stored in darkness at ­20C. Trichloroethene was used as internal standard (Aldrich). Experimental design. Used system BAS consisted in air from a high-pressure cylinder passed through a low-pressure regulator at a constant flow rate and pre-saturated with water in order to keep the liquid volume in the stripping vessel constant. The gas was passed through a coil submerged into a thermostatic bath in order to ensure temperature equilibration between phases, then introduced into the bottom of the stripping vessel through a glass frit. The system was maintained at 25 ± 0.1°C. The exit gas flow rate was measured by a soap bubble flow meter (10-120 ml/min). Homogeneity of the liquid phase was maintained with a magnetic stirrer. The general procedure consisted in the addition of a specific amount (between 25-200 ml) of a methanol stock solution containing one or more organic compounds to a fixed volume of ultrapure water (20-80 ml) in such a way that final concentration was below 10% of the water solubility of each added compound. Immediately after, air was bubbled through the vessel, at a constant flow rate, stripping the organic compounds out of the water phase. Sampling and Quantification. While stripping, 100 ml of the solution were sampled at regular time intervals. The obtained sample was placed in a 12 ml vial and 5 ml of internal standard were added. The vial was hermetically closed with a Teflon-faced rubber septum and placed overnight for equilibration, at a constant temperature (25 ± 1°C). Later, the concentration of the sample was quantified indirectly through the determination of the concentration of the gaseous phase of the vial by means of SPME (Solid-phase Microextraction) (2) using a 100 mm thickness non-bonded poly(dimethylsiloxane) coated fiber (SUPELCO), exposed during 30 min. Finally, the fiber was inserted into the injection port (5 min to 220C) of a gas chromatograph (VARIAN, model 3700) equipped with a column DB5 30 m x 0.53 mm, 1.5 micron (J&W Scientific) maintained at 40°C for 5 min and then ramped to 220°C at a rate of 10°C/min. The separated compounds were analyzed by a flame ionization detector (FID) maintained at 250C. Mathematical Approach. Based on the previous developed mathematical approach (21), homogeneity in the liquid phase is assumed and it is considered that the gaseous phase does not reach equilibrium before leaving the system. In that case, an expression that accounts for the mass transfer between phases must be incorporated in the traditional equation used to determine K'[H] (15) so that: where dC/dt is the infinitesimal variation of the concentration as a function of time, V corresponds to the volume (ml), K'[H] is the dimensionless Henry's constant, Q is the flow of an ideal gas (ml /s), C[i] is the concentration of the compound (mol/ml) and K[L]a is the total volumetric mass transfer coefficient (s^-1), that can be subdivided between K[L],[ ]the mass transfer coefficient (cm/s) and a, the specific transfer area (cm^2/ml). Integrating from initial conditions (t[0] and initial concentration C[0]) to a time t results: In the calculation of the previous formula all the variables are clearly known with the exception of K[L]a. Scarce values of K[L]a are found in literature for compounds of environmental interest. Some determinations have been made for highly volatile compounds (16) and rough approaches have been used to determine K'[H] under non-equilibrium conditions (15). Non-equilibrium conditions. Initially, ethylbenzene and anisole Henry's constant was determined experimentally assuming equilibrium conditions using the traditional approach (15). The obtained values named K'[H] [eq], were compared with reference values obtained in different studies through the EPICS method, named K'[H] [EPICS] (5, 6). This comparison demonstrated the inaccuracy of the traditional approach, showing underestimations as high as 70% respect to the reference value. Results are shown in Table 1. Of the used parameters a volume of 40 ml and a 50 ml/min airflow were selected for accomplishment of later experiments. Depending on the degree of volatility of the assayed compounds, the sampling interval varied between 2 and 10 minutes. Table 1. Comparison of K'[H] experimental values. * : slope from the curve ln(C[0]/C[t]) vs. time. K'[H eq] : dimensionless Henry's constant, determined experimentally through BAS assuming equilibrium. Y : Dewulf et al., 1995 # : Dewulf et al., 1999 Determination of K[L]a. Using the mathematical approach previously described (equation (2)) the total volumetric mass transfer coefficients (K[L]a) for ethylbenzene and anisole were determined experimentally. To do so, the natural logarithm of the variation of the relative concentration ln(C[t]/C[0]) was plotted as a function of time (Figure 2), by means of experimental quantification, solving the slope equation using validated dimensionless Henry's constants (5, 6, 19). (Table 2). Fig. 2. Linear semi-logarithmic relation of the concentration decrease with time obtained for anisole (A) and ethylbenzene (B). Mean values from three independent experiments are plotted. Table 2. Experimental determination of K[L]a. * : slope from the curve ln(C[0]/C[t]) vs. time of three independent experiments. ^§: arithmetic mean expressed with their standard deviation. +: volumetric mass transfer coefficient obtained from three independent experiments. Correction of K[L]a for volatile and semi-volatile compounds of environmental interest. Based on the relation between the molecular diffusion coefficients, D[i], and the mass transfer coefficient, k[i], the K[L]a were calculated for different organic compounds. D[i] was calculated theoretically using the approach of Hayduk & Laudie (10) for each compound to be analyzed. The experimental values of K[L]a obtained for ethylbenzene and anisole were used as reference in the calculation of total volumetric mass transfer coefficients by means of the relation described for compounds under liquid control (16, 18): The calculated values are listed in Table 3. Table 3. K[L]a Corrected values. Determination of the dimensionless Henry's constant, K'[H]. Keeping the same experimental conditions used for the determination of K[L]a, quantifications of the variation of the relative concentration in time were made for the volatile organic compounds: 1,1-dichloroethane, 1,2-dichloropropane, p-xylene and toluene and semi-volatile: 1,1,2-trichloroethane and naphthalene, using again the mathematical approach (equation (2)), including the corrected value of K[L]a for each compound and solving the slope equation for the dimensionless Henry's constant, named K'[H] [BAS]. The obtained value was compared with the one calculated through the traditional approach assuming equilibrium, K'[H] [eq], and with the value of reference from other studies where the EPICS method was used (5, 6), K'[H] [EPICS] (Table 4). Table 4. Comparison of K'[H] experimentally determined *. *: The results presented here were calculated using ethylbenzene as reference compound, only in the case of naphtalene anisole was used instead. K'[H BAS]: dimensionless Henry's constant, determined experimentally through BAS using the new mathematical approach (equation (2)). Results correspond to the experimental determination obtained from three independent experiments. ¥: arithmetic mean expressed with their standard deviation. § : Dewulf et al., 1995 # : Dewulf et al., 1999 + : Sander, 1999 Different methodologies are used for the determination of the Henry's constant, a crucial variable in the study of the dispersion of volatile and semi-volatile polluting agents between gaseous and liquid phases. This study focused on the validation of a new mathematical approach that incorporates the process of mass transfer in the traditional determination by means of the method of Batch Air Stripping (BAS). It is evident, when comparing the values obtained experimentally by means of the BAS technique, assuming equilibrium conditions, respect to the values of different studies obtained by means of the EPICS method (5, 6) used as reference in this study (Tables 1 and 4), that depending on the analyzed compound and the experimental conditions in which the BAS is carried out, the resulting K´[H] can be highly underestimated (until 70%), demonstrating why is necessary to incorporate in the traditional approach a mathematical expression who accounts for the mass transfer, which is an independent dynamic process of equilibrium between phases, doing feasible the determination of the Henry's constant for a wide range of semi-volatile organic compounds for which the static methods of determination are not useful (9) and where equilibrium in dynamic systems is evidently not reached. Nevertheless, this expression has the disadvantage to require the determination of mass transfer coefficients, highly specific variables that are dependent on system conditions and difficult to determine experimentally. In the present work, resorting on validated dimensionless Henry's constants (K'[H]) (5, 6, 19) total volumetric mass transfer coefficients (K[L]a) were experimentally determined. This determination was made for ethylbenzene (K'[H] = 0.270) and anisole (K'[H] = 0.015), organic compounds representative of the groups of whose mass transfer is under liquid or liquid and gaseous control, respectively. The total volumetric mass transfer coefficients experimentally obtained for ethylbenzene (K[L]a = 0.5809 min^-1) and anisole (0.0201 K[L]a = min^-1) are in the orders of magnitude previously described in other studies for organic compounds (18), adding them to the scarce published values. Later, the values of K[L]a for organic compounds of environmental interest were corrected by means of theoretically calculated molecular diffusion coefficients (10). This correction threw a great difference in the corrected K[L]a depending on the compound used as reference. In addition, close values for each compound were obtained respect to the value of reference ( Tables 3) explained by the similarity of the calculated molecular diffusion coefficients. When incorporating the corrected values of K[L]a in the experimental determination of K'[H] (denoted as K'[H] [BAS]) by means of the mathematical approach proposed in this study (equation (2)), it was observed that the experimental determination of K'[H BAS ]approaches the reference K'[H EPICS] value when the compound under study shares similar physical properties (volatility) with the reference compound. In the case of anisole, used as reference compound, it just allowed the calculation of naphthalene, which presents a similar volatility. In this particular case, a low exactitude in the determination of K´[H] is observed, which is attributable to the great structural difference of both compounds, which makes the approach to loose soundness. On the contrary, when using ethylbenzene as reference compound, the results obtained for all compounds assayed, with the sole exception of naphthalene, approached the value of reference significantly (Table 4). From the before mentioned, it is possible to conclude that the relation of proportionality between k[i] and D[i] is only valid for compounds with similar physical properties, which is directly related to the control stages of the mass transfer process. Only those compounds that are under liquid control (where the limiting stage of the process of mass transfer is at the liquid interphase) will be representative and able to be used as reference in the K[L]a correction for other compounds with equal limiting stage. On the matter, special interest presented the analysis of gaseous controlled compounds such as phenols and polycyclic hydrocarbons of greater complexity than naphthalene. Nevertheless, these compounds displayed disadvantages respect to the methodology used in this study in terms of their water solubility and sampling techniques. It is important to mention that a more accurate mathematical expression that accounts for the phenomena of mass transfer between a liquid and a gas phase in a dynamic system as it is the BAS technique, requires to consider factors such as hydrophobicity and polarity among others. For example, polar compounds of low solubility can present an anomalous behavior accumulating at level of the liquid interphase and generating the phenomenon of "interphase partitioning" (11) where the interphase constitutes an independent phase in which these compounds accumulate reaching higher concentrations than those of the liquid phase and establishing a new equilibrium with the gaseous phase, generating overestimations of the Henry's constant (1, 11). Finally, it is possible to conclude that the mathematical approach proposed in this study accounts for the process of mass transfer between phases extending considerably the use of BAS technique in the determination of the Henry's constant. 1. Atlas, E., Foster, R. & Glam, C. S.. Air-sea exchange of high molecular weight organic pollutants: laboratory studies. Environ. Sci. Technol. 16: 283-286. 1982. [ Links ] 2. Boyd-Boland, A. A., Chai, M., Luo, Y. Z., Zhang, Z., Yang, M. J., Pawliszyn, J. B. & Goreki, T.. New solvent-free sample preparation techniques based on fiber and polymer technologies. Environ. Sci. Technol. 28 (13): 569A-574A, 1994. [ Links ] 3. Chiang, P-C. , Hung , C-H., Mar, J. C. & Chang, E. E.. Henry's constants and mass transfer coefficient of halogenated organic pollutants in an air stripping packed column. Water. Sci. Tech. 38 (6): 287-294, 1998. [ Links ] 4. Danckwerts, P. V.. Significance of liquid-film coefficients in gas absorption. Ind. Eng. Chem. 43: 1460-1467, 1951. [ Links ] 5. Dewulf, J., Drijvers, D. & Van Langenhove, H.. Measurement of Henry"s law constant as function of the temperature and salinity for the low temperature range. Atmosph. Environ. 29: 323-331, 1995. [ Links ] 6. Dewulf, J., Van Langenhove, H. & Everaert, P.. Determination of Henry's law coefficients by combination of the equilibration partitioning in closed systems and solid-phase microextraction techniques. J. Chromatogr A. 830: 353-363, 1999. [ Links ] 7. Drozd, J. & Novak, J.. Quantitative head-space gas analysis by the standard additions method. Determination of hydrophilic solutes in equilibrated gas-aqueous liquid systems. J. Chromatogr. 136: 37-44, 1977. [ Links ] 8. Fredenslund A., Jones, R.L. & Prausnitz, J.M.. Group-Contribution Estimation of Activity Coefficients in Nonideal Liquid Mixtures, AIChE J., No. 6, Vol. 21, 1086-1099, 1975. [ Links ] 9. Gosset, J. M.. Measurement of Henry's law constants for C1 and C2 Chlorinated Hydrocarbons. Environ. Sci. Technol. 21 (2): 202- 208, 1987. [ Links ] 10. Hayduk, W. & Laudie, H.. Prediction of diffusion coefficients for non-electrolytes in dilute aqueous solution. AIChE. 20: 611-615, 1974. [ Links ] 11. Hoff, J. T., Mackay, D., Giliham R. & Shiu, W. Y.. Partitioning of organic chemicals at the air-water interface in environmental systems. Environ. Sci. Technol. 27 (10): 2174-2180. 1993. [ Links 12. Holmen, K., Liss, P.. Models for air-water gas transfer: An experimental investigation. Tellus. 36B: 92-100. 1984. [ Links ] 13. Klamt; A. Conductor-like Screening Model for Real Solvents: A New Approach to the Quantitative Calculation of Solvation Phenomena. J. Phys. Chem. 99(7); 2224-2235. 1995. [ Links ] 14. Leighton, D. T. & Calo, J. M.. Distribution coefficients of chlorinated hydrocarbons in dilute air-water systems for groundwater applications. J. Chem. Eng. Data. 36: 382-385, 1981. [ Links ] 15. Mackay, D., Shiu, W. Y. & Sutherland, R. P.. Determination of air- water Henry's law constants for hydrophobic pollutants. Environ. Sci. Technol. 13 (3): 333-337, 1979. [ Links ] 16. Mackay, D. & Yeun, A. T. K.. Mass transfer coefficient correlations for volatilization of organic solutes from water. Environ. Sci. Technol. 17 (4): 211-217, 1983. [ Links ] 17. Macaulife, D.. GC determination of solutes by multiple gas phase equilibration. Chem. Technol. 1: 46-51, 1971. [ Links ] 18. Roberts, P. V. & Dändliker, P. G.. Mass transfer of volatile organic contaminants from aqueous solution to the atmosphere during surface aeration. Environ. Sci. Technol. 17 (8): 484-489. 1983. [ Links ] 19. Sander, R.. Compilation of Henry's law constants for inorganic and organic species of potential importance in environmental chemistry. http://www.mpchmainz.mpg.de/~sander/res/henry.html. 1999. [ Links ] 20. Schüürmann, G. Prediction of Henry's law constant of benzene derivatives using quatum chemical continuum-solvation models. J. Comput. Chem. 21(1): 17-21. 2000. [ Links ] 21. Van Langenhove, H. Unpublished results. 1998. [ Links ] 22. Whitman, W., G.. The two-film theory of gas absorption. Chem. Metal. Eng. 29: 146-148, 1923. [ Links ]
{"url":"http://www.scielo.cl/scielo.php?script=sci_arttext&pid=S0717-97072003000300001&lng=es&nrm=iso&tlng=en","timestamp":"2014-04-19T17:36:54Z","content_type":null,"content_length":"52442","record_id":"<urn:uuid:6c2c5ddb-8267-4bca-9a16-34cff39866e6>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
Number of digits in n! up vote 0 down vote favorite Is there any efficient algorithm for counting number of digits in n! without actually calculating n!? [Addendum -- PLC]: I voted to close the question as "no longer relevant" because of Gerry's answer that one could just use Stirling's formula, as supplemented by a comment which referred to a formula of Kamenetsky given on the online journal of integer sequences. It seems now that it is an open question whether Kamenetsky's formula always (rather than just "most of the time") gives exactly the right answer, so there is more here than I had realized. A followup question which provides more context has been asked here: How good is Kamenetsky's formula for the number of digits in n-factorial? 6 This sounds like somebody's homework problem. – Andrej Bauer Mar 23 '10 at 9:39 6 I think marking this down is a bit harsh. It's not a poor, uninteresting or inappropriate question: experts in particular fields might have knowledge about this problem that is not widely documented or easily found. (+1 countermeasure) – Rhubbarb Mar 23 '10 at 17:14 4 I agree with rhubbarb that downvoting this and voting to close are a bit harsh. – Kevin H. Lin Mar 23 '10 at 23:38 2 In my opinion, reopening is not necessary, since a better phrased version of the question has already been asked: see the link in my addendum above. I agree that the downvoting now appears unjustified (I have just upvoted it, which more than compensates). – Pete L. Clark Mar 24 '10 at 6:21 add comment closed as off topic by Steve Huntsman, Harald Hanche-Olsen, Pete L. Clark, François G. Dorais♦, Reid Barton Mar 24 '10 at 2:10 Questions on MathOverflow are expected to relate to research level mathematics within the scope defined by the community. Consider editing the question or leaving comments for improvement if you believe the question can be reworded to fit within the scope. Read more about reopening questions here.If this question can be reworded to fit the rules in the help center, please edit the question. 1 Answer active oldest votes Counting the digits is the same problem as estimating the size (and then taking the logarithm to the base 10) so Stirling's formula (q.v.) and its refinements should do the trick. up vote 7 down vote 8 It means, "Google it." – Jonas Meyer Mar 23 '10 at 5:54 15 Good answer, Jonas. For the sake of completeness, q.v. abbreviates the Latin, "quod vide," which translates to "which see," which basically means, in the words of Casey Stengel (q.v.), "you could look it up." – Gerry Myerson Mar 23 '10 at 6:03 3 Sloane's research.att.com/~njas/sequences/A034886 gives the following note: Using Stirling's formula we can derive a formula, which is very fast to compute in practice: floor((log (2*pi*n)/2+n*(log(n)-log(e)))/log(10))+1. - Dmitry Kamenetsky (dkamen(AT)rsise.anu.edu.au), Jul 07 2008 – Douglas S. Stones Mar 23 '10 at 9:44 1 There was a discussion of the Kamenetsky formula in the Usenet newsgroup, sci.math, Subject: Number of digits in factorial, in January-February. The question was whether the formula was exact, or just an outstanding approximation. No proof of exactness was forthcoming, neither was any counterexample. – Gerry Myerson Mar 23 '10 at 22:11 1 See also an answer at Ask Dr. Math: mathforum.org/library/drmath/view/68245.html – lhf Mar 24 '10 at 2:00 show 1 more comment Not the answer you're looking for? Browse other questions tagged nt.number-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/19086/number-of-digits-in-n?sort=newest","timestamp":"2014-04-20T06:27:10Z","content_type":null,"content_length":"58428","record_id":"<urn:uuid:13167a4c-e274-412a-8b86-d3a81cde1bd5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
NITheP - National Institute • Aug 30 2011 WITS Theory Seminar Series The National Institute for Theoretical Physics, and the Centre for Theoretical Physics, School of Physics, would like to invite to its coming talk in the theory seminar series, entitled: "Theory of Quantum Space-Time" to be presented by Prof. Lane Hughston (Imperial College) Abstract: A generalised equivalence principle is put forward according to which space-time symmetries and internal quantum symmetries are indistinguishable before symmetry breaking. Based on this principle, a higher-dimensional extension of Minkowski space is proposed and its properties examined. In this scheme the structure of space-time is intrinsically quantum mechanical. It is shown that the causal geometry of such a quantum space-time possesses a rich hierarchical structure. The natural extension of the Poincare group to quantum space-time is investigated. In particular, we prove that the symmetry group of a quantum space-time is generated in general by a system of irreducible Killing tensors. When the symmetries of a quantum space-time are spontaneously broken, then the points of the quantum space-time can be interpreted as space-time valued operators. The generic point of a quantum space-time in the broken symmetry phase thus becomes a Minkowski space-time valued operator. Classical space-time emerges as a map from quantum space-time to Minkowski space. It is shown that the general such map satisfying appropriate causality-preserving conditions ensuring linearity and Poincare invariance is necessarily a density matrix. Date: Tuesday, 30th August 2011 Venue: Frank Nabarro Lecture Theatre P216 Time: 13.20 - 14.10 Yours sincerely, Alan S. Cornell School of Physics University of the Witwatersrand Private Bag 3 Wits 2050 South Africa
{"url":"http://www.nithep.ac.za/3ls.htm","timestamp":"2014-04-16T19:29:35Z","content_type":null,"content_length":"12743","record_id":"<urn:uuid:bcee7b9a-3e36-490e-9f49-c9d59ecdb195>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
You are in a maze of twisty little equations, all alike. A long calculation is like playing a computer game. This is actually somewhat true, for the sort of computer games where you realize that you've gone down the wrong path and have to backtrack. In both cases you're trying to find your way through a maze of twisty little passages, all alike , from the beginning of the maze (the beginning of the game, or the state of having no knowledge) to the end of the maze (the end of the game, or the result of the computation). There are in both cases a small number of ways to succeed (to successfully reach the end of the game or the result of the computation), but usually more than one. (In the case of computation, "how many ways" there are to do a particular computation depends very strongly on what sorts of computations you consider to be "the same".) But there are a much larger number of ways to fail; the problem is to find a path to "success" in some reasonable amount of time. Neither breadth-first search (where one does all the possible one-step computations, then all the possible two-step computations, and so on) nor depth-first search (where one follows a computation to its logical end, and only then tries something else) are really practical in most cases; it feels like a lot of learning "how to do mathematics" is learning when to give up on one particular approach to a problem and try something completely different. And a large part of that is to get rid of the qualifier "all alike" in my title. If equations look "all alike", one has no way of knowing what to do; mathematical education is as much about developing a facility for knowing what sort of mathematical expressions are amenable to what sort of tools as it is about learning how to use those tools. 3 comments: John Armstrong said... Long calculations usually don't have walkthroughs easily available on the net, though. That said, our course is clear: we must find some mathematical object that can be appropriately named a "zork". I don't run out of quarters when I do calculations. :-) Mark said... This really rang true with me. So much so, that I thought it would be fun to imagine theorem proving as a text adventure. I posted the result on my blog.
{"url":"http://godplaysdice.blogspot.com/2007/10/you-are-in-maze-of-twisty-little.html","timestamp":"2014-04-16T05:10:57Z","content_type":null,"content_length":"57418","record_id":"<urn:uuid:aaeb73ae-1240-40f7-98a8-e6336db3ad7b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
search results Expand all Collapse all Results 151 - 175 of 217 151. CJM 2003 (vol 55 pp. 292) Infinitely Divisible Laws Associated with Hyperbolic Functions The infinitely divisible distributions on $\mathbb{R}^+$ of random variables $C_t$, $S_t$ and $T_t$ with Laplace transforms $$ \left( \frac{1}{\cosh \sqrt{2\lambda}} \right)^t, \quad \left( \frac{\ sqrt{2\lambda}}{\sinh \sqrt{2\lambda}} \right)^t, \quad \text{and} \quad \left( \frac{\tanh \sqrt{2\lambda}}{\sqrt{2\lambda}} \right)^t $$ respectively are characterized for various $t>0$ in a number of different ways: by simple relations between their moments and cumulants, by corresponding relations between the distributions and their L\'evy measures, by recursions for their Mellin transforms, and by differential equations satisfied by their Laplace transforms. Some of these results are interpreted probabilistically via known appearances of these distributions for $t=1$ or $2$ in the description of the laws of various functionals of Brownian motion and Bessel processes, such as the heights and lengths of excursions of a one-dimensional Brownian motion. The distributions of $C_1$ and $S_2$ are also known to appear in the Mellin representations of two important functions in analytic number theory, the Riemann zeta function and the Dirichlet $L$-function associated with the quadratic character modulo~4. Related families of infinitely divisible laws, including the gamma, logistic and generalized hyperbolic secant distributions, are derived from $S_t$ and $C_t$ by operations such as Brownian subordination, exponential tilting, and weak limits, and characterized in various ways. Keywords:Riemann zeta function, Mellin transform, characterization of distributions, Brownian motion, Bessel process, Lévy process, gamma process, Meixner process Categories:11M06, 60J65, 60E07 152. CJM 2003 (vol 55 pp. 225) Short Kloosterman Sums for Polynomials over Finite Fields We extend to the setting of polynomials over a finite field certain estimates for short Kloosterman sums originally due to Karatsuba. Our estimates are then used to establish some uniformity of distribution results in the ring $\mathbb{F}_q[x]/M(x)$ for collections of polynomials either of the form $f^{-1}g^{-1}$ or of the form $f^{-1}g^{-1}+afg$, where $f$ and $g$ are polynomials coprime to $M$ and of very small degree relative to $M$, and $a$ is an arbitrary polynomial. We also give estimates for short Kloosterman sums where the summation runs over products of two irreducible polynomials of small degree. It is likely that this result can be used to give an improvement of the Brun-Titchmarsh theorem for polynomials over finite fields. Categories:11T23, 11T06 153. CJM 2003 (vol 55 pp. 331) The Maximum Number of Points on a Curve of Genus $4$ over $\mathbb{F}_8$ is $25$ We prove that the maximum number of rational points on a smooth, geometrically irreducible genus 4 curve over the field of 8 elements is 25. The body of the paper shows that 27 points is not possible by combining techniques from algebraic geometry with a computer verification. The appendix shows that 26 points is not possible by examining the zeta functions. Categories:11G20, 14H25 154. CJM 2002 (vol 54 pp. 1305) Continued Fractions Associated with $\SL_3 (\mathbf{Z})$ and Units in Complex Cubic Fields Continued fractions associated with $\GL_3 (\mathbf{Z})$ are introduced and applied to find fundamental units in a two-parameter family of complex cubic fields. Keywords:fundamental units, continued fractions, diophantine approximation, symmetric space Categories:11R27, 11J70, 11J13 155. CJM 2002 (vol 54 pp. 1202) Octahedral Galois Representations Arising From $\mathbf{Q}$-Curves of Degree $2$ Generically, one can attach to a $\mathbf{Q}$-curve $C$ octahedral representations $\rho\colon\Gal(\bar{\mathbf{Q}}/\mathbf{Q})\rightarrow\GL_2(\bar\mathbf{F}_3)$ coming from the Galois action on the $3$-torsion of those abelian varieties of $\GL_2$-type whose building block is $C$. When $C$ is defined over a quadratic field and has an isogeny of degree $2$ to its Galois conjugate, there exist such representations $\rho$ having image into $\GL_2(\mathbf{F}_9)$. Going the other way, we can ask which $\mod 3$ octahedral representations $\rho$ of $\Gal(\bar\mathbf{Q}/\mathbf{Q})$ arise from $\mathbf{Q}$-curves in the above sense. We characterize those arising from quadratic $\mathbf{Q}$-curves of degree $2$. The approach makes use of Galois embedding techniques in $\GL_2(\ mathbf{F}_9)$, and the characterization can be given in terms of a quartic polynomial defining the $\mathcal{S}_4$-extension of $\mathbf{Q}$ corresponding to the projective representation $\bar{\ Categories:11G05, 11G10, 11R32 156. CJM 2002 (vol 54 pp. 673) Local $L$-Functions for Split Spinor Groups We study the local $L$-functions for Levi subgroups in split spinor groups defined via the Langlands-Shahidi method and prove a conjecture on their holomorphy in a half plane. These results have been used in the work of Kim and Shahidi on the functorial product for $\GL_2 \times \GL_3$. 157. CJM 2002 (vol 54 pp. 828) Spherical Functions for the Semisimple Symmetric Pair $\bigl( \Sp(2,\mathbb{R}), \SL(2,\mathbb{C}) \bigr)$ Let $\pi$ be an irreducible generalized principal series representation of $G = \Sp(2,\mathbb{R})$ induced from its Jacobi parabolic subgroup. We show that the space of algebraic intertwining operators from $\pi$ to the representation induced from an irreducible admissible representation of $\SL(2,\mathbb{C})$ in $G$ is at most one dimensional. Spherical functions in the title are the images of $K$-finite vectors by this intertwining operator. We obtain an integral expression of Mellin-Barnes type for the radial part of our spherical function. Categories:22E45, 11F70 158. CJM 2002 (vol 54 pp. 468) Mahler's Measure and the Dilogarithm (I) An explicit formula is derived for the logarithmic Mahler measure $m(P)$ of $P(x,y) = p(x)y - q(x)$, where $p(x)$ and $q(x)$ are cyclotomic. This is used to find many examples of such polynomials for which $m(P)$ is rationally related to the Dedekind zeta value $\zeta_F (2)$ for certain quadratic and quartic fields. Categories:11G40, 11R06, 11Y35 159. CJM 2002 (vol 54 pp. 449) Théorème de Vorono\"\i\ dans les espaces symétriques On d\'emontre un th\'eor\`eme de Vorono\"\i\ (caract\'erisation des maxima locaux de l'invariant d'Hermite) pour les familles de r\'eseaux param\'etr\'ees par les espaces sym\'etriques irr\'e\ -ductibles non exceptionnels de type non compact. We prove a theorem of Vorono\"\i\ type (characterisation of local maxima of the Hermite invariant) for the lattices parametrized by irreducible nonexceptional symmetric spaces of noncompact type. Keywords:réseaux, théorème de Vorono\"\i, espaces symétriques Categories:11H06, 53C35 160. CJM 2002 (vol 54 pp. 352) On Connected Components of Shimura Varieties We study the cohomology of connected components of Shimura varieties $S_{K^p}$ coming from the group $\GSp_{2g}$, by an approach modeled on the stabilization of the twisted trace formula, due to Kottwitz and Shelstad. More precisely, for each character $\olomega$ on the group of connected components of $S_{K^p}$ we define an operator $L(\omega)$ on the cohomology groups with compact supports $H^i_c (S_{K^p}, \olbbQ_\ell)$, and then we prove that the virtual trace of the composition of $L(\omega)$ with a Hecke operator $f$ away from $p$ and a sufficiently high power of a geometric Frobenius $\Phi^r_p$, can be expressed as a sum of $\omega$-{\em weighted} (twisted) orbital integrals (where $\omega$-{\em weighted} means that the orbital integrals and twisted orbital integrals occuring here each have a weighting factor coming from the character $\olomega$). As the crucial step, we define and study a new invariant $\alpha_1 (\gamma_0; \gamma, \delta)$ which is a refinement of the invariant $\alpha (\gamma_0; \gamma, \delta)$ defined by Kottwitz. This is done by using a theorem of Reimann and Zink. Categories:14G35, 11F70 161. CJM 2002 (vol 54 pp. 417) Slim Exceptional Sets for Sums of Cubes We investigate exceptional sets associated with various additive problems involving sums of cubes. By developing a method wherein an exponential sum over the set of exceptions is employed explicitly within the Hardy-Littlewood method, we are better able to exploit excess variables. By way of illustration, we show that the number of odd integers not divisible by $9$, and not exceeding $X$, that fail to have a representation as the sum of $7$ cubes of prime numbers, is $O(X^{23/36+\eps})$. For sums of eight cubes of prime numbers, the corresponding number of exceptional integers is $O(X^{11/36+\eps})$. Keywords:Waring's problem, exceptional sets Categories:11P32, 11P05, 11P55 162. CJM 2002 (vol 54 pp. 263) Intégrales orbitales pondérées sur les algèbres de Lie : le cas $p$-adique Soit $G$ un groupe réductif connexe défini sur un corps $p$-adique $F$ et $\ggo$ son algèbre de Lie. Les intégrales orbitales pondérées sur $\ggo(F)$ sont des distributions $J_M(X,f)$---$f$ est une fonction test---indexées par les sous-groupes de Lévi $M$ de $G$ et les éléments semi-simples réguliers $X \in \mgo(F)\cap \ggo_{\reg}$. Leurs analogues sur $G$ sont les principales composantes du côté géométrique des formules des traces locale et globale d'Arthur. Si $M=G$, on retrouve les intégrales orbitales invariantes qui, vues comme fonction de $X$, sont bornées sur $\mgo(F)\cap \ggo_{\reg}$~: c'est un résultat bien connu de Harish-Chandra. Si $M \subsetneq G$, les intégrales orbitales pondérées explosent au voisinage des éléments singuliers. Nous construisons dans cet article de nouvelles intégrales orbitales pondérées $J_M^b(X,f)$, égales à $J_M(X,f)$ à un terme correctif près, qui tout en conservant les principales propriétés des précédentes (comportement par conjugaison, développement en germes, {\it etc.}) restent bornées quand $X$ parcourt $\mgo(F)\cap\ggo_{\reg}$. Nous montrons également que les intégrales orbitales pondérées globales, associées à des éléments semi-simples réguliers, se décomposent en produits de ces nouvelles intégrales locales. Categories:22E35, 11F70 163. CJM 2002 (vol 54 pp. 92) Comparisons of General Linear Groups and their Metaplectic Coverings I We prepare for a comparison of global trace formulas of general linear groups and their metaplectic coverings. In particular, we generalize the local metaplectic correspondence of Flicker and Kazhdan and describe the terms expected to appear in the invariant trace formulas of the above covering groups. The conjectural trace formulas are then placed into a form suitable for comparison. Categories:11F70, 11F72, 22E50 164. CJM 2002 (vol 54 pp. 71) Small Prime Solutions of Quadratic Equations Let $b_1,\dots,b_5$ be non-zero integers and $n$ any integer. Suppose that $b_1 + \cdots + b_5 \equiv n \pmod{24}$ and $(b_i,b_j) = 1$ for $1 \leq i < j \leq 5$. In this paper we prove that \begin {enumerate}[(ii)] \item[(i)] if $b_j$ are not all of the same sign, then the above quadratic equation has prime solutions satisfying $p_j \ll \sqrt{|n|} + \max \{|b_j|\}^{20+\ve}$; and \item[(ii)] if all $b_j$ are positive and $n \gg \max \{|b_j|\}^{41+ \ve}$, then the quadratic equation $b_1 p_1^2 + \cdots + b_5 p_5^2 = n$ is soluble in primes $p_j$. \end{enumerate} Categories:11P32, 11P05, 11P55 165. CJM 2001 (vol 53 pp. 1194) Explicit Upper Bounds for Residues of Dedekind Zeta Functions and Values of $L$-Functions at $s=1$, and Explicit Lower Bounds for Relative Class Numbers of $\CM$-Fields We provide the reader with a uniform approach for obtaining various useful explicit upper bounds on residues of Dedekind zeta functions of numbers fields and on absolute values of values at $s=1$ of $L$-series associated with primitive characters on ray class groups of number fields. To make it quite clear to the reader how useful such bounds are when dealing with class number problems for $\CM$-fields, we deduce an upper bound for the root discriminants of the normal $\CM$-fields with (relative) class number one. Keywords:Dedekind zeta functions, $L$-functions, relative class numbers, $\CM$-fields Categories:11R42, 11R29 166. CJM 2001 (vol 53 pp. 897) On Some Exponential Equations of S.~S.~Pillai In this paper, we establish a number of theorems on the classic Diophantine equation of S.~S.~Pillai, $a^x-b^y=c$, where $a$, $b$ and $c$ are given nonzero integers with $a,b \geq 2$. In particular, we obtain the sharp result that there are at most two solutions in positive integers $x$ and $y$ and deduce a variety of explicit conditions under which there exists at most a single such solution. These improve or generalize prior work of Le, Leveque, Pillai, Scott and Terai. The main tools used include lower bounds for linear forms in the logarithms of (two) algebraic numbers and various elementary arguments. Categories:11D61, 11D45, 11J86 167. CJM 2001 (vol 53 pp. 866) Inverse Problems for Partition Functions Let $p_w(n)$ be the weighted partition function defined by the generating function $\sum^\infty_{n=0}p_w(n)x^n=\prod^\infty_{m=1} (1-x^m)^{-w(m)}$, where $w(m)$ is a non-negative arithmetic function. Let $P_w(u)=\sum_{n\le u}p_w(n)$ and $N_w(u)=\sum_{n\le u}w(n)$ be the summatory functions for $p_w(n)$ and $w(n)$, respectively. Generalizing results of G.~A.~Freiman and E.~E.~Kohlbecker, we show that, for a large class of functions $\Phi(u)$ and $\lambda(u)$, an estimate for $P_w(u)$ of the form $\log P_w(u)=\Phi(u)\bigl\{1+O(1/\lambda(u)\bigr)\bigr\}$ $(u\to\ infty)$ implies an estimate for $N_w(u)$ of the form $N_w(u)=\Phi^\ast(u)\bigl\{1+O\bigl(1/\log\lambda(u)\bigr)\bigr\}$ $(u\to\infty)$ with a suitable function $\Phi^\ast(u)$ defined in terms of $\ Phi(u)$. We apply this result and related results to obtain characterizations of the Riemann Hypothesis and the Generalized Riemann Hypothesis in terms of the asymptotic behavior of certain weighted partition functions. Categories:11P82, 11M26, 40E05 168. CJM 2001 (vol 53 pp. 449) Descending Rational Points on Elliptic Curves to Smaller Fields In this paper, we study the Mordell-Weil group of an elliptic curve as a Galois module. We consider an elliptic curve $E$ defined over a number field $K$ whose Mordell-Weil rank over a Galois extension $F$ is $1$, $2$ or $3$. We show that $E$ acquires a point (points) of infinite order over a field whose Galois group is one of $C_n \times C_m$ ($n= 1, 2, 3, 4, 6, m= 1, 2$), $D_n \times C_m$ ($n= 2, 3, 4, 6, m= 1, 2$), $A_4 \times C_m$ ($m=1,2$), $S_4 \times C_m$ ($m=1,2$). Next, we consider the case where $E$ has complex multiplication by the ring of integers $\o$ of an imaginary quadratic field $\k$ contained in $K$. Suppose that the $\o$-rank over a Galois extension $F$ is $1$ or $2$. If $\k\neq\Q(\sqrt{-1})$ and $\Q(\sqrt{-3})$ and $h_{\k}$ (class number of $\k$) is odd, we show that $E$ acquires positive $\o$-rank over a cyclic extension of $K$ or over a field whose Galois group is one of $\SL_2(\Z/3\Z)$, an extension of $\SL_2(\Z/3\Z)$ by $\Z/2\Z$, or a central extension by the dihedral group. Finally, we discuss the relation of the above results to the vanishing of $L$-functions. Categories:11G05, 11G40, 11R32, 11R33 169. CJM 2001 (vol 53 pp. 414) Nombres premiers de la forme $\floor{n^c}$ For $c>1$ we denote by $\pi_c(x)$ the number of integers $n \leq x$ such that $\floor{n^c}$ is prime. In 1953, Piatetski-Shapiro has proved that $\pi_c(x) \sim \frac{x}{c\log x}$, $x \rightarrow +\ infty$ holds for $c<12/11$. Many authors have extended this range, which measures our progress in exponential sums techniques. In this article we obtain $c < 1.16117\dots\;$. Categories:11L07, 11L20, 11N05 170. CJM 2001 (vol 53 pp. 310) On a Product Related to the Cubic Gauss Sum, III We have seen, in the previous works [5], [6], that the argument of a certain product is closely connected to that of the cubic Gauss sum. Here the absolute value of the product will be Keywords:Gauss sum, Lagrange resolvent Categories:11L05, 11R33 171. CJM 2001 (vol 53 pp. 434) Values of the Dedekind Eta Function at Quadratic Irrationalities: Corrigendum Habib Muzaffar of Carleton University has pointed out to the authors that in their paper [A] only the result \[ \pi_{K,d}(x)+\pi_{K^{-1},d}(x)=\frac{1}{h(d)}\frac{x}{\log x}+O_{K,d}\Bigl(\frac {x} {\log^2x}\Bigr) \] follows from the prime ideal theorem with remainder for ideal classes, and not the stronger result \[ \pi_{K,d}(x)=\frac{1}{2h(d)}\frac{x}{\log x}+O_{K,d}\Bigl(\frac {x}{\log^2x} \Bigr) \] stated in Lemma~5.2. This necessitates changes in Sections~5 and 6 of [A]. The main results of the paper are not affected by these changes. It should also be noted that, starting on page 177 of [A], each and every occurrence of $o(s-1)$ should be replaced by $o(1)$. Sections~5 and 6 of [A] have been rewritten to incorporate the above mentioned correction and are given below. They should replace the original Sections~5 and 6 of [A]. Keywords:Dedekind eta function, quadratic irrationalities, binary quadratic forms, form class group Categories:11F20, 11E45 172. CJM 2001 (vol 53 pp. 244) On the Tempered Spectrum of Quasi-Split Classical Groups II We determine the poles of the standard intertwining operators for a maximal parabolic subgroup of the quasi-split unitary group defined by a quadratic extension $E/F$ of $p$-adic fields of characteristic zero. We study the case where the Levi component $M \simeq \GL_n (E) \times U_m (F)$, with $n \equiv m$ $(\mod 2)$. This, along with earlier work, determines the poles of the local Rankin-Selberg product $L$-function $L(s, \tau' \times \tau)$, with $\tau'$ an irreducible unitary supercuspidal representation of $\GL_n (E)$ and $\tau$ a generic irreducible unitary supercuspidal representation of $U_m (F)$. The results are interpreted using the theory of twisted endoscopy. Categories:22E50, 11S70 173. CJM 2001 (vol 53 pp. 33) Merit Factors of Polynomials Formed by Jacobi Symbols We give explicit formulas for the $L_4$ norm (or equivalently for the merit factors) of various sequences of polynomials related to the polynomials $$ f(z) := \sum_{n=0}^{N-1} \leg{n}{N} z^n. $$ and $$ f_t(z) = \sum_{n=0}^{N-1} \leg{n+t}{N} z^n. $$ where $(\frac{\cdot}{N})$ is the Jacobi symbol. Two cases of particular interest are when $N = pq$ is a product of two primes and $p = q+2$ or $p = q+4$. This extends work of H{\o}holdt, Jensen and Jensen and of the authors. This study arises from a number of conjectures of Erd\H{o}s, Littlewood and others that concern the norms of polynomials with $-1,1$ coefficients on the disc. The current best examples are of the above form when $N$ is prime and it is natural to see what happens for composite~$N$. Keywords:Character polynomial, Class Number, $-1,1$ coefficients, Merit factor, Fekete polynomials, Turyn Polynomials, Littlewood polynomials, Twin Primes, Jacobi Symbols Categories:11J54, 11B83, 12-04 174. CJM 2001 (vol 53 pp. 122) A Truncated Integral of the Poisson Summation Formula Let $G$ be a reductive algebraic group defined over $\bQ$, with anisotropic centre. Given a rational action of $G$ on a finite-dimensional vector space $V$, we analyze the truncated integral of the theta series corresponding to a Schwartz-Bruhat function on $V(\bA)$. The Poisson summation formula then yields an identity of distributions on $V(\bA)$. The truncation used is due to Arthur. Categories:11F99, 11F72 175. CJM 2001 (vol 53 pp. 98) On the Curves Associated to Certain Rings of Automorphic Forms In a 1987 paper, Gross introduced certain curves associated to a definite quaternion algebra $B$ over $\Q$; he then proved an analog of his result with Zagier for these curves. In Gross' paper, the curves were defined in a somewhat {\it ad hoc\/} manner. In this article, we present an interpretation of these curves as projective varieties arising from graded rings of automorphic forms on $B^\ times$, analogously to the construction in the Satake compactification. To define such graded rings, one needs to introduce a ``multiplication'' of automorphic forms that arises from the representation ring of $B^\times$. The resulting curves are unions of projective lines equipped with a collection of Hecke correspondences. They parametrize two-dimensional complex tori with quaternionic multiplication. In general, these complex tori are not abelian varieties; they are algebraic precisely when they correspond to $\CM$ points on these curves, and are thus isogenous to a product $E \times E$, where $E$ is an elliptic curve with complex multiplication. For these $\CM$ points one can make a relation between the action of the $p$-th Hecke operator and Frobenius at $p$, similar to the well-known congruence relation of Eichler and Shimura. Previous 1 ... 6 7 8 9 Next
{"url":"http://cms.math.ca/cjm/msc/11?page=7","timestamp":"2014-04-17T06:57:51Z","content_type":null,"content_length":"70025","record_id":"<urn:uuid:39aac418-0855-460a-8873-cc3ca779cb9e>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
10.27 Updatable Binary Trees—library(trees) This libary module provides updatable binary trees with logarithmic access time. Exported predicates: gen_label(?Index, +Tree, ?Value) assumes that Tree is a proper binary tree, and is true when Value is the Index-th element in Tree. Can be used to enumerate all Values by ascending Index. get_label(+Index, +Tree, -Label) treats the tree as an array of N elements and returns the Index-th. If Index < 1 or > N it simply fails, there is no such element. As Tree need not be fully instantiated, and is potentially unbounded, we cannot enumerate Indices. list_to_tree(+List, -Tree) takes a given List of N elements and constructs a binary Tree where get_label(K, Tree, Lab) <=> Lab is the Kth element of List. map_tree(:Pred, +OldTree, ?NewTree) is true when OldTree and NewTree are binary trees of the same shape and Pred(Old,New) is true for corresponding elements of the two trees. put_label(+Index, +OldTree, -Label, -NewTree) constructs a new tree the same shape as the old which moreover has the same elements except that the Index-th one is Label. Unlike the "arrays" of library(arrays), OldTree is not modified and you can hang on to it as long as you please. Note that O(lg N) new space is needed. put_label(+Index, +OldTree, -OldLabel, -NewTree, +NewLabel) is true when OldTree and NewTree are trees of the same shape having the same elements except that the Index-th element of OldTree is OldLabel and the Index-th element of NewTree is NewLabel. You can swap the <Tree,Label> argument pairs if you like, it makes no difference. tree_size(+Tree, -Size) calculates the number of elements in the Tree. All trees made by list_to_tree/2 that are the same size have the same shape. tree_to_list(+Tree, -List) is the converse operation to list_to_tree/2. Any mapping or checking operation can be done by converting the tree to a list, mapping or checking the list, and converting the result, if any, back to a tree. It is also easier for a human to read a list than a tree, as the order in the tree goes all over the place. Send feedback on this subject.
{"url":"http://sicstus.sics.se/sicstus/docs/latest/html/sicstus.html/lib_002dtrees.html","timestamp":"2014-04-17T01:51:23Z","content_type":null,"content_length":"6718","record_id":"<urn:uuid:b8573d34-d744-44fd-befd-aa74d8cf0943>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
Is a double really unsuitable for money? up vote 44 down vote favorite I always tell in c# a variable of type double is not suitable for money. All weird things could happen. But I can't seem to create an example to demonstrate some of these issues. Can anyone provide such an example? (edit; this post was originally tagged C#; some replies refer to specific details of decimal, which therefore means System.Decimal). (edit 2: I was specific asking for some c# code, so I don't think this is language agnostic only) c# language-agnostic decimal money add comment 6 Answers active oldest votes Very, very unsuitable. Use decimal. double x = 3.65, y = 0.05, z = 3.7; Console.WriteLine((x + y) == z); // false up vote 72 down vote accepted (example from Jon's page here - recommended reading ;-p) 22 Darn it, if I'd known I had an example on my own page, I wouldn't have come up with a different one ;) – Jon Skeet Nov 25 '08 at 8:55 4 But hey, 2 examples is better than 1... – Marc Gravell♦ Nov 25 '08 at 8:56 add comment You will get odd errors effectively caused by rounding. In addition, comparisons with exact values are extremely tricky - you usually need to apply some sort of epsilon to check for the actual value being "near" a particular one. Here's a concrete example: using System; class Test up vote 23 down { vote static void Main() double x = 0.1; double y = x + x + x; Console.WriteLine(y == 0.3); // Prints False add comment Since decimal uses a scaling factor of multiples of 10, numbers like 0.1 can be represented exactly. In essence, the decimal type represents this as 1 / 10 ^ 1, whereas a double would represent this as 104857 / 2 ^ 20 (in reality it would be more like really-big-number / 2 ^ 1023). up vote 5 down vote A decimal can exactly represent any base 10 value with up to 28/29 significant digits (like 0.1). A double can't. 3 Decimal doesn't have 96 significant digits. It has 96 significant bits. Decimal has around 28 significant digits. – Jon Skeet Nov 25 '08 at 9:21 In which language are you speaking of the decimal type? Or do all languages that support this type support it in exactly the same way? Might want to specify. – Adam Davis Nov 25 '08 at 9:25 @Adam - this post originally had the C# tag, so we are talking about System.Decimal specifically. – Marc Gravell♦ Nov 25 '08 at 9:29 Oops, well spotted Jon! Corrected. Adam, I'm talking C#, as per the question. Do any other languages have a type called decimal? – Richard Poole Nov 25 '08 at 9:30 @Richard: Well, all languages that is based on .NET does, since System.Decimal is not a unique C# type, it is a .NET type. – awe Jan 12 '10 at 9:51 show 1 more comment Yes it's unsuitable. If I remember correctly double has about 17 significant numbers, so normally rounding errors will take place far behind the decimal point. Most financial software uses 4 decimals behind the decimal point, that leaves 13 decimals to work with so the maximum number you can work with for single operations is still very much higher than the USA national debt. But rounding errors will add up over time. If your software runs for a long time you'll eventually start losing cents. Certain operations will make this worse. For example adding large amounts to small amounts will cause a significant loss of precision. up vote 5 You need fixed point datatypes for money operations, most people don't mind if you lose a cent here and there but accountants aren't like most people.. down vote According to this site http://msdn.microsoft.com/en-us/library/678hzkk9.aspx Doubles actually have 15 to 16 significant digits instead of 17. @Jon Skeet decimal is more suitable than double because of its higher precision, 28 or 29 significant decimals. That means less chance of accumulated rounding errors becoming significant. Fixed point datatypes (ie integers that represent cents or 100th of a cent like I've seen used) like Boojum mentions are actually better suited. Note that System.Decimal, the suggested type to use in .NET, is still a floating point type - but it's a floating decimal point rather than a floating binary point. That's more important than having fixed precision in most cases, I suspect. – Jon Skeet Nov 25 '08 at 9:19 1 That's precisely the issue. Currency is nowadays typically decimal. Back before the US stock markets decimalized, however, binary fractions were in use (I started seeing 256ths and even 1024ths at one point) and so doubles would have been more appropriate than decimals for stock prices! Pre-decimalization pounds sterling would have been a real pain though at 960 farthings to the pound; that's neither decimal nor binary, but it certainly provides a generous variety of prime factors for easy fractions. – Jeffrey Hantin Nov 10 '10 at 22:47 1 Even more important than just beind a decimal floating point, decimal the expression x + 1 != x is always true. Also, it retains precision, so you can tell the difference between 1 and 1.0. – Gabe Mar 16 '11 at 14:00 @Gabe: Those properties are only meaningful if one scales one's values so that a value of 1 represents the smallest currency unit. A Decimal value may lose precision to the right of the decimal point without indicating any problem. – supercat Mar 1 '13 at 22:51 add comment My understanding is that most financial systems express currency using integers -- i.e., counting everything in cents. up vote 2 IEEE double precision actually can represent all integers exactly in the range -2^53 through +2^53. (Hacker's Delight, pg. 262) If you use only addition, subtraction and multiplication, down vote and keep everything to integers within this range then you should see no loss of precision. I'd be very wary of division or more complex operations, however. If you're only going to use integers though, why not use an integer type to start with? – Jon Skeet Nov 25 '08 at 9:52 1 Heh - int64_t can represent all integers exactly in the range -2^63 to +2^63-1. If you use only addition, subtraction and multiplication, and keep everything to integers within this range then you should see no loss of precision. I'd be very wary of division, however. – Steve Jessop Nov 25 '08 at 11:33 Some antiquated systems which are (alas?) still in use support double, but do not support any 64-bit integer type. I would suggest that performing calculations as double, scaled that any semantically-required rounding will always be to whole units, is apt to be the most efficient approach. – supercat Jun 9 '12 at 23:45 add comment No a double will always have rounding errors, use "decimal" if you're on .Net... up vote 0 down vote Careful. Any floating-point representation will have rounding errors, decimal included. It's just that decimal will round in ways that are intuitive to humans (and generally 6 appropriate for money), and binary floating point won't. But for non-financial number-crunching, double is often much, much better than decimal, even in C#. – Daniel Pryden Aug 31 '09 at 8:06 add comment Not the answer you're looking for? Browse other questions tagged c# language-agnostic decimal money or ask your own question.
{"url":"http://stackoverflow.com/questions/316727/is-a-double-really-unsuitable-for-money/316788","timestamp":"2014-04-18T00:54:13Z","content_type":null,"content_length":"105222","record_id":"<urn:uuid:dbd28544-c4ae-4df8-aaba-4ac836b7e56f>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
Michael Jeffrey Ward • Position: Professor of Mathematics • Member: Institute for Applied Mathematics • Office Location: Math Annex 1217 • Math Office Telephone: (604) 822-5869 • Email Address: ward at math dot ubc dot ca • Postal Address: 1984 Mathematics Road, Dept. of Mathematics, Univ. of British Columbia, Vancouver, B.C., Canada, V6T 1Z2. • Biography and CV and a Publication List • Editorial Boards: • Research Interests: Applied Analysis, Singular Perturbations, Reaction-Diffusion Theory, Mathematical Modeling and Scientific Computation. • For a ScienceWatch.com interview based on the paper An Asymptotic Analysis of the Mean First Passage Time for Narrow Escape Problems: Part II: The Sphere with A. Cheviakov and R. Straube (SIAM Multiscale Modeling and Simulation, Vol. 8, No. 3, (2010), pp. 836-870), please click here • CAIMS Research Prize 2011, Click here Research: Research Areas and Highlights Papers: Preprints, Reprints, and Talks Current and Former Graduate Students (Last modified August 2013)
{"url":"http://www.math.ubc.ca/~ward/","timestamp":"2014-04-19T12:25:28Z","content_type":null,"content_length":"2797","record_id":"<urn:uuid:32f04b4b-ac50-4716-a982-f015511938a5>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
Problems In Highschool Chemistry This book will deal with all the topics covered in the high school syllabus, through numerical problems and other questions. Unlike a textbook which teaches students, we expect the content of this book to teach students to teach themselves. The articles, problems and solutions discussed here will focus on the thought process rather than the steps involved. The second goal of this problem book is to make students realize that Chemistry is indeed simpler than Physics, Mathematics and Biology at the highschool level. Presently, this book is randomly edited as per the questions put forward to the contributors. As we submit more and more content, we will start integrating the pieces into a picture. Volunteers are welcome to discuss the content policy, make contributions, and above all - giving touch ups to the already submitted content. Basic TopicsEdit Inorganic ChemistryEdit Physical ChemistryEdit First Law Of ThermodynamicsEdit Second Law Of ThermodynamicsEdit Chemical KineticsEdit Vapour PressureEdit Colligative Properties of SolutionsEdit Surface ChemistryEdit Solid StateEdit Chemical EquilibriumEdit Introduction to Ionic EquilibriumEdit Ionic Equilibrium - Level IIEdit Electrolytic ConductanceEdit Electrochemistry : Electrodes and ReactionsEdit Electrochemistry : Electrolytic Cells and ElectrolysisEdit Organic ChemistryEdit Last modified on 13 September 2011, at 01:07
{"url":"http://en.m.wikibooks.org/wiki/Problems_In_Highschool_Chemistry","timestamp":"2014-04-21T09:47:38Z","content_type":null,"content_length":"19176","record_id":"<urn:uuid:08ffd21a-2beb-4ceb-8bf1-489e7d0e94ff>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
calculating enclosure volume??? [Archive] - Car Audio Forum - CarAudio.com (L-2xthickness) (w-2x thickness) (D-2xthickness)...? then any other variables are take out... port bracing 45s sub... i always use the manual formulas its not that hard. basic fuckin math Show us the formula for finding the sealed volume to achieve a target Qtc. LOL, thanks boogeyman. I already know it well but I wanted to see if the OP could expand past HxWxL=in^3/1728 for finding volume. I was hoping he would come back at me with the simple Vb = Vas / (( Qtc / Qts ) ² - 1) but he chose not to respond. I certainly do appreciate you taking the time to type all that out and it comforts me to know that some people are using the math once in a while. :) no problem....Im old-school........thats how we did it before the internet.
{"url":"http://www.caraudio.com/forums/archive/index.php/t-447296.html","timestamp":"2014-04-21T16:45:59Z","content_type":null,"content_length":"9608","record_id":"<urn:uuid:5ea398b6-5a4d-494a-917c-1bd8276c6f39>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus is a central branch of mathematics, developed from algebra and geometry, and built on two major complementary ideas. One concept is differential calculus. It studies rates of change, which are usually illustrated by the slope of a line. Differential calculus is based on the problem of finding the instantaneous rate of change of one quantity relative to another. Examples of typical differential calculus problems are finding the following quantities: • The acceleration and speed of a free-falling body at a particular moment. • The loss in speed and trajectory of a fired projectile, such as an artillery shell or bullet. • Change in profitability over time of a growing business at a particular point. The other key concept is integral calculus. It studies the accumulation of quantities, such as areas under a curve, linear distance travel, or volume displaced. Integral calculus is the mirror image of differential calculus. Examples of integral calculus problems include finding the following quantities: • The amount of water pumped by a pump with a set power input but varying conditions of pumping losses and pressure. • The amount of money accumulated by a business under varying business conditions. • The amount of parking lot plowed by a snowplow of given power with varying rates of snowfall. The two concepts, differentiation and integration, define inverse operations in a sense made precise by the fundamental theorem of calculus. In teaching calculus, either concept may be given priority. The usual educational approach is to introduce differential calculus first. Main article: History of calculus Though the origins of integral calculus are generally regarded as going no farther back than to the ancient Greeks, there is evidence that the ancient Egyptians may have harbored such knowledge as well. (See Moscow Mathematical Papyrus.) Eudoxus is generally credited with the method of exhaustion, which made it possible to compute the area and volume of regions and solids. Archimedes developed this method further, while also inventing heuristic methods which resemble modern day concepts. An Indian Mathematician, Bhaskara (1114-1185), gave an example of what is now called "differential coefficient" and the basic idea of what is now known as "Rolle's theorem". Leibniz and Newton are usually designated the inventors of calculus, mainly for their separate discoveries of the fundamental theorem of calculus and work on notation. There has been considerable debate about whether Newton or Leibniz was first to come up with the important concepts of the calculus. The truth of the matter will likely never be known. Leibniz' greatest contribution to calculus was his notation; he often spent days trying to come up with the appropriate symbol to represent a mathematical idea. This controversy between Leibniz and Newton was unfortunate in that it divided English-speaking mathematicians from those in Europe for many years, setting back British analysis (i.e. calculus-based mathematics) for a very long time. Newton's terminology and notation was clearly less flexible than that of Leibniz, yet it was retained in British usage until the early 19th century, when the work of the Analytical Society successfully saw the introduction of Leibniz's notation in Great Britain. It is now thought that Newton had discovered several ideas related to calculus earlier than Leibniz had; however, Leibniz was the first to publish. Today, both Leibniz and Newton are considered to have discovered calculus independently. Lesser credit for the development of calculus is given to Barrow, Descartes, de Fermat, Huygens, and Wallis. A Japanese mathematician, Kowa Seki, lived at the same time as Leibniz and Newton and also elaborated some of the fundamental principles of integral calculus, though this was not known in the West at the time, and he had no contact with Western scholars. [1] (http://www2.gol.com/users/ Differential calculus Main article: Derivative The derivative measures the sensitivity of one variable to small changes in another variable. A hint is the formula: $Speed = \frac{Distance}{Time}$ for an object moving at constant speed. One's speed (a derivative) in a car describes the change in location relative to the change in time. The speed itself may be changing; the calculus deals with this more complex but natural and familiar situation. Differential calculus determines the instantaneous speed, at any given specific instant in time, not just average speed during an interval of time. The formula Speed = Distance/Time applied to a single instant is the meaningless quotient "zero divided by zero". This is avoided, however, because the quotient Distance/Time is not used for a single instant (as in a still photograph), but for intervals of time that are very short. The derivative answers the question: as the elapsed time approaches zero, what does the average speed computed by Distance/Time approach? In mathematical language, this is an example of "taking a More formally, differential calculus defines the instantaneous rate of change (the derivative) of a mathematical function's value, with respect to changes of the variable. The derivative is defined as a limit of a difference quotient. The derivative of a function gives information about small pieces of its graph. It is directly relevant to finding the maxima and minima of a function — because at those points the graph is flat (i.e. the slope of the graph is zero). Another application of differential calculus is Newton's method, an algorithm to find zeroes of a function by approximating the function by its tangent lines. Differential calculus has been applied to many questions that are not first formulated in the language of calculus. The derivative lies at the heart of the physical sciences. Newton's law of motion, Force = Mass × Acceleration, has meaning in calculus because acceleration is a derivative. Maxwell's theory of electromagnetism and Einstein's theory of gravity (general relativity) are also expressed in the language of differential calculus, as is the basic theory of electrical circuits and much of Integral calculus Main article: Integral The definite integral evaluates the cumulative effect of many small changes in a quantity. The simplest instance is the formula Distance = Speed x Time for calculating the distance a car moves during a period of time when it is traveling at constant speed. The distance moved is the cumulative effect of the small distances moved in each of the many seconds the car is on the road. The calculus is able to deal with the natural situation in which the car moves with changing speed. Integral calculus determines the exact distance traveled during an interval of time by creating a series of better and better approximations, called Riemann sums, that approach the exact distance. More formally, we say that the definite integral of a function on an interval is a limit of Riemann sum approximations. Applications of integral calculus arise whenever the problem is to compute a number that is in principle (approximately) equal to the sum of the solutions of many, many smaller problems. The classic geometric application is to area computations. In principle, the area of a region can be approximated by chopping it up into many very tiny squares and adding the areas of those squares. (If the region has a curved boundary, then omitting the squares overlapping the edge does not cause too great an error.) Surface areas and volumes can also be expressed as definite integrals. Many of the functions that are integrated are rates, such as a speed. An integral of a rate of change of a quantity on an interval of time tells how much that quantity changes during that time period. It makes sense that if one knows their speed at every instant in time for an hour (i.e. they have an equation that relates their speed and time), then they should be able to figure out how far they go during that hour. The definite integral of their speed presents a method for doing so. Many of the functions that are integrated represent densities. If, for example, the pollution density along a river (tons per mile) is known in relation to the position, then the integral of that density can determine how much pollution there is in the whole length of the river. Probability, the basis for statistics, provides one of the most important applications of integral calculus. The rigorous foundation of calculus is based on the notions of a function and of a limit; the latter has a theory ultimately depending on that of the real numbers as a continuum. Its tools include techniques associated with elementary algebra, and mathematical induction. The modern study of the foundations of calculus is known as real analysis. This includes full definitions and proofs of the theorems of calculus. It also provides generalisations such as measure theory and distribution theory. Fundamental theorem of calculus The fundamental theorem of calculus states that differentiation and integration are, in a certain sense, inverse operations. More precisely, antiderivatives can be calculated with definite integrals, and vice versa. This connection allows us to recover the total change in a function over some interval from its instantaneous rate of change, by integrating the latter. This realization, made by both Newton and Leibniz, was key to the massive proliferation of analytic results after their work became known. The fundamental theorem provides an algebraic method of computing many definite integrals --without performing limit processes--by finding formulas for antiderivatives. It is also a prototype solution of a differential equation. Differential equations relate an unknown function to its derivatives, and are ubiquitous in the sciences. 1^st Fundamental Theorem of Calculus: If a function f is continuous on the interval [a, b] and F is an antiderivative of f on the interval [a, b], then $\int_{a}^{b} f(x)\,dx = F(b) - F(a).$ 2^nd Fundamental Theorem of Calculus: If f is continuous on an open interval I containing a, then, for every x in the interval, $\frac{d}{dx}\int_a^x f(t)\, dt = f(x).$ The development and use of calculus has had wide reaching effects on nearly all areas of modern living. It underlies nearly all of the sciences, especially physics. Virtually all modern developments such as building techniques, aviation, and other technologies make fundamental use of calculus. Many algebraic formulas now used for ballistics, heating and cooling, and other practical sciences were worked out through the use of calculus. In a handbook, an algebraic formula based on calculus methods may be applied without knowing its origins. The success of calculus has been extended over time to differential equations, vector calculus, calculus of variations, complex analysis, and differential topology. See also Further reading • Tom M. Apostol. (1967) ISBN 0-471-00005-1 and ISBN 0-471-00007-8 Calculus, 2nd Ed. Wiley. • Robert A. Adams. (1999) ISBN 0-201-39607-6 Calculus: A complete course. • Michael Spivak. (Sept 1994) ISBN 0914098896 Calculus. Publish or Perish publishing. • Cliff Pickover. (2003) ISBN 0-471-26987-5 Calculus and Pizza: A Math Cookbook for the Hungry Mind. • Silvanus P. Thompson and Martin Gardner. (1998) ISBN 0312185480 Calculus Made Easy. • Albers, Donald J.; Richard D. Anderson and Don O. Loftsgaarden, ed. (1986) Undergraduate Programs in the Mathematics and Computer Sciences: The 1985-1986 Survey, Mathematical Association of America No. 7, • Mathematical Association of America. (1988) Calculus for a New Century; A Pump, Not a Filter, The Association, Stony Brook, NY. ED 300 252. • Keisler, H. Jerome. (1986) Elementary Calculus: An Approach Using Infinitesimals. The text is available here (http://www.math.wisc.edu/~keisler/calc.html) under a creative commons non commercial • Carl B. Boyer. (1949) The History of the Calculus and its Conceptual Development. External links
{"url":"http://www.edinformatics.com/inventions_inventors/calculus.htm","timestamp":"2014-04-21T09:45:29Z","content_type":null,"content_length":"40355","record_id":"<urn:uuid:943c7be1-1c06-41ed-8e1a-8c24066130ea>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Analog-to-Digital conversion Question 1: The circuit shown here is a four-bit analog-to-digital converter (ADC). Specifically, it is a converter, so named because of its high speed: Explain why we must use a encoder to encode the comparator outputs into a four-bit binary code, and not a regular encoder. What problem(s) would we have if we were to use a non-priority encoder in this ADC circuit? I won't directly answer this question, but instead pose a "thought experiment." Suppose the analog input voltage (V ) were slowly increased from 0 volts to the reference voltage (V ). What do the outputs of the comparators do, one at a time, as the analog input voltage increases? What input conditions does the encoder see? How would a primitive "diode network" type of encoder (which we know does encode based on priority) interpret the comparator outputs? Here, I show students a very practical application of a priority encoder, in which the necessity of priority encoding should be apparent after some analysis of the circuit. Question 2: Predict how the operation of this "flash" analog-to-digital converter (ADC) circuit will be affected as a result of the following faults. Consider each fault independently (i.e. one at a time, no multiple faults): • Resistor R[16] fails open: • Resistor R[1] fails open: • Comparator U[13] output fails low: • Solder bridge (short) across resistor R[14]: For each of these conditions, explain the resulting effects will occur. • Resistor R[16] fails open: Output code will be 15 (1111) all the time. • Resistor R[1] fails open: If V[in] < V[ref], output will be 0 (0000); if V[in] > V[ref], output will be 15 (1111). • Comparator U[13] output fails low: Output will assume the "13" state (1101) unless V[in] exceeds that analog value, then the ADC will register properly. • Solder bridge (short) across resistor R[14]: There will be no distinctive "13" state (1101), the analog values for all the other states adjusting slightly to fill the gap. The purpose of this question is to approach the domain of circuit troubleshooting from a perspective of knowing what the fault is, rather than only knowing what the symptoms are. Although this is not necessarily a realistic perspective, it helps students build the foundational knowledge necessary to diagnose a faulted circuit from empirical data. Questions such as this should be followed (eventually) by other questions asking students to identify likely faults based on measurements. Question 3: This "flash" ADC circuit has a problem. The output code jumps from with just the slightest amount of input voltage (V ). In fact, the only time it outputs is when the input terminal is slightly negative with reference to ground: Identify at least two possible component faults that could cause this problem, and explain your reasoning in how you made the identifications. One possible fault is that resistor R has failed open, but this is not the only possibility. Have your students explain their reasoning in class to you, so that you may observe their diagnostic thought processes. Question 4: Don't just sit there! Build something!! Learning to analyze digital circuits requires much study and practice. Typically, students practice by working through lots of sample problems and checking their answers against those provided by the textbook or the instructor. While this is good, there is a much better way. You will learn much more by actually building and analyzing real circuits , letting your test equipment provide the änswers" instead of a book or another person. For successful circuit-building exercises, follow these steps: 1. Draw the schematic diagram for the digital circuit to be analyzed. 2. Carefully build this circuit on a breadboard or other convenient medium. 3. Check the accuracy of the circuit's construction, following each wire to each connection point, and verifying these elements one-by-one on the diagram. 4. Analyze the circuit, determining all output logic states for given input conditions. 5. Carefully measure those logic states, to verify the accuracy of your analysis. 6. If there are any errors, carefully check your circuit's construction against the diagram, then carefully re-analyze the circuit and re-measure. Always be sure that the power supply voltage levels are within specification for the logic circuits you plan to use. If TTL, the power supply be a 5-volt regulated supply, adjusted to a value as close to 5.0 volts DC as possible. One way you can save time and reduce the possibility of error is to begin with a very simple circuit and incrementally add components to increase its complexity after each analysis, rather than building a whole new circuit for each practice problem. Another time-saving technique is to re-use the same components in a variety of different circuit configurations. This way, you won't have to measure any component's value more than once. Let the electrons themselves give you the answers to your own "practice problems"! It has been my experience that students require much practice with circuit analysis to become proficient. To this end, instructors usually provide their students with lots of practice problems to work through, and provide answers for students to check their work against. While this approach makes students proficient in circuit theory, it fails to fully educate them. Students don't just need mathematical practice. They also need real, hands-on practice building circuits and using test equipment. So, I suggest the following alternative approach: students should build their own "practice problems" with real components, and try to predict the various logic states. This way, the digital theory "comes alive," and students gain practical proficiency they wouldn't gain merely by solving Boolean equations or simplifying Karnaugh maps. Another reason for following this method of practice is to teach students scientific method: the process of testing a hypothesis (in this case, logic state predictions) by performing a real experiment. Students will also develop real troubleshooting skills as they occasionally make circuit construction errors. Spend a few moments of time with your class to review some of the "rules" for building circuits before they begin. Discuss these issues with your students in the same Socratic manner you would normally discuss the worksheet questions, rather than simply telling them what they should and should not do. I never cease to be amazed at how poorly students grasp instructions when presented in a typical lecture (instructor monologue) format! I highly recommend CMOS logic circuitry for at-home experiments, where students may not have access to a 5-volt regulated power supply. Modern CMOS circuitry is far more rugged with regard to static discharge than the first CMOS circuits, so fears of students harming these devices by not having a "proper" laboratory set up at home are largely unfounded. A note to those instructors who may complain about the "wasted" time required to have students build real circuits instead of just mathematically analyzing theoretical circuits: What is the purpose of students taking your course? If your students will be working with real circuits, then they should learn on real circuits whenever possible. If your goal is to educate theoretical physicists, then stick with abstract analysis, by all means! But most of us plan for our students to do something in the real world with the education we give them. The "wasted" time spent building real circuits will pay huge dividends when it comes time for them to apply their knowledge to practical problems. Furthermore, having students build their own practice problems teaches them how to perform primary research, thus empowering them to continue their electrical/electronics education autonomously. In most sciences, realistic experiments are much more difficult and expensive to set up than electrical circuits. Nuclear physics, biology, geology, and chemistry professors would just love to be able to have their students apply advanced mathematics to real experiments posing no safety hazard and costing less than a textbook. They can't, but you can. Exploit the convenience inherent to your science, and get those students of yours practicing their math on lots of real circuits! Question 5: may be thought of as a one-bit analog-to-digital converter: Explain why this description of a comparator is appropriate. What exactly is meant by the term änalog-to-digital converter," or ADC? All ADCs input one or more analog signals and output a discrete signal. This description of a comparator is not just theoretical. In many practical ADC circuits, a comparator is actually used as the primary analog-to-digital conversion device. This is particularly true for oversampling or Sigma-Delta converters, which may be built around a single (1-bit) comparator. Question 6: analog-to-digital converters are easy to understand, but are not practical for many applications. Identify some of the drawbacks of the "flash" circuit design. Flash converter circuits have too many components! Actually, the answer is a bit more detailed than this, but easy enough to find on your own that I'll leave the task of research to you. It is a shame that flash converter circuits suffer the disadvantage(s) that they do. They are so simple to understand and have such an inherent speed advantage over other circuit designs! Discuss with your students why the weaknesses of the flash design make the other ADC types necessary, and even preferable in most applications. Question 7: Explain the operating principle of this analog-to-digital converter circuit, usually referred to as a The binary counter will count up or down as necessary to "track" the analog input voltage, resulting in a binary output that continuously represents the input. Follow-up question: this form of ADC is not very effective at following fast-changing input signals. Explain why. Have your students express the answer to this question in their own words, not just copying the answer I provide. Aside from the flash converter, the tracking converter is one of the easiest ADC circuits to understand. Question 8: Explain the operating principle of this analog-to-digital converter circuit, usually referred to as a Note: the successive-approximation register (SAR) is a special type of binary counting circuit which begins counting with the most-significant bit (MSB), then the next-less-significant bit, in order all the way down to the LSB. At that point, it outputs a "high" signal at the "Complete" output terminal. The operation of this register may be likened to the manual process of converting a decimal number to binary by "trial and fit" with the MSB first, through all the successive bits down to the LSB. The successive approximation register counts up and down as necessary to "zero in" on the analog input voltage, resulting in a binary output that locks into the correct value once every n clock cycles, where n is the number of bits the DAC inputs. Follow-up question: this form of ADC is much more effective at following fast-changing input signals than the converter design. Explain why. Have your students express the answer to this question in their own words, not just copying the answer I provide. Aside from the flash converter, the tracking converter is one of the easiest ADC circuits to understand. Question 9: Explain the operating principle of a ADC circuit, in your own words. I won't give away all the details here, but the single-slope converter uses an integrator and a binary counter, the binary output determined by how long the counter is allowed to count. Tutorials abound on simple ADC strategies, so your students should have little problem locating an adequate explanation for the operation of a single-slope ADC. Question 10: Explain the operating principle of a ADC circuit, in your own words. I won't give away all the details here, but the dual-slope converter uses the same integrator and binary counter that the single-slope ADC does. However, the integrator is used a bit differently in the dual-slope design, the benefits being greater immunity to high-frequency noise on the input signal and greater accuracy due to relative insensitivity to integrator component values. Tutorials abound on simple ADC strategies, so your students should have little problem locating an adequate explanation for the operation of a dual-slope ADC. Question 11: analog-to-digital converter works on the principle of , whereby a low-resolution ADC repeatedly samples the input signal in a feedback loop. In many cases, the ADC used is nothing more than a comparator (a 1-bit ADC!), the output of this ADC subtracted from the input signal and integrated over time in an attempt to achieve a balance near 0 volts at the output of the integrator. The result is a pulse-density modulated (PDM) "bitstream" of 1-bit digital data which may be filtered and (converted to a binary word of multiple bits): Explain what this PDM bitstream would look like for the following input voltage conditions: • V[in] = 0 volts • V[in] = V[DD] • V[in] = V[ref] • V[in] = 0 volts ; bitstream = 00000000 . . . • V[in] = V[DD] ; bitstream = 11111111 . . . • V[in] = V[ref] ; bitstream = 01010101 . . . In order to answer this question, students must have a good grasp of how the summing integrator works. Discuss with them how the feedback loop's "goal" is to maintain the integrator output at the reference voltage (V[ref]), and how the 1-bit ADC can only make adjustments to the integrator's output by driving it upward or downward by the same analog quantity every clock pulse. Question 12: The pulse-density modulation (PDM) of a 1-bit oversampled Delta-Sigma modulator circuit may be "decimated" into a multi-bit binary number simply by counting the number of "1" states in a bitstream of fixed length. Take for example the following bitstreams. Sample the first seven bits of each stream, and convert the equivalent binary numbers based on the number of "high" bits in each seven-bit sample: • 001001001001001 • 101101101101101 • 010010001100010 • 010001100010001 • 111011101110111 Then, take the same five PDM bitstreams and "decimate" them over a sampling interval of 15 bits. Sampling interval = 7 bits • 001001001001001 ; Binary value = 010[2] • 101101101101101 ; Binary value = 101[2] or 100[2] • 010010001100010 ; Binary value = 010[2] or 011[2] • 010001100010001 ; Binary value = 011[2] or 001[2] • 111011101110111 ; Binary value = 110[2] or 101[2] Sampling interval = 15 bits • 001001001001001 ; Binary value = 0101[2] • 101101101101101 ; Binary value = 1010[2] • 010010001100010 ; Binary value = 0101[2] • 010001100010001 ; Binary value = 0101[2] • 111011101110111 ; Binary value = 1100[2] Follow-up question: what relationship do you see between sampling speed in this "decimation" process, and how does this relate to the performance of a Delta-Sigma ADC? With little effort, your students should be able to see that sampling twice as many bits in the PDM bitstream adds one more bit of resolution to the final binary output. Such is the nature of so many circuits: that optimization of one performance parameter comes at the expense of another. Students may question how two (or more!) different decimation results can occur from the same bitstream, especially as shown in the answer for the 7-bit groupings. The answer is two-part: first, the bitstreams I show are not all perfectly repetitive. Some change pattern (slightly) mid-way, which leads to different pulse densities in different sections. The second part to this answer is that the nature of decimation by grouping will inevitably lead to differing results (even when the pattern is perfectly repetitive), and that this is the converter's "way" of resolving an analog quantity lying between two discrete output states. In other words, a pair of decimated values of "4" and "5" (100[2] and 101[2], respectively) from a perfectly repetitive bitstream suggests an analog value lying somewhere between the discrete integer values of "4" and "5". Only by sampling groups of bits equal to the period of the PDM repetition (or integer multiples of that repetition) can the digital output precisely and constantly equal the analog input. Question 13: Suppose an analog-digital converter IC ("chip") inputs a voltage ranging from 0 to 5 volts DC and converts the magnitude of that voltage into an 8-bit binary number. How many discrete ßteps" are there in the output as the converter circuit resolves the input voltage from one end of its range (0 volts) to the other (5 volts)? How much voltage does each of these steps represent? This ADC (Analog-to-Digital Converter) circuit has 256 steps in its output range, each step representing 19.61 mV. This question is not so much about ADC circuitry as it is about digital resolution in general. Any digital system with a finite number of parallel bits has a finite range. When representing analog variables in digital form by the limited number of bits available, there will be a certain minimum voltage increment represented by each ßtep" in the digital output. Here, students get to see how the discrete nature of a binary number translates to real-life measurement "rounding." Question 14: One of the idiosyncrasies of analog-to-digital conversion is a phenomenon known as . It happens when an ADC attempts to digitize a waveform with too high of a frequency. Explain what aliasing is, how it happens, and what may be done to prevent it from happening to an ADC circuit. As the saying goes, a picture is worth a thousand words: The point of this question (and of the answer given) is to have students put this important concept into their own words. Something noteworthy for students and instructors alike is that aliasing may be visually experienced using digital oscilloscopes. Setting the timebase (seconds/division) control too slow may result in a false (aliased) waveform displayed in the oscilloscope. Not only does this make a good classroom demonstration, but it also is a great lesson to learn if one expects to use digital oscilloscopes on a regular basis! Question 15: Analog-to-digital converter circuits (ADC) are usually equipped with analog low-pass filters to pre-condition the signal prior to digitization. This prevents signals with frequencies greater than the sampling rate from being seen by the ADC, causing a detrimental effect called . These analog pre-filters are thus known as anti-aliasing filters Determine which of the following Sallen-Key active filters is of the correct type to be used as an anti-aliasing filter: The low-pass Sallen-Key filter, of course! What's the matter? You're not laughing at my answer. What I'm doing here is asking you to some research on Sallen-Key filters to confirm your qualitative analysis. And yes, I do expect you to be able to figure out which of the two filters is low-pass based on your knowledge of capacitors and op-amps, not just look up the answer in an op-amp reference book! Discuss with your students various ways of identifying active filter types. What clues are present in these two circuits to reveal their filtering characteristics? Question 16: Suppose a particular ADC has an input voltage range of +5 volts to -5 volts, and therefore is suitable for digitizing AC input signals. A technician wants to use this ADC to digitize AC line voltage (120 volts RMS), and builds the following conditioning circuit to safely connect the ADC to the AC line: Unfortunately, this ADC is not able to fully sample the AC waveform when tested. It överflows" and ünderflows" at the waveform's peaks, as though the input waveform is too large (outside of the +5/-5 volt ADC chip range). The technician re-checks his calculations, but still thinks the voltage division ratio provided by the potential transformer and resistor network should be sufficient for this What is wrong with this circuit? Why does it över-range" at the waveform peaks instead of sampling the 120 volt waveform with range to spare? Then, once having identified the problem, recommend a solution to fix the problem. The technician failed to consider the voltage of the AC line! Challenge question: one thing the technician did right in this circuit was use a transformer as the front-end of his signal conditioning network. Explain why this was a smart idea. In other words, why would it possibly be worse to simply use a resistive voltage divider to do the attenuation, instead of using a step-down transformer to do part of it and a resistive divider to do the rest? The given answer is purposefully minimal, but should contain enough information that anyone familiar with RMS versus peak sinusoidal values should realize what the problem is. There is more than one practical solution for fixing this problem, so be sure to allow time for discussion into the various options. Question 17: bargraph driver circuit takes an audio input signal and displays the amplitude as a moving "bar" of lights. The stronger the amplitude of the signal, the more LEDs energize in the bargraph display. Predict how the operation of this circuit will be affected as a result of the following faults. Consider each fault independently (i.e. one at a time, no multiple faults): • Resistor R[4] failed open: • Solder bridge (short) past resistor R[3]: • Resistor R[11] failed open: • Zener diode D[1] failed shorted: • Schottky diode D[2] failed shorted: • Resistor R[4] failed open: LEDs 1 through 3 always off, LEDs 4 through 6 always on (with any substantial input signal amplitude at all). • Solder bridge (short) past resistor R[3]: LEDs 2 and 3 always energize at the same time. • Resistor R[11] failed open: LED 4 never lights up. • Zener diode D[1] failed shorted: All LEDs light up together with any substantial input signal amplitude at all. • Schottky diode D[2] failed shorted: None of the LEDs ever light up. Follow-up question: does each comparator current to its respective LED? Challenge question: if resistors R through R are all equal value, the response of the bargraph will be linear (twice the signal amplitude results in twice as many LEDs energized). What would have to be changed in this circuit to give the bargraph a response, so it registered proportional to a decibel scale rather than a voltage scale? Questions like this help students hone their troubleshooting skills by forcing them to think through the consequences of each possibility. This is an essential step in troubleshooting, and it requires a firm understanding of circuit function. Question 18: Examine this vertical ("bird's eye") view of a boat resisting a river's current: Suppose the driver of this boat does not own an anchor, and furthermore that the only form of propulsion is an electric "trolling" motor with an on/off switch (no variable speed control). With the right combination of switch actuations (on, off, on, off), it should be possible for the boat to maintain its position relative to the riverbanks, against the flow of water. Now, if we know the boat is actually holding position in the middle of the river, by trolling motor power alone, the pattern of on/off switch actuations should tell us something about the speed of the river. Perform a couple of "thought experiments" where you imagine what the driver of the boat would have to do with the motor's on/off switch to maintain position against a fast current, versus against a slow current. What relationship do you see between the switch actuations and the speed of the current? Note: once you understand this question, you will be better prepared to grasp the operation of a analog-to-digital converter! duty cycle of the switch actuations is in direct proportion to the river's speed. The purpose of this question is to present an analogy which students may use to grasp the operation of a Delta-Sigma ADC: the idea that a bitstream (PDM) may represent an analog value. Related Links
{"url":"http://www.allaboutcircuits.com/worksheets/adc.html","timestamp":"2014-04-19T17:01:34Z","content_type":null,"content_length":"48690","record_id":"<urn:uuid:2379fcfe-1e13-467d-aac7-178f0d04ac33>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
Category:Math Level 3 From OpenContent Curriculum Go Back to the main Math page Five Easy Steps to a Balanced Math Program MA.03.01 Identifies, describes, orders, and models place value to one thousand. (GLE[3] N-2)(GLE[3] N-2) MA.03.02 Skip counts by numbers beyond 100 forward and backward (2 thru 10). (GLE [3] N-9, E&C-5) MA.03.03 Identifies Ordinal Numbers (numbers in sequence) & Cardinal Numbers (number names). (GLE [3] N-2) MA.03.04 Identifies, describes with explanations, and illustrates equivalent fractions with denominators 2,3,4, or 10 and finds the fraction of a set and equal parts of a whole. (GLE [3] N-4) MA.03.05 Demonstrates and models the commutative (8+3=3+8) and identity (1+0=1) properties of addition. (GLE [3] N-7, N-8) MA.03.06 Tells accurate time to the five minute in A.M. or P.M. using analog and digital clocks. (GLE[4] MEA-7) MA.03.07 Determine elapsed time using a calendar. (GLE [3] MEA-8) MA.03.08 (A2.2.1) Estimates length (nearest 1/2 inch and cm) and weight of objects and checks for reasonability in English system. (GLE [3] MEA-1, MEA-6) MA.03.09 Understands and applies <, > and = signs and vocabulary greater than, less than, or equal to. (GLE [3] MEA-3, F&R-5) MA.03.10 Determines possible combinations of coins and Count back change from $1.00. (GLE [3] MEA-9) MA.03.11 Understands basic multiplication and division facts including memorizing facts to 9x9. (GLE [3] E&C-6) MA.03.12 Multiplies and divides 2-digit numbers by 1-digit numbers. (GLE [3] E&C-4) MA.03.13 Estimates three digit addition (sums) and subtraction (differences), with place values, including rounding. (GLE [3] E&C-3) MA.03.14 (A4.1.3) Uses calculator as a tool to solve simple problems. (GLE [3] F&R-3) MA.03.15 Use an open number sentence (addition and subtraction) to solve for an unknown represented by a box or circle (e.g. 5+ = 16). (GLE [3] F&R-4) MA.03.16 (A4.1.2) Identifies and applies addition and subtraction patterns to solve simple problems (5, 10, 15, __, __). (GLE [3] F&R-1) MA.03.17 (A5.1.5) (A5.1.5) Describes, identifies, and draws spatial transformation (slides, flips, turns, congruency), and lines of symmetry with real world objects; understands horizontal and vertical vocabulary. (GLE [3] G-3, G-4) MA.03.18 (A5.1.3) Draws and identifies basic geometric lines, angles, shapes (circles, rectangles, squares, triangles), and solids (cubes, cylinders, cones, & spheres). (GLE [3] G-2) MA.03.19 (A5.1.4) Estimate and determine area and perimeter of squares and rectangles including conservation of area using drawings or manipulatives. (GLE [3] G-6) MA.03.20 (A6.1.3) Collects, organizes and describes data using terms maximum and minimum. (GLE [3] S&P-3) MA.03.21 Reads and describes information from a variety of visual displays (tallies, tables, line graphs, bar graphs). (GLE [3] S&P-2) MA.03.22 Understand probability by making predictions about the likelihood of outcomes of a simple experiment (e.g. spinner, coin toss, dice roll). (GLE[3] S&P-5) MA.03.23 (C1.2.3, D1.1.3) Communicates strategies and solutions by writing explanations (guess and check, draw a picture, make a model, extend a pattern). (GLE [3] PS-1) MA.03.24 (E1.1.2) Applies mathematical skills and processes to other disciplines and everyday life. (GLE [3] PS-5) This category has the following 3 subcategories, out of 3 total. Pages in category "Math Level 3" The following 3 pages are in this category, out of 3 total.
{"url":"http://wiki.bssd.org/index.php/Category:Math_level_3","timestamp":"2014-04-17T07:12:38Z","content_type":null,"content_length":"23943","record_id":"<urn:uuid:f07d48e2-c33e-47df-a30a-624bc3aea567>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: integral problem. i have seen one solution in full. Maybe i can get another solution here. • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/51150771e4b09e16c5c7bfd4","timestamp":"2014-04-25T08:13:27Z","content_type":null,"content_length":"54625","record_id":"<urn:uuid:c4824dbb-6695-4ffd-8aa4-e398c54c4dc3>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Accident Models for Two-Lane Rural Roads: Segment and Intersections 6. Validation and Further Analysis Cumulative Scaled Residuals Figures 8 through 15 below show cumulative scaled residual plots for the extended negative binomial model (combined segments, Table 27) and for negative binomial models (Minnesota three-legged and four-legged intersections, Table 35). The cumulative scaled residuals are plotted against leading explanatory variables. For an explanatory variable x, a plot is made of j versus where j runs through the values of x. Each term, a scaled residual, should be approximately unbiased. However, if the sum depends in some regular way on j, then the model may have missed some systematic effects (e.g., quadratic dependency). If there is no systematic effect and the terms are otherwise independent, the expected value of the sum is approximately zero, and its standard deviation is approximately the square root of the number of observations for which x For the segments (Figures 8, 9, 10, and 11) the overall sum of the scaled residuals is about -8, for the three-legged intersections (Figures 12 and 13) the sum is about -2, and for the four-legged intersections (Figures 14 and 15) the sum is about +1. Thus the segment graphs and the three-legged graphs should end below the horizontal axis, while the four-legged graphs should end above. Table 48 summarizes the residual behavior. The segment model overpredicts (predicted mean number of accidents higher than actual number) at the low end of exposure. The cumulative scaled residual varies from -32 to +12. Overprediction occurs on segments without horizontal curves. The cumulative scaled residual varies from -36 to +7. The segment model underpredicts on segments without crest curves. The cumulative scaled residual varies from -13 to +30 . The cumulative scaled residual varies from -24 to + 22. The cumulative scaled residual varies from -9 to +11. The cumulative scaled residual varies from -16 to +7. The cumulative scaled residual varies from -4 to +12. Despite the indications of overprediction or underprediction in some regimes in the segment model, which might lead one to develop separate models in different regimes (e.g., one model for low exposure, one for medium exposure, and one for high), the graphs are generally consistent with random walks. In particular the ranges shown in Table 48 above are reasonable. In a random walk, as mentioned, the n-th step or observation on average will take one a distance of less than ±(n)^1/2 units from the origin. In addition it is not at all uncommon to stay on one side of zero (above or below) for many steps in succession. Negative binomial models never predict zero values for the dependent variable (in our case numbers of accidents). Thus at low values of highway variables (presumed to be associated with fewer accidents), when the true number of accidents is zero, the negative binomial predicts a positive number and hence must overpredict at least somewhat. Previous Table of Contents Next
{"url":"https://www.fhwa.dot.gov/publications/research/safety/98133/ch06_03.cfm","timestamp":"2014-04-16T05:38:25Z","content_type":null,"content_length":"12015","record_id":"<urn:uuid:f7d071a4-c47a-4130-b913-39df6e76cc5c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00134-ip-10-147-4-33.ec2.internal.warc.gz"}