5 search hits
Coulomb effects on electromagnetic pair production in ultrarelativistic heavy-ion collisions (1999)
U. Eichmann, Joachim Reinhardt, Stefan Schramm, Walter Greiner
We calculate the asymptotic high-energy amplitude for electrons scattering at one ion, as well as at two colliding ions, by means of perturbation theory. We show that the interaction with one ion
eikonalizes and that the interaction with two ions causally decouples. We are able to put previous results on perturbative grounds and propose further applications for the obtained rules for
interactions on the light cone. We discuss the implications of the eikonal amplitude on the pair production probability in ultrarelativistic peripheral heavy-ion collisions. In this context the
Weizsäcker-Williams method is shown to be exact in the ultrarelativistic limit, irrespective of the produced particles’ mass. A new equivalent single-photon distribution is derived, which
correctly accounts for Coulomb distortions. The impact on single-photon induced processes is discussed.
Hybrid approaches to heavy ion collisions and future perspectives (2011)
Marlene Nahrgang, Christoph Herold, Stefan Schramm, Marcus Bleicher
We present the current status of hybrid approaches to describe heavy ion collisions and their future challenges and perspectives. First we present a hybrid model combining a Boltzmann transport
model of hadronic degrees of freedom in the initial and final state with an optional hydrodynamic evolution during the dense and hot phase. Second, we present a recent extension of the
hydrodynamical model to include fluctuations near the phase transition by coupling a chiral field to the hydrodynamic evolution.
Nanolesions induced by heavy ions in human tissues: experimental and theoretical studies (2012)
Marcus Bleicher, Lucas Burigo, Marco Durante, Maren Herrlitz, Michael Krämer, Igor Mishustin, Iris Müller, Francesco Natale, Igor Pshenichnov, Stefan Schramm, Gisela Taucher-Scholz, Cathrin Wälzlein
The biological effects of energetic heavy ions are attracting increasing interest for their applications in cancer therapy and protection against space radiation. The cascade of events leading to
cell death or late effects starts from stochastic energy deposition on the nanometer scale and the corresponding lesions in biological molecules, primarily DNA. We have developed experimental
techniques to visualize DNA nanolesions induced by heavy ions. Nanolesions appear in cells as “streaks” which can be visualized by using different DNA repair markers. We have studied the kinetics
of repair of these “streaks” also with respect to the chromatin conformation. Initial steps in the modeling of the energy deposition patterns at the micrometer and nanometer scale were made with
MCHIT and TRAX models, respectively.
Particle ratios from AGS to RHIC in an interacting hadronic model (2004)
Detlef Zschiesche, Gebhard Zeeb, Kerstin Paech, Horst Stöcker, Stefan Schramm
Abstract: The measured particle ratios in central heavy-ion collisions at RHIC-BNL are investigated within a chemical and thermal equilibrium chiral SU(3) σ-ω approach. The commonly adopted
non-interacting gas calculations yield temperatures close to or above the critical temperature for the chiral phase transition, but without taking into account any interactions. In contrast, the
chiral SU(3) model predicts temperature and density dependent effective hadron masses and effective chemical potentials in the medium and a transition to a chirally restored phase at high
temperatures or chemical potentials. Three different parametrizations of the model, which show different types of phase transition behaviour, are investigated. We show that if a chiral phase
transition occurred in those collisions, freezing of the relative hadron abundances in the symmetric phase is excluded by the data. Therefore, either very rapid chemical equilibration must occur
in the broken phase, or the measured hadron ratios are the outcome of the dynamical symmetry breaking. Furthermore, the extracted chemical freeze-out parameters differ considerably from those
obtained in simple non-interacting gas calculations. In particular, the three models yield up to 35 MeV lower temperatures than the free gas approximation. The in-medium masses turn out to differ
up to 150 MeV from their vacuum values.
Structure of the vacuum in nuclear matter: a nonperturbative approach (1997)
Amruta Mishra, P. K. Panda, Stefan Schramm, Joachim Reinhardt, Walter Greiner
We compute the vacuum polarization correction to the binding energy of nuclear matter in the Walecka model using a nonperturbative approach. We first study such a contribution as arising from a
ground-state structure with baryon-antibaryon condensates. This yields the same results as obtained through the relativistic Hartree approximation of summing tadpole diagrams for the baryon
propagator. Such a vacuum is then generalized to include quantum effects from meson fields through scalar-meson condensates which amounts to summing over a class of multiloop diagrams. The method
is applied to study properties of nuclear matter and leads to a softer equation of state giving a lower value of the incompressibility than would be reached without quantum effects. The
density-dependent effective sigma mass is also calculated including such vacuum polarization effects.
|
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Stefan+Schramm%22/start/0/rows/10/doctypefq/article/sortfield/title/sortorder/asc","timestamp":"2014-04-20T16:29:49Z","content_type":null,"content_length":"39155","record_id":"<urn:uuid:6297c587-b2b6-4c78-ba3e-2f4a40d6a6ef>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Information & Coding Theory
Welcome to E-Books Directory
This page lists freely downloadable books.
Data Compression Explained
by Matt Mahoney - mattmahoney.net , 2013
This book is for the reader who wants to understand how data compression works, or who wants to write data compression software. Prior programming ability and some math skills will be needed. This
book is intended to be self-contained.
(1162 views)
A primer on information theory, with applications to neuroscience
by Felix Effenberger - arXiv , 2013
This chapter is supposed to give a short introduction to the fundamentals of information theory, especially suited for people having a less firm background in mathematics and probability theory. The
focus will be on neuroscientific topics.
(1049 views)
Data Compression
- Wikibooks , 2011
Data compression is useful in some situations because 'compressed data' will save time (in reading and on transmission) and space if compared to the unencoded information it represents. In this book,
we describe the decompressor first.
(1236 views)
From Classical to Quantum Shannon Theory
by Mark M. Wilde - arXiv , 2012
The aim of this book is to develop 'from the ground up' many of the major developments in quantum Shannon theory. We study quantum mechanics for quantum information theory, we give important unit
protocols of teleportation, super-dense coding, etc.
(1435 views)
Logic and Information
by Keith Devlin - ESSLLI , 2001
An introductory, comparative account of three mathematical approaches to information: the classical quantitative theory of Claude Shannon, a qualitative theory developed by Fred Dretske, and a
qualitative theory introduced by Barwise and Perry.
(2280 views)
Conditional Rate Distortion Theory
by Robert M. Gray - Information Systems Laboratory , 1972
The conditional rate-distortion function has proved useful in source coding problems involving the possession of side information. This book represents an early work on conditional rate distortion
functions and related theory.
(1747 views)
Algorithmic Information Theory
by Peter D. Gruenwald, Paul M.B. Vitanyi - CWI , 2007
We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain this quantitative approach to defining information and discuss the extent to which
Kolmogorov's and Shannon's theory have a common purpose.
(2284 views)
The Limits of Mathematics
by Gregory J. Chaitin - Springer , 2003
The final version of a course on algorithmic information theory and the epistemology of mathematics. The book discusses the nature of mathematics in the light of information theory, and sustains the
thesis that mathematics is quasi-empirical.
(3129 views)
Quantum Information Theory
by Renato Renner - ETH Zurich , 2009
Processing of information is necessarily a physical process. It is not surprising that physics and the theory of information are inherently connected. Quantum information theory is a research area
whose goal is to explore this connection.
(3262 views)
Theory of Quantum Information
by John Watrous - University of Calgary , 2004
The focus is on the mathematical theory of quantum information. We will begin with basic principles and methods for reasoning about quantum information, and then move on to a discussion of various
results concerning quantum information.
(2900 views)
Generalized Information Measures and Their Applications
by Inder Jeet Taneja - Universidade Federal de Santa Catarina , 2001
Contents: Shannon's Entropy; Information and Divergence Measures; Entropy-Type Measures; Generalized Information and Divergence Measures; M-Dimensional Divergence Measures and Their Generalizations;
Unified (r,s)-Multivariate Entropies; etc.
(2417 views)
Information Theory, Excess Entropy and Statistical Complexity
by David Feldman - College of the Atlantic , 2002
This e-book is a brief tutorial on information theory, excess entropy and statistical complexity. From the table of contents: Background in Information Theory; Entropy Density and Excess Entropy;
Computational Mechanics.
(2598 views)
Information Theory and Statistical Physics
by Neri Merhav - arXiv , 2010
Lecture notes for a graduate course focusing on the relations between Information Theory and Statistical Physics. The course is aimed at EE graduate students in the area of Communications and
Information Theory, or graduate students in Physics.
(4072 views)
Quantum Information Theory
by Robert H. Schumann - arXiv , 2000
A short review of ideas in quantum information theory. Quantum mechanics is presented together with some useful tools for quantum mechanics of open systems. The treatment is pedagogical and suitable
for beginning graduates in the field.
(6157 views)
Lecture Notes on Network Information Theory
by Abbas El Gamal, Young-Han Kim - arXiv , 2010
Network information theory deals with the fundamental limits on information flow in networks and optimal coding and protocols. These notes provide a broad coverage of key results, techniques, and
open problems in network information theory.
(5021 views)
Information-Theoretic Incompleteness
by Gregory J. Chaitin - World Scientific , 1992
In this mathematical autobiography, Gregory Chaitin presents a technical survey of his work and a non-technical discussion of its significance. The technical survey contains many new results,
including a detailed discussion of LISP program size.
(2666 views)
A Short Course in Information Theory
by David J. C. MacKay - University of Cambridge , 1995
This text discusses the theorems of Claude Shannon, starting from the source coding theorem, and culminating in the noisy channel coding theorem. Along the way we will study simple examples of codes
for data compression and error correction.
(4542 views)
Exploring Randomness
by Gregory J. Chaitin - Springer , 2001
This book presents the core of Chaitin's theory of program-size complexity, also known as algorithmic information theory. LISP is used to present the key algorithms and to enable computer users to
interact with the author's proofs.
(7419 views)
Entropy and Information Theory
by Robert M. Gray - Springer , 2008
The book covers the theory of probabilistic information measures and application to coding theorems for information sources and noisy channels. This is an up-to-date treatment of traditional
information theory emphasizing ergodic theory.
(5215 views)
Information Theory and Coding
by John Daugman - University of Cambridge , 2009
The aims of this course are to introduce the principles and applications of information theory. The course will study how information is measured in terms of probability and entropy, and the
relationships among conditional and joint entropies; etc.
(6402 views)
Network Coding Theory
by Raymond Yeung, S-Y Li, N Cai - Now Publishers Inc , 2006
A tutorial on the basics of the theory of network coding. It presents network coding for the transmission from a single source node, and deals with the problem under the more general circumstances
when there are multiple source nodes.
(4862 views)
Algorithmic Information Theory
by Gregory. J. Chaitin - Cambridge University Press , 2003
The book presents the strongest possible version of Gödel's incompleteness theorem, using an information-theoretic approach based on the size of computer programs. The author tried to present the
material in the most direct fashion possible.
(4404 views)
A Mathematical Theory of Communication
by Claude Shannon , 1948
Shannon presents results previously found nowhere else, and today many professors refer to it as the best exposition on the subject of the mathematical limits on communication. It laid the modern
foundations for what is now coined Information Theory.
(21657 views)
Information Theory, Inference, and Learning Algorithms
by David J. C. MacKay - Cambridge University Press , 2003
A textbook on information theory, Bayesian inference and learning algorithms, useful for undergraduate and postgraduate students, and as a reference for researchers. Essential reading for students
of electrical engineering and computer science.
(10043 views)
|
{"url":"http://www.e-booksdirectory.com/listing.php?category=99","timestamp":"2014-04-18T16:36:27Z","content_type":null,"content_length":"22191","record_id":"<urn:uuid:16bd321f-765e-4d47-bbdf-f8c0fdadd8f9>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help: bases and scalars
November 2nd 2008, 03:58 PM #1
Junior Member
Oct 2008
Help: bases and scalars
I am having a lot of trouble wrapping my head around these vector spaces...
How can I show that if {v1, v2, ... , vn} is a basis for a vector space V and c ≠ 0, then {cv1, v2, ... , vn} is also a basis for V?
Any help would be appreciated, thanks!
November 3rd 2008, 01:41 PM #2
Senior Member
Nov 2008
If you know about matrices, you can easily construct an invertible matrix that transforms your basis into your second set of vectors, and so conclude that it's also a basis.
But you can also show that each v_i, i=1,...,n, is a linear combination of {cv_1,...,v_n}, and then you've won: since {v_1,...,v_n} is a basis, your space has dimension n, so a subset with n vectors that spans your space is a basis.
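For the second approach the needed computation is a one-liner: $v_1 = \frac{1}{c}(c v_1)$ (this is exactly where $c \neq 0$ is used), and trivially $v_i = v_i$ for $i \geq 2$, so every $v_i$ lies in the span of $\{cv_1, v_2, \ldots, v_n\}$.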
|
{"url":"http://mathhelpforum.com/advanced-algebra/57162-help-bases-scalars.html","timestamp":"2014-04-19T22:07:40Z","content_type":null,"content_length":"31377","record_id":"<urn:uuid:dac29e03-e4da-4332-91a9-c931c50472e6>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
|
search results
Results 1 - 4 of 4
1. CJM 2007 (vol 59 pp. 614)
Preduals and Nuclear Operators Associated with Bounded, $p$-Convex, $p$-Concave and Positive $p$-Summing Operators
We use Krivine's form of the Grothendieck inequality to renorm the space of bounded linear maps acting between Banach lattices. We construct preduals and describe the nuclear operators associated
with these preduals for this renormed space of bounded operators as well as for the spaces of $p$-convex, $p$-concave and positive $p$-summing operators acting between Banach lattices and Banach
spaces. The nuclear operators obtained are described in terms of factorizations through classical Banach spaces via positive operators.
Keywords:$p$-convex operator, $p$-concave operator, $p$-summing operator, Banach space, Banach lattice, nuclear operator, sequence space
Categories:46B28, 47B10, 46B42, 46B45
2. CJM 2004 (vol 56 pp. 277)
Spectral Properties of the Commutator of Bergman's Projection and the Operator of Multiplication by an Analytic Function
It is shown that the singular values of the operator $aP-Pa$, where $P$ is Bergman's projection over a bounded domain $\Omega$ and $a$ is a function analytic on $\bar{\Omega}$, detect the length of
the boundary of $a(\Omega)$. Also we point out the relation of that operator and the spectral asymptotics of a Hankel operator with an anti-analytic symbol.
3. CJM 2000 (vol 52 pp. 197)
Sublinearity and Other Spectral Conditions on a Semigroup
Subadditivity, sublinearity, submultiplicativity, and other conditions are considered for spectra of pairs of operators on a Hilbert space. Sublinearity, for example, is a weakening of the
well-known property~$L$ and means $\sigma(A+\lambda B) \subseteq \sigma(A) + \lambda \sigma(B)$ for all scalars $\lambda$. The effect of these conditions is examined on commutativity, reducibility,
and triangularizability of multiplicative semigroups of operators. A sample result is that sublinearity of spectra implies simultaneous triangularizability for a semigroup of compact operators.
Categories:47A15, 47D03, 15A30, 20A20, 47A10, 47B10
4. CJM 1998 (vol 50 pp. 658)
Hankel operators on pseudoconvex domains of finite type in ${\Bbb C}^2$
The aim of this paper is to study small Hankel operators $h$ on the Hardy space or on weighted Bergman spaces, where $\Omega$ is a finite type domain in ${\Bbb C}^2$ or a strictly pseudoconvex
domain in ${\Bbb C}^n$. We give a sufficient condition on the symbol $f$ so that $h$ belongs to the Schatten class ${\cal S}_p$, $1\le p<+\infty$.
Categories:32A37, 47B35, 47B10, 46E22
|
{"url":"http://cms.math.ca/cjm/msc/47B10?fromjnl=cjm&jnl=CJM","timestamp":"2014-04-19T17:21:07Z","content_type":null,"content_length":"30960","record_id":"<urn:uuid:4b395560-4458-4136-a227-e45878aa7a3d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Teaching mathematics: programming or CAS?
I’ve got a few books which aim to teach (discrete) mathematics through the use of a programming language: one book uses Haskell; the other Python. Now I like both these languages – I find them
elegant, fun to program in (at my elementary level, anyway), and as languages for teaching programming I think they both have a great deal to offer.
But are they the right environment for teaching mathematics? I actually think not. The trouble is that unless you start adding lots of specialist libraries to the system you are reduced to mainly
dealing with numbers, lists, and arrays. Now there’s certainly an awful lot you can do here, but is it enough? The trouble is that if you want to include some symbolic manipulation, then you need to
add another hefty library. If you’re working in Python, there’s sympy, which is a library for symbolic mathematics which “aims to become a full-featured computer algebra system”. For Haskell there’s
DoCon, an “Algebraic Domain Constructor” which claims to be “a program for symbolic computation in mathematics”.
Now, I haven’t tested either of these, and I don’t intend to. But I’ll bet that neither of them are as full featured or as easy to use as the freeware computer algebra systems Maxima or Axiom. The
only advantage of using sympy/DoCon is that further work in them can be done in that language (Python for sympy, Haskell for DoCon).
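To give the flavour, here’s a tiny sketch of what sympy makes trivial (nothing here goes beyond sympy’s basic documented interface):

    from sympy import symbols, expand, diff, solve

    x = symbols('x')
    expr = (x + 1)**3
    print(expand(expr))        # x**3 + 3*x**2 + 3*x + 1
    print(diff(expr, x))       # 3*(x + 1)**2
    print(solve(expr - 8, x))  # the three solutions of (x + 1)**3 = 8

Doing the same in bare Python or Haskell means building the expression types and rewrite rules yourself, which is precisely the hefty library work I’m complaining about.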
For my part, unless the students are so entrenched in using Python/Haskell that everything must be done in that language, it seems to me that from a purely pedagogical perspective you’re much better
off using a CAS from the word go. Students’ mathematical knowledge, expertise and confidence can grow with your CAS of choice. And the range of mathematics of which a good CAS is capable is huge.
One response to “Teaching mathematics: programming or CAS?”
1. Could you please specify those books you’re talking about?
|
{"url":"http://amca01.wordpress.com/2008/04/10/teaching-mathematics-programming-or-cas/","timestamp":"2014-04-20T09:10:05Z","content_type":null,"content_length":"55053","record_id":"<urn:uuid:51f09166-920c-43bf-b8e9-11032981c9e6>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chris Pollett > Publications, Reviews, Master's Student Papers and Talks
For copyright reasons versions below may be close to but not the final journal version.
[25-PDF] [25-PS] Conservative Fragments of S^1_2 and R^1_2. Archive for Mathematical Logic. Vol. 50. Iss. 3. 2011. pp. 367 -- 393.
[24-PDF] [24-PS] Alternating Hierarchies for Time-Space Tradeoffs. (with Eric Miles). arXiv:0801.1307v1. 2008.
[23-PDF] [23-PS] The Weak Pigeonhole Principle for Function Classes in S^1_2. (with Norman Danner). Mathematical Logic Quarterly. Volume 52, Issue 6 (December 2006). pp. 575--584.
[22-PDF] [22-PS] Circuit Principles and Weak Pigeonhole Variants. (with Norman Danner). This paper improves the result of the conference paper below. Theoretical Computer Science. Volume 383. 2007.
pp. 115-131.
[21-PDF] [21-PS] Circuit Principles and Weak Pigeonhole Variants. (with Norman Danner). Conferences in Research and Practice in Information Technology -- Computing: The Australasian Theory Symposium
(CATS 2005). Eds. M. Atkinson and F. Dehne. Volume 41. Australian Computer Science Communications. Volume 27. Number 4. pp. 31--40.
[20-PDF] [20-PS] On the Computational Power of Probabilistic and Quantum Branching Programs. (with Farid Ablayev, Aida Gainutdinova, Marek Karpinski, and Cristopher Moore). Information and
Computation. Dec. 2005. Vol. 203. Iss 2. pp. 145--162.
[19-PDF] [19-PS] Languages to diagonalize against advice classes. Computational Complexity. Vol. 14 Iss 4. pp. 342--361. 2005. An earlier version with many typos appeared as ECCC TR04-014.
[18-PDF] [18-PS] S_{k,exp} does not prove NP=coNP uniformly. Zapiski Nauchnykh Seminarov. Dom 1. Vol. 304. 2003. pp. 99--120. (Without the uniformity condition the result only implies a restricted
form of IOpen + exp cannot prove NP = coNP.) Translated from English to English but with some typos eliminated in Journal of Mathematical Sciences. Vol. 130. No. 2. Oct. 2005. pp. 4607--4619.
[17-PDF] [17-PS] A Theory for Logspace and NLIN versus co-NLIN. Journal of Symbolic Logic. Vol. 68 No. 4. 2003. pp. 1082-1090.
[16-PDF] [16-PS] Nepomnjascij's Theorem and Independence Proofs in Bounded Arithmetic. ECCC TR02-051.
[15-PDF] [15-PS] Quantum and Stochastic Branching Programs of Bounded Width. (with Farid Ablayev and Cris Moore) 29th International Colloquium on Automata, Languages, and Programming (ICALP). 2002.
p.343--354. Earlier versions can be found at ECCC TR02-013 and LANL quant-ph/020113. The upper bound result in the ICALP paper needed a syntactic restriction. This has been revised in the ECCC and
LANL version and in the version on-line here.
[14-PDF] [14-PS] On the Bounded Version of Hilbert's Tenth Problem. Archive for Mathematical Logic. Vol. 42. No. 5. 2003. pp. 469--488.
[13-PDF] [13-PS] Counting, Fanout, and the Complexity of Quantum ACC. (with Fred Green, Steve Homer, and Cris Moore) Quantum Information and Computation. Vol. 2. No. 1. 2002. pp.35--65.
[12-PDF] [12-PS] Minimization and NP multifunctions. (with N. Danner) Theoretical Computer Science. Vol. 318. Iss. 1-2. 2004. pp. 105--119.
[11-PDF] [11-PS] Ordinal Notations and Well-Orderings in Bounded Arithmetic. (with A. Beckmann and S.R. Buss) Annals of Pure and Applied Logic.120 (Issue 1-3) Apr. 2003. pp. 197-223.
[10-PDF] [10-PS] Strengths and Weaknesses of LH Arithmetic. (with R. Pruim) Mathematical Logic Quarterly. 48 (No.2) Feb 2002. pp.221--243.
[9-PDF] [9-PS] On the Complexity of Quantum ACC. (with Fred Green and Steve Homer) Proceedings of the Fifteenth Annual IEEE Conference on Computational Complexity. July 4-7, 2000. pp. 250--262. Is also
Boston University Technical Report BUCS-TR-2000-003 and LANL quant-ph/0002057.
[8-PDF] [8-PS] Multifunction Algebras and the Provability of PH↓. Annals of Pure and Applied Logic. Vol. 104 July 2000. pp. 279--303.
[7-PDF] [7-PS] Translating IΔ_0 + exp proofs into weaker systems. Mathematical Logic Quarterly. 46:249-256 (No.2) May 2000.
[6-PDF] [6-PS] On the Δ^b_1-bit comprehension rule. (with Jan Johannsen) Proceedings of Logic Colloquium 1998. edited by S.R. Buss, P. Hajek, P. Pudlak pp.262--270, A.K.Peters and ASL, 2000.
[5-PDF] [5-PS] Structure and Definability in General Bounded Arithmetic Theories. Annals of Pure and Applied Logic. Vol 100. October 1999. pp.189--245.
[4-PDF] [4-PS] On Proofs about Threshold Circuits and Counting Hierarchies (with Jan Johannsen). Proceedings of Thirteenth IEEE Symposium on Logic in Computer Science. pp.444--452. IEEE press. 1998.
[3-PDF] [3-PS] Nonmonotonic reasoning with quantified boolean constraints. (with J. Remmel) Logic Programming and Nonmonotonic Reasoning. 1997. Jurgen Dix, Ulrich Furbach and Anil Nerode (Eds).
pp.18--39. LNAI, 1265. Springer. 1997.
[2-PDF] [2-PS] A propositional proof system for R^i_2. Proof Complexity and Feasible Arithmetics. Paul W. Beame and Samuel R. Buss eds. pp.253--278. 1997. DIMACS: Series in Discrete Mathematics and
Theoretical Computer Science. Vol 39. AMS. 1997.
[1-PDF] Arithmetic Theories with Prenex Normal Form Induction. Ph.D. Thesis. UC San Diego. 1997.
[Rev 13-PDF] [Rev 13-PS] Review of S. Cook and N. Thapen paper: ``The Strength of Replacement in Weak Arithmetic'' Mathematical Reviews. MR#2264422.
[Rev 12-PDF] [Rev 12-PS] Review of G. Ferreira and I. Oitavem paper: ``An Interpretation of S^1_2 in Σ^b_2-NIA.'' Mathematical Reviews. MR#2287276.
[Rev 11-PDF] [Rev 11-PS] Review of P. Naumov paper: ``Upper bounds on complexity of Frege proofs with limited use of certain schemata.'' Mathematical Reviews. MR#2227496.
[Rev 11-PDF] [Rev 11-PS] Review of M. R. Fellows, S. Szeider and G. Wrightson paper: ``On finding short resolution refutations and small unsatisfiable subsets.'' Mathematical Reviews. MR#2202495.
[Rev 10-PDF] [Rev 10-PS] Review of M. Jarvisalo, T. A. Junttila and I. Niemela paper: ``Unrestricted vs restricted cut in a tableau method for Boolean circuits.'' Mathematical Reviews. MR#2185702.
[Rev 9-PDF] [Rev 9-PS] Review of J. Krajicek paper: ``Structured pigeonhole principle, search problems, and hard tautologies.'' Mathematical Reviews. MR#2140049.
[Rev 8-PDF] [Rev 8-PS] Review of M. Alekhnovich, E. Ben-Sasson, A. Razborov, and A Wigderson paper: ``Pseudorandom number generators in propositional proof complexity.'' Mathematical Reviews. MR#
[Rev 7-PDF] [Rev 7-PS] Review of M. Moniri paper: ``Polynomial induction and length minimization in intuitionistic bounded arithmetic.'' Mathematical Reviews. MR#2099386.
[Rev 6-PDF] [Rev 6-PS] Review of J. Krajicek paper: ``Dual weak pigeonhole principle, pseudo-surjective functions, and provability of circuit lower bounds .'' Mathematical Reviews. MR#2039361.
[Rev 5-PDF] [Rev 5-PS] Review of A. Beckmann paper: ``Dynamic ordinal analysis.'' Mathematical Reviews. MR#2018084.
[Rev 4-PDF] [Rev 4-PS] Review of Buresh-Oppenheim, Clegg, Impagliazzo, Pitassi paper: ``Homogenization and the polynomial calculus.'' Mathematical Reviews. MR#2022043.
[Rev 3-PDF] [Rev 3-PS] Review of P. Pudlak paper: ``Parallel Strategies.'' Mathematical Reviews. MR#2017351.
[Rev 2-PDF] [Rev 2-PS] Review of M. Moniri paper: ``Comparing constructive arithmetical theories based on NP-PIND and coNP-PIND.'' Mathematical Reviews. MR#2030203.
[Rev 1-PDF] [Rev 1-PS] Review of A. Beckmann paper: ``Proving consistency of equational theories in bounded arithmetic.'' Bulletin of Symbolic Logic. Vol. 9. No. 1. 2003. pp.44-45.
Master's Student Papers
These are some papers I am an author on that mainly deal with research conducted by master's students who I either supervised or whose committee I was on.
[MS 2-PDF] [MS 2-PS] Stealthy Ciphertext. (with Martina Simova and Mark Stamp) Proceedings of the Conference on Internet Computing. 2005. pp. 380--386.
[MS 1-PDF] [MS 1-PS] On using Mouse Movement as a Biometric. (with Shivani Hashia and Mark Stamp) The 3rd International Conference on Computer Science and its Applications (ICCSA). 2005. pp.
Talks
This is a partial list of talks I have given at various places, together with overheads if I could find them.
Oct. 9, 2011. On the Finite Axiomatizability of Prenex R^1_2. Talk given at BIRS. Banff, Canada.
April 8. 2008. Weak Definability Notions for Independence Results in Bounded Arithmetic. Talk given at Oberwolfach, Germany.
Feb. 1. 2008. The Surjective Weak Pigeonhole Principle in Bounded Arithmetic. Talk given at VIG 2008, UCLA.
Apr. 10. 2006. When can S^1_2 prove the weak pigeonhole principle? Talk given at Newton Mathematical Institute, Cambridge University. An embarrassing glitch appeared in a proof in the original talk
on the next to last slide. This is corrected in the version I am posting here.
It is a well-known result of Krajicek and Pudlak that if S^1_2 could prove the injective weak pigeonhole principle for every polynomial time function then RSA would not be secure. In this talk, we
will consider function algebras based on a common characterization of the polynomial time functions where we slightly modify the initial functions and further restrict the amount of allowable
recursion. We will then argue that S^1_2 can prove the surjective weak pigeonhole principle for functions in this algebra.
Apr. 4. 2006. Introduction to Quantum Branching Programs. Talk given at Berkeley Quantum Information and Computation Seminar (Chemistry Department, UC Berkeley). (Joint work with Farid Ablayev, Aida
Gainutdinova, Marek Karpinski, and Cristopher Moore.)
Branching programs of bounded width are a nonuniform model of computation which can be viewed as generalizing finite automata. As such they are a relatively simple model of computation and quantum
versions of branching programs might serve as a good model on which to develop potentially implementable quantum algorithms. In this talk we will introduce quantum branching programs and discuss
upper and lower bounds in terms of deterministic and stochastic models of computation on what kind of algorithms are implementable on quantum branching programs.
Dec. 1. 2005. Video Game Engines. Talk given to High School Students at National Hispanic University Charter High School.
Apr. 16. 2005. Circuit Principles and Weak Pigeonhole Principles. Talk given at AMS Sectional Meeting #1007, UC Santa Barbara, Special Session on Complexity of Computation and Algorithms. (Joint
work with Norman Danner.) The next four talks listed below are very similar; this talk's set of slides is probably the best. I am retiring this talk at this point.
This talk considers the relational versions of the surjective and multifunction weak pigeonhole principles for various classes of formulas. These principles are interesting because of their close
connection to the provability of circuit lower bounds, and hence the P versus NP question, in weak systems of arithmetic. We show that the relational surjective pigeonhole principle for Θ^b_1 in S^1_2
implies a circuit block-recognition principle which in turn implies the surjective weak pigeonhole principle for Σ^b_1 formulas. We introduce a class of predicates corresponding to poly-log
length iterates of polynomial-time computable predicates and show that over R^1_2, the multifunction pigeonhole principle for such predicates is equivalent to an "iterative" circuit
block-recognition principle. A consequence of this is that if R^2_3 proves this circuit iteration principle then RSA is vulnerable to quasi-polynomial time attacks.
Mar. 22. 2005. Equivalents of the Weak Multifunction Pigeonhole Principle. Talk given at Oberwolfach, Germany. (Joint work with Norman Danner.)
This talk discusses joint work with Norman Danner of Wesleyan University. I began the talk by presenting a recent result of Jerabek on the surjective weak pigeonhole principle for p-time
functions. Namely, that over the theory S^1_2 this principle is equivalent to the existence of a string which is hard for any circuit of size n^k. This shows that T^2_2, a slightly stronger theory,
can prove a predicate exists which is hard for circuits of size n^k. Krajicek and Pudlak have shown if the injective weak pigeonhole principle for p-time functions is witnessable from a class C
satisfying PTIME^C = C then RSA is insecure against attacks from C. As the multifunction weak pigeonhole principle implies both the injective and surjective principles, it is natural to wonder if
there is any circuit class such that the existence of a hard string for this class is equivalent to the multifunction weak pigeonhole principle for the analogous uniform class. We show that for
R^2_2, a theory between T^2_2 and S^1_2 in strength, the multifunction weak pigeonhole principle for quasi-log iterated p-time relations is equivalent to circuit lower bounds for quasi-log iterated
p-size circuits. Thus, we show if R^2_2 could prove lower bounds for this class of circuits, one can also show RSA is insecure against quasi-polynomial time attacks.
Feb. 1. 2005. Circuit principles and weak pigeonhole variants. Talk given for my CATS 2005 paper, Newcastle, Australia. (Joint work with Norman Danner.)
This paper/talk considers the relational versions of the surjective and multifunction weak pigeonhole principles for PV, Σ^b_1 and Θ^b_2-formulas. We show that the relational surjective pigeonhole
principle for Θ^b_2-formulas in S^1_2 implies a circuit block-recognition principle which in turn implies the surjective weak pigeonhole principle for Σ^b_1 formulas. We introduce a class of
predicates corresponding to poly-log length iterates of polynomial-time computable predicates and show that over R^2_2, the multifunction pigeonhole principle for such predicates is equivalent to an
``iterative'' circuit block-recognition principle. A consequence of this is that if R^2_3 proves this circuit iteration principle then RSA is vulnerable to quasi-polynomial time attacks.
Nov. 4. 2004. Circuit Principles, The Weak Pigeonhole Principle, and RSA. SJSU Computer Science Colloquium. Nov 4. 2004. (Joint work with Norman Danner.)
The weak pigeonhole principle for a relation says that the relation does not represent a map of n^2 pigeons into n holes such that each pigeon is mapped and each hole only gets one pigeon. This
principle for polynomial time relations is closely connected with the RSA cryptographic scheme. In particular, Krajicek and Pudlak have shown that if given any polynomial time relation one can find a
polynomial time algorithm which finds for this relation either an unmapped pigeon or two pigeons in the same hole, then one could break RSA in polynomial time. In this talk we will discuss this
result. It is also an open area of research how much mathematics is needed to prove the weak pigeonhole principle. We will show this problem for certain weak systems is connected to whether the
system can prove lower bounds on the size of circuits. After discussing several results of this type from our own research and that of Jerabek, we will finally connect this back to the security
of RSA.
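In symbols, one standard way to state the principle just described (a sketch, not necessarily the exact formulation used in the talk): a relation R on [0, n^2) x [0, n) obeys the multifunction weak pigeonhole principle when either some pigeon is unmapped, (exists x < n^2)(forall y < n) not R(x,y), or else two pigeons share a hole, (exists x_1 < x_2 < n^2)(exists y < n)[R(x_1,y) /\ R(x_2,y)].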
Aug. 19, 2003. Weak Arithmetics and Unrelativized Independence Results. Invited talk. Logic Colloquium 2003 (LC 2003). Helsinki, Finland.
In this talk I will survey some techniques which can be used to show unrelativized independence results for questions like NP vs coNP, the collapse of the polynomial hierarchy, or Hilbert's Tenth
Problem in weak systems of arithmetic. I will also present some new results concerning the limits of formalizing padding arguments in commonly studied weak arithmetics.
Apr. 15, 2003. On the Power of Classical and Quantum Branching Programs. CS Colloquium. Santa Clara University.
Branching Programs have proven to be a useful model of computation in a variety of domains such as hardware verification, model checking, and other CAD applications. As branching programs are also a
very simple model of computation with several easy ways to restrict their power, it is interesting to generalize the branching program model to the quantum setting. In this talk I will survey some of
the known results about classical branching programs, discuss (assuming no background in quantum mechanics) how both classical and quantum branching program models work, and describe some of our
results concerning the power of the quantum branching program model.
Jan. 25, 2003. Bounded Versions of Hilbert's Tenth Problem and NP = co-NP. Mini-Workshop: Hilbert's Tenth Problem, Mazur's Conjecture and Divisibility Sequences. Mathematisches Forschungsinstitut
Oberwolfach. Oberwolfach, Germany.
We discuss the provability of the Matijasevich-Robinson-Davis-Putnam (MRDP) result in weak systems of arithmetic. It is a well-known result of Gaifman and Dimitracopoulos that IDelta_0+exp proves MRDP. What
was shown in their result was that every bounded formula in their language could be rewritten as a formula consisting of an existential block of quantifiers followed by an equation of the form p = q
where p and q are polynomials. By Parikh's Theorem, IDelta_0+exp cannot prove the existence of superexponentially fast growing functions. Therefore, one could ask whether, if one expanded the language
and IDelta_0+exp's access to it, one could obtain a system unable to prove MRDP in this new language. This is possible because now the bounds on the quantifiers that need to be eliminated are
larger than before. In fact, we construct a system that cannot prove MRDP and show as well that it cannot prove NP = co-NP in a certain very uniform way.
Nov. 27, 2002. Nepomnjascij's Theorem and Independence Proofs in Bounded Arithmetic. Logic Colloquium. UC Berkeley.
Many complexity classes in computer science, for example, functions in PTIME, LOGSPACE, NC, AC^0, have characterizations in terms of being the provably "NP-definable" functions of some weak theory of
arithmetic. If the complexity class in question is known not to be equal to NP, this can be used to show the weak system of arithmetic cannot prove NP = co-NP. The problem with this kind of result
is, of course, you need to be able to show first that the complexity class in question is different from NP. For all but the weakest classes this question has been open for quite some time. In this
talk I will survey some of my own work in this area as well as some recent work of Ressayre and Boughattas. Then I will present some new results of mine based on Nepomnjascij's Theorem that get
around this barrier in a slightly different setting related to the problem of whether nondeterministic linear time is equal to co-nondeterministic linear time and related to how strong a theory of
arithmetic is needed to prove the Matiyasevich-Robinson-Davis-Putnam Theorem.
Aug. 7, 2002. Quantum and Stochastic Branching Programs of Bounded Width. 29th International Colloquium on Automata, Languages, and Programming. (ICALP 2002). Malaga, Spain.
Jun. 29, 2002. Using IDelta_0 in independence proofs. Bounded Arithmetic and Complexity Classes 2002. (BACC 2002). Lisbon, Portugal.
Jun. 7, 2002. On the Bounded Version of Hilbert's Tenth Problem. 21st Days of Weak Arithmetics. (JAF '21) Steklov Institute of Mathematics St.Petersburg, Russia.
Hilbert's Tenth problem concerned the decidability of Diophantine equations over the integers. Its negative solution, Matiyasevich's theorem, amounted to showing that the class of formulas of the form
(exists y) P(x,y) = Q(x,y)
where P, Q are polynomials with natural number coefficients is equivalent to the class of recursively enumerable sets. The bounded form of Hilbert's Tenth problem is whether the NP-predicates are the
class D of predicates given by formulas of the form
(exists y)[(\sum_j y_j <= 2^{|\sum_i x_i|^k}) /\ P(x,y) = Q(x,y)]
where P, Q are polynomials with natural coefficients. This problem is related to the average case completeness of certain NP-problems. In this talk we give lower bounds on the provability of both
these problems in weak fragments of arithmetic. We show the theory I^5E cannot prove D=NP. Here I^mE has a finite set of axioms and induction on bounded existential formulas up to m lengths of a
number for the language L_2, which has the symbols <=, 0, S, +, x#y, |x|, 2^{|x||y|}, \lfloor x/2^i \rfloor, and limited subtraction. We use the non-provability of D=NP to show that I^5E cannot
prove Matiyasevich's theorem.
Apr. 8, 2002. Implicit Complexity and Bounded Arithmetic Theories. Mathematische Logik. Mathematisches Forschungsinstitut Oberwolfach. Oberwolfach, Germany.
Bounded arithmetic theories are weak fragments of arithmetic useful in the study of computational complexity classes. In this talk we will discuss known techniques for doing independence proofs in
these theories. We will then consider the question of classifying the Sigma^b_1-definable multifunctions of a particular bounded arithmetic theory, S_2. Such a classification might prove useful in
proving new independence results. We will then indicate why an implicit complexity approach to this characterization might be reasonable.
Jun., 2001. Computational complexity and independence results. Meeting on Discrete Mathematics and Mathematical Cybernetics organized by Moscow State University. Dubna, Russia.
Jul. 30, 2000. Strengths and Weaknesses of LH Arithmetic. Contributed paper and talk. Logic Colloquium 2000 (LC 2000). Paris, France.
In Pollett~\cite{cpollett00} a bounded arithmetic theory $Z$ was shown not to be able to prove the collapse of the polynomial hierarchy. This theory also had the property that if $Z \subseteq S^i_2$
for any $i\geq 1$ then the polynomial hierarchy collapses. Here $S^i_2$ are the theories of Buss~\cite{bus86a}. Unfortunately, despite this property $Z$ seemed too weak a theory to formalize many of
the arguments that have been used in computational complexity. In this talk, we give a new arithmetic characterization of the levels of $\log$-time hierarchy. Using this characterization, we propose
a new variant of the theory $TAC^0$ of Clote and Takeuti~\cite{clotak95}. This variant has nice deductive fragments which in some sense correspond to the levels of the log-time hierarchy. We show
that this theory (like $Z$) cannot prove the collapse of the polynomial hierarchy. Furthermore, we give some evidence that this theory may be strong enough to prove that the log-time hierarchy is
infinite, so unlike $Z$ it can carry out useful complexity arguments.
Jul., 2000. On the Complexity of Quantum ACC. Computational Complexity 2000. Florence, Italy. The linked slides have some of the content of the original talk; however, some slides are from a job
interview at SJSU and page 13 has been lost.
Nov. 5, 1999. Strengths and Weaknesses of LH Arithmetic. Logic Colloquium, UCLA.
One way to quantify the difficulty of the P = NP problem would be to exhibit a logical theory which is capable of formalizing current attempts at answering this question and yet which is not powerful
enough to prove or disprove this equality. Razborov has argued that most current circuit lower bound techniques can be formalized in a fragment of arithmetic which roughly has induction on
NE-predicates. Nevertheless, exhibiting any fragment of arithmetic which one can demonstrate cannot prove the collapse of the polynomial hierarchy is nontrivial. In this talk we will consider a much
weaker theory than Razborov's, but which we argue can still do some interesting mathematics. Namely, it can "reason" about all the functions in the log-time hierarchy, it can prove the log-time
hierarchy differs from NP, and, in fact, we give some evidence it might be able to show the log-time hierarchy is infinite. (In the real world this is known to be true.) On the other hand, we show
this theory cannot prove the polynomial hierarchy collapses. So, in particular, it cannot prove P = NP, and it follows that any proof that the polynomial hierarchy collapses, if such a proof exists,
must be formalized in a stronger fragment than ours.
Mar. 21, 1999. Using translation to separate bounded arithmetic theories. ASL Annual Meeting. UC San Diego.
Aug. 3-7, 1998. Multifunction Algebras and Provability of the Collapse of PH. Proof Theory and Complexity 1998. (PTAC 1998). BRICS, Aarhus, Denmark.
In this talk I will introduce some multifunction algebras which for i > 1 correspond to functions computable in polynomial time with a limited number of witnessing queries to an oracle at the i - 1
level of the hierarchy. We then consider two subtheories of the well-studied bounded arithmetic theory S_2 of Buss. Actually, one of our theories is contained in the other. Using our algebras (mainly
the i = 1 variants on our algebras) we establish the following properties for these theories:
(1) Neither theory can prove the polynomial hierarchy collapses.
(2) If either theory is contained in S^i_2 for some i then the polynomial hierarchy collapses.
(3) If either theory proves the polynomial hierarchy is infinite then for all i, S^i_2 can separate the ith level of the hierarchy.
(4) There is an interesting initial segment of any model of the weaker theory that satisfies all of IDelta_0 + exp. (Postscript: statement 4 in retrospect was not quite correct, although it did lead
me to write a paper about IDelta_0 + exp presented in San Diego.)
Apr. 17, 1998. Bounded query classes and Bounded Arithmetic. Adelphi University.
In trying to solve the P=NP question people have developed lower bounds techniques such as Hastad's Switching Lemma, Razborov-Smolensky, etc. These techniques can often be formalized in very weak
theories of arithmetic. One such theory which can prove the Switching Lemma is S_2(\alpha). A natural question is "can one show that the P=NP problem is independent of such theories?" If so, one can
rule out certain lower bounds methods as a means of solving this problem. In this talk we will consider some complexity questions related to fragments of S_2(\alpha) and bounded query classes. We
will use our results to derive a weak relativized independence result.
Mar. 5, 1998. Bounded query classes and bounded arithmetic. Logic, Computability and Complexity 1998. (LCC 1998). Amsterdam, Netherlands.
There is a well-known result of Krajicek which connects the bounded query class P^{\Sigma^p_i}(log) with the Delta^b_{i+1}-predicates of S^i_2. In this talk we discuss a generalization of this result
which was motivated by trying to show the Delta^b_{i+1}-predicates of R^i_2 are P^{\Sigma^p_i}(loglog). We also discuss a general condition which can be used to show one bounded arithmetic
theory is conservative over another. We then use these results together with recent results about bounded query classes to derive tighter collapses of the polynomial hierarchy under the assumption
that various bounded arithmetic theories are equal. We finally discuss new relativized separations of bounded arithmetic theories and a weak relativized independence result.
Oct.(?), 1997. Structure and Definability in Bounded Arithmetic Theories. Logic Seminar. Massachusetts Institute of Technology.
Jul. 28, 1997. Nonmonotonic Reasoning with Quantified Boolean Constraints. The Fourth International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR '97). Schloss Dagstuhl, Germany.
We define and investigate the complexity of several nonmonotonic logics with quantified Boolean formulas as constraints. We give quantified constraint versions of the constraint programming formalism
of Marek, Nerode, and Remmel [15] and of the natural extension of their theory to default logic. We also introduce a new formalism which adds constraints to circumscription. We show that standard
complexity results for each of these formalisms generalize in the quantified constraint case. Gogic, Kautz, Papadimitriou, and Selman [8] have introduced a new method for measuring the strengths of
reasoning formalisms based on succinctness of model representation. We show a natural hierarchy based on this measure exists between our versions of logic programming, circumscription, and default
logic. Finally, we discuss some results about the relative succinctness of our reasoning formalisms versus any formalism for which model checking can be done somewhere in the polynomial time hierarchy.
May 27, 1997. My thesis defense talk. UC San Diego.
This document last modified Sunday, 09-Oct-2011 19:23:43 PDT.
|
{"url":"http://www.cs.sjsu.edu/faculty/pollett/papers/","timestamp":"2014-04-19T06:52:48Z","content_type":null,"content_length":"39229","record_id":"<urn:uuid:e296a8f6-aa7f-457d-befc-7cf03c79bfdf>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Balls (don't know what else to call it)
April 17th 2010, 01:59 AM #1
Feb 2010
Balls (don't know what else to call it)
Hello all,
I am looking at a subspace $W$ of $C[0,1]$. Given $\delta>0$ I put:
$g(x)=\delta\sum_{n=1}^{\infty}\frac{1}{n+1}x^{n}$ , $\hspace{0.5cm} x\in[0,\frac{1}{2}]$
I would like to show that $g\in B(\bf{0},\delta)$.
My thought-process: I know that $B(\bf{0},\delta)$ signifies a ball centered at $(0,0)$ and radius $\delta>0$. This sets our limit, i.e. $g$ may not exceed this limit. So what I basically think
is that, if I can show that $|g(x)-g_{k}(x)|<\delta$ ( $k\in\mathbb{N}$) then I would have shown that $g\in B(\bf{0},\delta)$ (is my understanding correct?). I have tried this but my attempt was
not successful.
Assistance would be great.
April 17th 2010, 03:26 AM #2
Jun 2009
I suppose you are talking about $C[0,1/2]$ instead of $C[0,1]$.
To observe that $g$ is in $B(0,\delta)$ you have to show that $\|g\|=\|g-0\|<\delta$,
but you can compute that for each $0\leq x\leq\frac{1}{2}$
$|g(x)|\leq\delta\sum_{n=1}^{\infty}\frac{1}{n+1}\frac{1}{2^n}<\delta\sum_{n=1}^{\infty}\frac{1}{2^n}=\delta$. This certainly implies that the supremum of $|g|$ over $[0,1/2]$ (i.e. the norm of $g$)
is smaller than $\delta$.
Notice that these estimates show that the function $g$ is actually continuous by the Weierstrass M test.
|
{"url":"http://mathhelpforum.com/differential-geometry/139606-balls-don-t-know-what-else-call.html","timestamp":"2014-04-19T02:10:53Z","content_type":null,"content_length":"35425","record_id":"<urn:uuid:3b37f796-2960-46e6-ab2e-e54214bf2a92>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Arlington, MA Algebra 1 Tutor
Find an Arlington, MA Algebra 1 Tutor
...I then teach them a method to attack the questions that is faster and more accurate than the way most students typically approach reading questions. I also frequently time students to make sure
they are able to complete the section in the required amount of time. I teach students the grammar is...
26 Subjects: including algebra 1, English, linear algebra, GRE
...I also often work with individuals on social thinking. I have many years of experience in helping individuals with Aspergers be successful, both within and outside the classroom. Although there
are commonalities among students with Aspergers, each student also has a unique combination of needs, strengths, interests, and learning styles.
33 Subjects: including algebra 1, English, reading, writing
...Everything that we do with computers, from Word, to the internet, every page of the internet, games, etc. Everything on a computer or even a handheld device (cell phone, etc.) has a program on
it. That program was created by a computer programmer, writing in a programming language like C++, Java, Pascal, Javascript, Perl, etc.
19 Subjects: including algebra 1, physics, calculus, SAT math
...I took calculus in high school and several levels of calculus in college. I also took 3D calculus at MIT. While tutoring in my junior and senior year of college, I tutored freshman in calculus.
10 Subjects: including algebra 1, physics, calculus, algebra 2
...I used tons of robots to carry out my experiments and learnt even more there. Unfortunately the company went out of business and I took some time off. I love to tutor and help others and
recently helped my own husband with his recent microbiology class.
22 Subjects: including algebra 1, reading, English, grammar
|
{"url":"http://www.purplemath.com/Arlington_MA_algebra_1_tutors.php","timestamp":"2014-04-16T04:46:12Z","content_type":null,"content_length":"24105","record_id":"<urn:uuid:cac0d271-c11e-42e3-b6eb-ab5e8887b259>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
|
S4 subgroup of order 12
December 3rd 2013, 05:36 PM
S4 subgroup of order 12
Show that S4 (the symmetric group of degree 4) has a unique subgroup of order 12.
I know that A4 is that subgroup but I'm not really sure how to show that it is the unique subgroup. Help?
December 4th 2013, 08:55 AM
Re: S4 subgroup of order 12
I'm tired of trying to write latex in this editor. So the hints are in an attachment. If you have problems, post them.
Attachment 29855
December 4th 2013, 10:20 AM
Re: S4 subgroup of order 12
thank you!
December 16th 2013, 01:17 PM
Re: S4 subgroup of order 12
We can approach this another way:
Consider the homomorphism $\text{sgn}:S_4 \to \{-1,1\}$.
If $H$ is ANY subgroup of $S_4$, then $H \cap A_4$ is also a subgroup of $S_4$. Restricting the homomorphism sgn to this subgroup yields a homomorphism from $H \cap A_4$ to {-1,1}.
Since there are only two possibilities for the image of this homomorphism, we have either:
$\text{sgn}(H \cap A_4) = \{1\}$, which implies that $H$ is a subgroup of $A_4$ (why?), or:
$\text{sgn}(H \cap A_4) = \{-1,1\}$, which implies that $H \cap A_4$ is of index 2 in $H$.
Now if $|H| = 12$, the first possibility leads to $H = A_4$. So if $H \neq A_4$, we must have that $|H \cap A_4| = |H|/[H: H \cap A_4] = 12/2 = 6$.
Now $H \cap A_4$ is thus a subgroup of $A_4$ of order 6. There are two possibilities:
a) This subgroup is cyclic, but $A_4$ has no elements of order 6, which leaves us with:
b) $H \cap A_4 \cong S_3$.
However, if $A_4$ has a subgroup isomorphic to $S_3$, this subgroup would contain 3 elements of order 2. However, the (only) 3 elements of order 2 in $A_4$, namely:
(1 2)(3 4), (1 3)(2 4), (1 4)(2 3) generate a subgroup of order 4, and $S_3$ has no such subgroup (it cannot, for 4 does not divide 6).
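For concreteness, closure is a quick direct check, for example: $(1\,2)(3\,4)\cdot(1\,3)(2\,4) = (1\,4)(2\,3)$, and similarly for the other products; since each of these permutations is its own inverse, together with the identity they form a subgroup of order 4 (the Klein four-group).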
|
{"url":"http://mathhelpforum.com/advanced-algebra/224807-s4-subgroup-order-12-a-print.html","timestamp":"2014-04-21T10:11:37Z","content_type":null,"content_length":"9857","record_id":"<urn:uuid:3a7125d5-a6d3-4015-bea0-86a573ea7582>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
|
strongly infinity-connected (infinity,1)-topos
$(\infty,1)$-Topos theory
A locally ∞-connected (∞,1)-topos
$(\Pi \dashv \Delta \dashv \Gamma) : \mathbf{H} \to \infty Grpd$
is called strongly connected if $\Pi$ preserves finite (∞,1)-products (hence in particular the terminal object, which makes it also an ∞-connected (∞,1)-topos).
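Explicitly, preservation of finite (∞,1)-products means that for all $X, Y \in \mathbf{H}$ the canonical comparison morphisms are equivalences $\Pi(X \times Y) \stackrel{\simeq}{\to} \Pi(X) \times \Pi(Y)$ and $\Pi(*) \stackrel{\simeq}{\to} *$.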
Similarly for an $n$-connected $(n,1)$-topos.
For $n = 1$ this yields the notion of strongly connected topos.
If in addition $\mathbf{H}$ is a local (∞,1)-topos then it is a cohesive (∞,1)-topos.
|
{"url":"http://www.ncatlab.org/nlab/show/strongly+infinity-connected+(infinity%2C1)-topos","timestamp":"2014-04-20T15:52:17Z","content_type":null,"content_length":"17028","record_id":"<urn:uuid:59cf4828-3d51-45b3-bae3-ecefc815d341>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
|
"Konst " <konstance1@hotmail.com> wrote in message <iklq14$esm$1@fred.mathworks.com>...
> I have a column vector with ones and zeros e.g. b=[0 0 1 1 1 0 0 1].
To me it looks like a *row* vector.
> I want to start 'reading' the table while doing the following: If c=0.5, k=0 and d=0 are my initial values, every time b(i)>c then d=d+1. So in the above example I should have d=4. I can do that,
> it's easy. So here is my problem: I want k=k+1 but only the times that zeros turn to ones while reading the vector (only at the beginning of a "series of ones"), so here I should have k=2.
What about the following test, which returns TRUE only when we are at the beginning of sequence of 1s:
b(i)==1 & (i==1 || b(i-1)==0)
% Bruno
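A fully vectorized version of the same idea (my addition, not from the original thread):
b = [0 0 1 1 1 0 0 1];
c = 0.5;
d = sum(b > c)                   % entries above the threshold -> 4
k = sum(diff([0, b > c]) == 1)   % rising edges, i.e. starts of runs of ones -> 2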
> Here is my code:
> d=0;
> c=0.5;
> k=0;
> for i=1:length(b)
> if b(i)>c
> k=k+1; %this is my problem..
> d=d+1;
> end
> end
> k
> d
> Any thoughts will be really appreciated!
|
{"url":"http://www.mathworks.com/matlabcentral/newsreader/view_thread/303766","timestamp":"2014-04-20T15:57:13Z","content_type":null,"content_length":"43471","record_id":"<urn:uuid:9ed02da3-b136-4712-8438-e9a918a90c00>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-user] random number
Robert Kern rkern at ucsd.edu
Tue Jun 29 16:34:32 CDT 2004
Steve Schmerler wrote:
[Gary Pajer:]
>>I was going to suggest random()*1e39, but it seems that that is not
>>really what you want.
> Yes. The problem is that the smallest random numbers are mostly only about a factor of
> 1e1 smaller (1e199) than the max value (1e200 in my case).
In [9]: a = stats.uniform.rvs(0,1e200,size=1000)
In [10]: min(a)
Out[10]: 8.4753666305914514e+196
In [11]: max(a)
Out[11]: 9.9870741367340092e+199
Works as expected for me.
>>A uniformly distributed collection of numbers between (0, 1e38) is going
>>to have very few members between (0, 1e3). Or (0, 1e24). Or (0,
>>1e35). Only 1 in 10 will be in the range (0, 1e37). It sounds like
>>you may be looking for logarithmically distributed numbers.
>>How about 10**x where x is a random number in (0, 38).
> Yeah, I tried something similar:
> import random
> from Numeric import *
> def alloverRand(min, max):
>     # e.g. max = x*10**y = exp(max_exponent)
>     max_exponent = log(max)
>     return exp(random.uniform(.1*max_exponent, max_exponent))
> This gives a log-like distribution. Though it's not uniform, I'll test whether it
> works since it covers _all_ numbers from (nearly 0) to 'max'.
Getting pseudo-random numbers over such a large range leaves gaping
holes no matter what you do. Double precision will only get you 1e16ish
possible numbers. Uniform rvs just spreads out the holes mostly evenly
across the whole range. The exponential distribution puts those holes
mostly out near the large end. If you want the orders of magnitude to be
distributed evenly, then that's what you'd use. If you want the values
themselves to be distributed evenly, then you will have to live with the
fact that, with a perfect RNG, the probabilities of generating small
numbers like 1e23, 1e45, and 1e2 from a range of 1e200 in your lifetime
are too small. With a finite precision PRNG, most orders of magnitude
are impossible to achieve:
In [19]: machar_double.epsilon * 1e200
Out[19]: 2.220446049250313e+184
The smallest exponent you can expect is 184.
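A quick check of that spacing bound, using modern NumPy in place of the old Numeric machar_double (my addition):
import numpy as np

eps = np.finfo(float).eps     # ~2.22e-16, relative spacing of doubles
print(eps * 1e200)            # ~2.22e+184: the gap between adjacent doubles near 1e200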
So I guess my question is, "Can you do anything to reduce the range to
something reasonable?"
Robert Kern
rkern at ucsd.edu
"In the fields of hell where the grass grows high
Are the graves of dreams allowed to die."
-- Richard Harter
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2004-June/002974.html","timestamp":"2014-04-17T03:58:09Z","content_type":null,"content_length":"5047","record_id":"<urn:uuid:76cc46fd-83cb-403e-9903-c26aa8d89a93>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fibonacci biography facts
Mathematical contributions of Fibonacci
Fibonacci introduced the Hindu-Arabic arithmetic system in Europe for the very first time. This positional system is still in use today; it has ten digits, including zero, and a decimal point.
Fibonacci wrote a book, Liber abbaci (the book of the abacus, or of calculating), which explains the various methods for doing arithmetic in the decimal system. The book was completed by 1202, and Fibonacci tried his best to persuade other mathematicians to adopt the system. It is written in Latin and meticulously explains the methods for addition, subtraction, multiplication and division, with many problems to further illustrate the process.
The Fibonacci numbers
The Fibonacci series is built by adding the two preceding numbers to get the next one: 0+1=1, 1+1=2, 1+2=3, and so on. So the Fibonacci numbers are 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on. Remarkably, these Fibonacci numbers have a role in forex trading and currency markets: they aid in the charting of curves, rectangles and triangles through which forecasts of the forex market are made.
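A minimal sketch of the sequence (my addition, not part of the original article), including the ratio of consecutive terms that underlies the common 0.618 retracement level:
def fib(n):
    # first n Fibonacci numbers; each term is the sum of the two before it
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

print(fib(10))                    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fib(10)[-2] / fib(10)[-1])  # 21/34 ~ 0.618, the golden-ratio retracement level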
The important contribution of Fibonacci in the number theory is one of the most important incorporations in the domain of arithmetic and calculations. He conferred the following benefits.
• Implementation of square root notation.
• Introduction of bars in the fractions. Earlier, the numerator used to have quotations around it.
Later it was discovered that the Fibonacci numbers and the respective series were not only limited to the arithmetic but it also played a pivotal role in economics, commerce and trading sectors too.
Fibonacci Forex trading
The concept of Fibonacci forex trading is used by millions of forex traders all around the world. These numbers are used to forecast coming oscillations in forex charts. Although the predictions cannot be called flawless, how close they come is quite striking, and this has led to the emergence of Fibonacci price points. The Fibonacci levels are thus elementary, fundamental concepts to grasp before delving into the risky environment of forex trading.
|
{"url":"http://www.forexrealm.com/technical-analysis/fibonacci/fibonacci-biography-history-facts.html","timestamp":"2014-04-19T01:47:29Z","content_type":null,"content_length":"15493","record_id":"<urn:uuid:2c7fdde5-476b-4b0b-8431-fbd60afdc38a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Some Mal'cev conditions for varieties of algebras.
Moses, Mogambery.
This dissertation deals with the classification of varieties according to their Mal'cev properties. In general the so called Mal'cev-type theorems illustrate an interplay between first order
properties of a given class of algebras and the lattice properties of the congruence lattices of algebras of the considered class. CHAPTER 1. A survey of some notational conventions, relevant
definitions and auxiliary results is presented. Several examples of less frequently used algebras are given together with the important properties of some of them. The term algebra T(X) and useful
results concerning 'term' operations are established. A K-reflection is defined and a connection between a K-reflection of an algebra and whether a class K satisfies an identity of the algebra is
established. CHAPTER 2. The Mal'cev-type theorems are presented in complete detail for varieties which are congruence permutable, congruence distributive, arithmetical, congruence modular and
congruence regular. Several examples of varieties which exhibit these properties are presented together with the necessary verifications. CHAPTER 3. A general scheme of algorithmic character for some
Mal'cev conditions is presented. R. Wille (1970) and A. F. Pixley (1972) provided algorithms for the classification of varieties which exhibit strong Mal'cev properties. This chapter is largely
devoted to a modification of the Wille-Pixley schemes. It must be noted that this modification is quite different from all such published schemes. The results are the same as in Wille's scheme but
slightly less general than in Pixley's. The text presented here, however, is much simpler. As an example, the scheme is used to confirm Mal'cev's original theorem on congruence permutable varieties.
Finally, the so-called Chinese variety is defined and Mal'cev conditions are established for such a variety of algebras. CHAPTER 4. A comprehensive survey of literature concerning Mal'cev conditions
is given in this chapter.
Thesis (M.Sc.)-University of Natal, Durban, 1991.
|
{"url":"http://researchspace.ukzn.ac.za/xmlui/handle/10413/8089","timestamp":"2014-04-24T22:42:18Z","content_type":null,"content_length":"24212","record_id":"<urn:uuid:e5a92ec7-2203-4691-b297-7653eb46ade4>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Essentials of Basic College Mathematics, Second Edition
Essentials of Basic College Mathematics, Second Edition | 978-0-321-57065-9
ISBN-13: 9780321570659
Author(s): John Tobey; Jeffrey Slater; Jamie Blair
Price Information
Rental Options / Expiration Date
eTextbook Digital Rental: 180 days
Our price: $58.99
Regular price: $146.67
You save: $87.68
Additional product details
ISBN-13 9780321587862, ISBN-10 0321587863
ISBN-10 0-321-57065-0, ISBN-13 978-0-321-57065-9
Publisher: Pearson
Copyright year: © 2009 Pages: 464
The Tobey/Slater series builds essential skills one at a time by breaking the mathematics down into manageable pieces. This practical “building block” organization makes it easy for students to
understand each topic and gain confidence as they move through each section. Students respond well to regular feedback, so the authors provide a “How am I Doing?” guide to give students constant
reinforcement and to ensure that students understand each concept before moving on to the next. With Tobey/Slater, students have a tutor and study companion with them every step of the way.
This new edition features Quick Quizzes and Classroom Quizzes, as well as vignettes on how to use math to save money, making this text even more practical and accessible for students.
CourseSmart textbooks do not include any media or print supplements that come packaged with the bound book.
|
{"url":"http://www.coursesmart.com/9780321587862","timestamp":"2014-04-18T11:04:14Z","content_type":null,"content_length":"49197","record_id":"<urn:uuid:7bc7fd01-943c-4cae-8a51-023c73cb6fd6>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Visualization of the 2D Voronoi Diagram and the Delaunay Triangulation
This example uses a good implementation of Fortune's algorithm by BenDi (see here). The goal of this application is the visualization of the Voronoi diagram.
For more information, see these articles on Wikipedia:
Using the code
The solution for the visualization problem is very easy. We add two static methods on the Fortune class:
/// <summary>
/// Visualization of 2D Voronoi map.
/// </summary>
/// <param name="weight">Width of the result image.</param>
/// <param name="height">Height of the result image.</param>
/// <param name="Datapoints">Array of data points.</param>
/// <returns>Result bitmap.</returns>
public static Bitmap GetVoronoyMap(int weight, int height, IEnumerable Datapoints)
{
    Bitmap bmp = new Bitmap(weight, height);
    VoronoiGraph graph = Fortune.ComputeVoronoiGraph(Datapoints);
    Graphics g = Graphics.FromImage(bmp);
    // Voronoi vertices
    foreach (object o in graph.Vertizes)
    {
        Vector v = (Vector)o;
        g.DrawEllipse(Pens.Black, (int)v[0] - 2, (int)v[1] - 2, 4, 4);
    }
    // the original data points
    foreach (object o in Datapoints)
    {
        Vector v = (Vector)o;
        g.DrawEllipse(Pens.Red, (int)v[0] - 1, (int)v[1] - 1, 2, 2);
    }
    // Voronoi edges
    foreach (object o in graph.Edges)
    {
        VoronoiEdge edge = (VoronoiEdge)o;
        try
        {
            g.DrawLine(Pens.Brown, (int)edge.VVertexA[0], (int)edge.VVertexA[1],
                (int)edge.VVertexB[0], (int)edge.VVertexB[1]);
        }
        catch { } // skip edges that cannot be drawn; the opening try and the last
                  // DrawLine argument were lost when the article was extracted
    }
    return bmp;
}
/// <summary>
/// Visualization of Delaunay Triangulation.
/// </summary>
/// <param name="weight">Width of the result image.</param>
/// <param name="height">Height of the result image.</param>
/// <param name="Datapoints">Array of data points.</param>
/// <returns>Result bitmap.</returns>
public static Bitmap GetDelaunayTriangulation(int weight, int height, IEnumerable Datapoints)
{
    Bitmap bmp = new Bitmap(weight, height);
    VoronoiGraph graph = Fortune.ComputeVoronoiGraph(Datapoints);
    Graphics g = Graphics.FromImage(bmp);
    foreach (object o in Datapoints)
    {
        Vector v = (Vector)o;
        g.DrawEllipse(Pens.Red, (int)v[0] - 1, (int)v[1] - 1, 2, 2);
        foreach (object obj in graph.Edges)
        {
            VoronoiEdge edge = (VoronoiEdge)obj;
            // a Delaunay edge joins the data points on either side of a Voronoi edge
            if ((edge.LeftData[0] == v[0]) & (edge.LeftData[1] == v[1]))
                g.DrawLine(Pens.Black, (int)edge.LeftData[0], (int)edge.LeftData[1],
                    (int)edge.RightData[0], (int)edge.RightData[1]);
        }
    }
    return bmp;
}
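A hypothetical usage sketch (my addition; the Vector constructor taking two coordinates and the required usings are assumptions about BenDi's library, not shown in the article):
// assumes: using System; using System.Collections; using System.Drawing;
ArrayList points = new ArrayList();
Random rnd = new Random();
for (int i = 0; i < 50; i++)
    points.Add(new Vector(rnd.Next(500), rnd.Next(500)));   // assumed Vector(double, double) ctor
Fortune.GetVoronoyMap(500, 500, points).Save("voronoi.png");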
And now, we have images with diagrams:
Points of interest
The Voronoi diagram is a very useful construction, with especially interesting applications in terrain generation. I would like to develop a simple terrain-generation algorithm based on the Voronoi diagram in the future.
|
{"url":"http://www.codeproject.com/Articles/31067/Visualization-of-the-D-Voronoi-Diagram-and-the-De?msg=2812421","timestamp":"2014-04-19T22:00:20Z","content_type":null,"content_length":"107127","record_id":"<urn:uuid:452308ea-9758-4483-b8e5-73d91801bb95>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Illustration 31.5
Illustration 31.5: Power and Reactance
Assume an ideal power supply. The graph shows the voltage (red) across the source and the current (black) through the circuit as a function of time (voltage is given in volts, current is given in
milliamperes, and time is given in seconds).
Resistive circuit: Look at the plot of voltage and current. Power is given by P = VI, but the current and voltage vary in time. It is more useful to think about the average power, which is P = V_rms I_rms = I_rms^2 R = V_rms^2/R. Notice that the current and voltage are always in phase, and so the product VI is always positive.
Capacitive circuit: Look at the plot of voltage and current. Notice that when the voltage is going from 0 to a more positive number, the current is going from a maximum value toward 0, but then when
the voltage is going from its max back towards 0, the current has changed direction and is going from 0 down to a negative value. Thus, on the average over many cycles, the current and voltage are
out of phase by π/2 = 90^o. When the voltage is positive, the current is negative as much as it is positive, and the same applies when the voltage is negative. This means that the average power is 0.
Compare this with the resistive load. When the voltage is positive, the current is positive, and when the voltage is negative, the current is negative. The resistor is always drawing current away
from the source. Another way to think about this is that the capacitor simply stores charge. As the voltage changes direction, the current goes back and forth between the source and the capacitor, so
the capacitor does not dissipate any energy over time (it simply stores the energy briefly).
Inductive circuit: Compare the plot of voltage and current for the inductor to that of the capacitor. Can you explain why the average power is 0 for this inductor just as it is for the capacitor?
If you have a circuit with a combination of resistive, capacitive, and inductive loads, calculating the average power dissipated now requires calculating V_rms I_rms cos φ, where φ is the phase shift between the current and the voltage (see Exploration 31.4).
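A quick numerical check of that formula (my addition; it assumes ideal unit-amplitude sinusoids, so V_rms = I_rms = 1/sqrt(2)):
import numpy as np

t = np.linspace(0, 2*np.pi, 100001)
phi = np.pi/3                 # current lags voltage by 60 degrees
v = np.cos(t)                 # normalized voltage
i = np.cos(t - phi)           # normalized current
print(np.mean(v*i))           # ~0.25, the average power
print(0.5*np.cos(phi))        # V_rms*I_rms*cos(phi) = 0.25, matching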
Illustration authored by Anne J. Cox.
Script authored by Wolfgang Christian and Anne J. Cox.
|
{"url":"http://www.compadre.org/Physlets/circuits/illustration31_5.cfm","timestamp":"2014-04-17T04:03:49Z","content_type":null,"content_length":"21590","record_id":"<urn:uuid:bb8dceba-bbd5-47b3-a24b-9db95062aaac>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exercise 5.1 improves the linear regression implementation from Exercise 3 by adding a regularization parameter that reduces the problem of over-fitting. Over-fitting occurs especially when fitting a high-order polynomial, which we will try to do here.
Data: here are the points we will make a model from:
# linear regression
mydata = read.csv("http://spreadsheets.google.com/pub?hl=en_GB&hl=en_GB&key=0AnypY27pPCJydGhtbUlZekVUQTc0dm5QaXp1YWpSY3c&output=csv", header = TRUE)
# view data
plot(mydata)
Machine Learning Ex4 – Logistic Regression and Newton’s Method
Exercise 4 is all about using Newton's Method to implement logistic regression on a classification problem. For all this to make sense I suggest having a look at Andrew Ng's machine learning lectures on OpenClassroom. We start with a dataset representing 40 students who were admitted to college and 40 students who were not admitted, and their corresponding...
Machine Learning Ex3 – multivariate linear regression
Exercise 3 is about multivariate linear regression. The first part is about finding a good learning rate (alpha), and the second part is about implementing linear regression using normal equations instead of the gradient descent algorithm.
Data, as usual hosted in Google Docs:
mydata = read.csv("http://spreadsheets.google.com/pub?key=0AnypY27pPCJydExfUzdtVXZuUWphM19vdVBidnFFSWc&output=csv", header = TRUE)
# show last 5 rows
tail(mydata, 5)
area bedrooms price
43 2567 ...
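A minimal sketch of the normal-equations step mentioned above (my addition; it assumes the area/bedrooms/price columns shown in the output):
# theta = (X'X)^{-1} X'y, solved without forming an explicit inverse
X <- cbind(1, mydata$area, mydata$bedrooms)
y <- mydata$price
theta <- solve(t(X) %*% X, t(X) %*% y)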
Machine Learning Ex2 – linear regression
Andrew Ng has posted introductory machine learning lessons on the OpenClassRoom site. I've watched the first set and will here solve Exercise 2. The exercise is to build a linear regression implementation; I'll use R. The point of linear regression is to come up with a mathematical function (model) that represents the data as best as possible, and that is done...
|
{"url":"http://www.r-bloggers.com/tag/regressionanalysis/","timestamp":"2014-04-19T17:04:04Z","content_type":null,"content_length":"30398","record_id":"<urn:uuid:a1657ac6-6524-49bc-ba68-e3f8cb750d01>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mirror, Mirror
Yesterday I taught the basics of geometric optics to my Physics 202 students. We did plane mirrors, spherical mirrors, and thin lenses. All told it’s a fairly straightforward chapter that’s all theme
and variations on one equation. The sign rules for object and image distances have to be remembered correctly, but that’s not so difficult.
Now while students aren’t necessarily going to come into class with previously built intuition for Ampere’s law and all the rest of the fun equations of the electromagnetic fields, they do have many
hundreds of hours of experience looking in mirrors. All of them know how mirrors work. Or at least I’m sure they all think so!
But even simple things can have subtle complications. I’m thinking about giving this as a quiz question:
Why do mirrors reflect right and left, but not up and down?
Give it a try!
1. #1 Douglas July 16, 2009
ubevmbagny fgrerb ivfvba[rot13]
2. #2 Max Fagin July 16, 2009
“Why do mirrors reflect right and left, but not up and down?”
Umm, they don’t, do they? When I’m looking in a mirror, my left hand appears on the left side of the mirror, my right hand on the right side, and my head an feet at the top and bottom
3. #3 Bexley July 16, 2009
They dont – they reflect forward and back.
Stand in front of a mirror and move your hand up and the reflections hand moves up too; move it right and the reflections hand moves to your right; however if you move your hand forward (away
from you) and the reflections hand moves towards you.
4. #4 Bexley July 16, 2009
Previous answer assumed that the mirror is set out in front of you like in the picture you have.
I guess why people are confused is because of the way we think about our left and right sides. We can only think about left and right once we have defined front/back and up/down. Since a mirror
in front of us reflects front/back we have the illusion of right and left being reflected. When we wave our right hand the reflection waves the hand to our right but we think of it as the
reflections left hand.
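A minimal numeric restatement of this point (my addition, not from the original thread): a mirror facing you negates only the front-back axis.
import numpy as np

M = np.diag([1, 1, -1])   # mirror in the xy-plane; z is the front-back axis
right, up, forward = np.eye(3)
print(M @ right)          # [ 1 0 0]  right stays right
print(M @ up)             # [ 0 1 0]  up stays up
print(M @ forward)        # [ 0 0 -1] forward flips: the only axis the mirror reverses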
5. #5 NJ July 16, 2009
Because then it wouldn’t be a mirror operation, it would be an inversion center!
6. #6 Brian H July 16, 2009
Exactly, and text appears to have been reversed because you turn it around to point it at the mirror! If you hold a transparency with the text correct to you up to a mirror, the text will also be
correct in the mirror.
7. #7 Gray Gaffer July 16, 2009
trick question. When I look in the mirror I am seeing (mostly) what is behind me. To see it directly I must rotate 180 degrees about a vertical axis. Hence the left/right inversion.
compare and contrast with the directionality of a focussed image, as at the prime focus of a telescope.
8. #8 Robert July 16, 2009
I think that is caused by the position of our eyes. Would they be ordered vertically we would see an upside down reflection
9. #9 Paul Murray July 17, 2009
@3 beat me to it!
But the reason for the error is that when working out what’s different, we mentally place ourselves where the reflection is – standing upright, but facing us.
10. #10 Max July 17, 2009
When I was 8 or 9 years old the notion got in my head while I was sitting in the attic staring at a full length mirror. Took me at least 30 minutes if I recall correctly. Then I moved onto
concave surfaces like spoons. Thanks for bringing back the memory.
11. #11 Gerry July 17, 2009
True story: My college physics instructor responded to a question about mirror reflection by saying: “Think of leaning over and looking between your legs. What do you see?” He had a great deal of
trouble getting the class back in order.
12. #12 Uncle Al July 17, 2009
Two plane mirrors intersect at right angles. The center reflection is double reversed – which is to say, unchanged. Buy a good suit and enjoy the shop's mirror.
13. #13 Anonymous July 17, 2009
The reverse neither. They reflect back and front.
14. #14 bad Jim July 17, 2009
We think a mirror reverses right and left because we typically rotate an object around a vertical axis in order to view it. If instead you rotate a book around a horizontal axis you will find the
text reversed vertically instead.
15. #15 Neil B ♪ July 19, 2009
Yes, commenters are “right” and I’ve made the same points. However, some people say that the image reverses the handness per the standards of the “reversed world” – that “right hand” you wave is
still “on your right” in the mirror, but it’s a “left hand” relative to the direction the “mirror image you” is facing. Wow, even more philobabble than the fallacious attempt to use “decoherence”
to solve the collapse problem.
16. #16 William July 19, 2009
17. #17 ewkk May 6, 2011
I honistly hate it so go get a real job.
|
{"url":"http://scienceblogs.com/builtonfacts/2009/07/16/mirror-mirror/","timestamp":"2014-04-16T16:55:46Z","content_type":null,"content_length":"58078","record_id":"<urn:uuid:ac6a34d7-af09-46e6-98cf-ca9c8561aefa>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Schur polynomials.
The functions produced by the Schur algorithm satisfy recurrences whose displayed formulas did not survive extraction; iterating them yields the Schur continued fraction representation of the Schur function, its nth approximant, and the nth tail of the function.
It is shown in [5], and easily verified, that the associated LFT has polynomial coefficients. It follows by induction that these polynomials have degree less than n and degree n, respectively; they are called the nth Schur polynomials associated with the Schur function.
The following result (its statement was also lost in extraction) shows how the Schur polynomials provide the Szego polynomial. The proof follows by comparison of the recurrence relations [5].
Greg Ammar
Thu Sep 18 20:40:30 CDT 1997
|
{"url":"http://www.math.niu.edu/~ammar/cortona/node7.html","timestamp":"2014-04-20T05:59:43Z","content_type":null,"content_length":"4300","record_id":"<urn:uuid:ff096509-54ca-4f9e-a1c1-4cb98af10265>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hubbard Quotes
Calculus.ra (642kb) Excerpted from Philadelphia Doctorate Course, Tape #58
[some text dropped]
"Rate of change is this mathematics known as Calculus. Calculus, it's a very interesting thing, is divided into two classes -- there's Differential Calculus and Integral Calculus. The Differential
Calculus is in the first part of the textbook on Calculus, and Integral Calculus is in the second part of the textbook on Calculus. As you look through the book, you'll find in the early part of the
book on Calculus, "dx" over "dy", a little "dx", and a little "dy" -- and one's above the other on a line -- predominates in the front part of the book, but as you get to the end of the book you find
these "dx" and "dy"s preceded by a summation sign, or are equating to a summation sign, and the presence of this shows that we are in the field of Integral Calculus.
Now I hope you understand this, because I've never been able to make head nor tail of it. It must be some sort of a Black Magic operation, started out by the Luce cult -- some immoral people who are
operating up in New York City, Rockefeller Plaza -- been thoroughly condemned by the whole society. Anyway, their rate-of-change theory -- I've never seen any use for that mathematics, by the way --
I love that mathematics, because it -- I asked an engineer, one time, who was in his 6th year of engineering, if he'd ever used Calculus, and he told me yeah, once, once I did, he said. When did you
use it? And he said I used it once. Let me see, what did you use it on? Oh yeah. Something on the rate-of-change of steam particles in boilers. And then we went out and tested it and found the answer
was wrong.
Calculus -- if you want to know -- there is room there for a mathematics which is a good mathematics. And it would be the rate of co-change, or the rate of change when something else was changing, so
that you could establish existing rates of change in relationship to each other, and for lack of that mathematics, nobody has been able to understand present time -- you just can't sum it up easily
-- or let us say, for lack of an understanding of what present time was, nobody could formulate that mathematics. So, actually there's a big hole there that could be filled -- a thing called calculus
is trying to fill that hole, right now, and it can't.
But the rates of change -- it comes closest to it. I think it was one of Newton's practical jokes. Here we have Calculus, and it's trying to measure a rate of change. Well, if we had something that
was really workable and simple, it would be formed on this basis. The present time, and gradients of time were gradients of havingness, and as one havingness changed, you could establish a constancy
of change for other related havingnesses. But because the basic unit of the universe is two, you would have to have a rate of change known and measured for every rate of change then estimated. The
mathematics won't operate in this universe unless it has simultaneous equations. If you have two variables, you must have two equations with which to solve those two variables. In other words you
have to compare one to the other simultaneously. Otherwise you just get another variable. Of course, people laughingly do this. They take an equation with two variables, and then they solve it. And
then you say "What have you got?" And the fellow says "K". And you say now just a minute -- you got "K", huh? Well, what is "K"? Well "K", we have established arbitrarily as being -- well, say, why
did you work the equation out in the first place? You had "K", didn't you?"
Casbah.mp3 (430184 bytes)
Casbah.ra (358660 bytes)
On 9 Dec 1952, Hubbard lectured on "Whats wrong with This Universe: A Working Package for the Auditor". Hubbard claimed that all religions originally came from "implants". He describes the apparent
origins of the Islam religion:
"It's an enormous stone hanging suspended in the middle of a room, this is an incident called the Emanator by the way, and this thing is by the way the source of the Mohammedan Lodestone that they
have hanging down there, that, eh, when Mohammed decided to be a good small-town booster in eh Kansas, Middle-East, or something of the sort. By the way, the only reason he mocked that thing up, is
the trade wasn't good in his hometown. That's right. You read the life of Mohammed.
And he's got a black one and it sort of hung between the ceiling and the floor, I don't know, maybe they call it the Casbah or something or... Anyway, anyway, that thing is a mockup of the Emanator!
The Emanator is bright, not black.
And so, your volunteer, who insists on a sightseeing trip, goes in and this thing is standing in the middle of the room, and it's going 'wong wong wong wong wong' and he says: "Isn't that pretty?".
It sure is, and then he says "Mmmgrmrm ponk" Why, I'll tell you, they cart him from there, and they take him in and they do a transposition of beingness."
|
{"url":"http://www.rr.cistron.nl/xenu/quotes.htm","timestamp":"2014-04-20T00:48:21Z","content_type":null,"content_length":"26772","record_id":"<urn:uuid:1efd8ee9-3dc3-4959-abb0-954e2a2cd68b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Watauga, TX SAT Math Tutor
Find a Watauga, TX SAT Math Tutor
I graduated from Brigham Young University in 2010 with a degree in Statistical Science and I am looking into beginning a master's program soon. I have always loved math and took a variety of math
classes throughout high school and college. I taught statistics classes at BYU for over 2 years as a TA and also tutored on the side.
7 Subjects: including SAT math, statistics, algebra 1, geometry
...Here is a highlight of my education accomplishments: I have worked with WyzAnt Tutoring from September 2011 to the present. While here, I 1) serve a diverse clientele of 111 customers, 2) earn
17 client recommendations and a 98% customer satisfaction rating, 3) Conducted over 278 tutorials, 4) ...
40 Subjects: including SAT math, reading, English, calculus
...So why am I tutoring? One of the most satisfying aspects of being an educator is “seeing the light come on” when a student finally understands a new concept or idea and is able to apply that
newly acquired knowledge in the classroom or on an exam. Unfortunately, with formal classroom education being what it is, this occurs much less often than it should in today’s classrooms.
55 Subjects: including SAT math, reading, English, calculus
I have undergraduate degrees in Mechanical and Aerospace Engineering. I completed a Master's in Industrial Engineering and have four years of industry experience. I have been tutoring and
mentoring since high school, all the way through college.
21 Subjects: including SAT math, chemistry, English, accounting
I have been a high school math teacher for a number of years and am a former youth minister. I enjoy helping students work through their fears of math to become successful! I have taught algebra
1 and 2, geometry, and pre-calculus (including trigonometry). I have also taught statistics at the college level.
8 Subjects: including SAT math, geometry, algebra 1, algebra 2
|
{"url":"http://www.purplemath.com/watauga_tx_sat_math_tutors.php","timestamp":"2014-04-18T16:08:18Z","content_type":null,"content_length":"24138","record_id":"<urn:uuid:7efa7d3f-6111-4dc6-9186-5aff5537e262>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is Algebra Necessary?
Has anyone else heard that this opinion from the NY Times is becoming a real debate? With my average understanding of algebra, I use it and think we need to continue using it – but not everyone agrees.
This debate matters. Making mathematics mandatory prevents us from discovering and developing young talent. In the interest of maintaining rigor, we’re actually depleting our pool of brainpower.
I say this as a writer and social scientist whose work relies heavily on the use of numbers. My aim is not to spare students from a difficult subject, but to call attention to the real problems
we are causing by misdirecting precious resources.
Last month, Professor Andrew Hacker of Queens College questioned the role and value of learning algebra. He says the course is a stumbling block for many students, and hurts both high school and
college graduation rates. But critics argue algebra is essential for learning critical thinking skills needed in everyday life.
Professor Andrew Hacker:
Okay. My basic question is, why are we making every student take algebra? And, by the way, various myths and mantras grow up around a subject like this. For example, the notion that algebra
somehow enhances, sharpens your critical thinking skills. Total myth. Let me take this, let's suppose over here are 10 people who have mastered algebra, and another 10 over here who have not. Are
you going to tell me that the people who have mastered algebra have better thinking ability, are more thoughtful about politics, society, have better marriages, make better decisions about what
we should do with a country like Syria.
Please do not assume that you have to go to a math book to sharpen your mind. You can read "Emma Bovary." You can read "The Great Gatsby." You can take a course in anthropology. Math is not the
key to a sharp mind. And, in fact, I'm really believing more and more that the kind of thinking skills that math encourage are really constrictive because there's a very cold logic to math,
whereas if you study anthropology you'll discover there are much more variation in the world.
Towards the end, Hacker’s reasoning gets just bizarre. He keeps emphasising how important “citizen statistics” is. I’m baffled as to how one could teach statistics in any useful way without the
material he wants to throw out! Prerequisites: we needz them. “Is Algebra Necessary?” If you want to do statistics or economics, yes, it is
|
{"url":"http://whywontgodhealamputees.com/forums/index.php/topic,23705.msg529685.html","timestamp":"2014-04-16T22:01:52Z","content_type":null,"content_length":"153133","record_id":"<urn:uuid:bd0ddaf0-6d03-47c8-be41-a5ce517f9c0f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Functions whose divided difference is uniformly convergent to $0$
Let a function $f:(a,b) \rightarrow \mathbb R$ be continuous and such that for each $\varepsilon >0$ there exists a $\delta >0$ such that for all $x \in (a,b)$ and $|h|<\delta$ with $x+nh \in (a,b)$:
$$\left| \frac{\Delta_h^n f(x)}{h^n}\right| := \left|\frac{\sum_{i=0}^n (-1)^{n-i} \frac{n!}{i!(n-i)!} f(x+ih) }{h^n}\right| <\varepsilon .$$
Is $f$ then a polynomial of degree $\leq n-1$?
What is the research question in which this problem arose? – Michael Renardy Nov 17 '12 at 17:01
1 Answer
Yes. If the first divided difference has this property then the function is constant. As the $n$-th divided difference is the first divided difference of the $(n-1)$-st divided difference, we conclude that the $(n-1)$-st divided difference is constant. So your function is polynomial of degree at most $n-1$.
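A quick numerical illustration (my addition): for a polynomial of degree at most $n-1$, the quantity above vanishes identically, up to floating-point noise amplified by the division by $h^n$:
from math import comb

def nth_diff(f, x, h, n):
    # n-th forward difference of f at x, divided by h^n
    return sum((-1)**(n - i) * comb(n, i) * f(x + i*h) for i in range(n + 1)) / h**n

f = lambda x: 3*x**2 - x + 2          # degree 2
print(nth_diff(f, 0.3, 1e-3, 3))      # ~0, consistent with the statement (n = 3 > deg f)
print(nth_diff(f, 0.3, 1e-3, 2))      # 6.0 = 2! times the leading coefficient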
|
{"url":"http://mathoverflow.net/questions/112686/functions-whose-divided-difference-is-uniformly-convergent-to-0","timestamp":"2014-04-19T15:01:50Z","content_type":null,"content_length":"50898","record_id":"<urn:uuid:94b5b9dd-3d42-47c5-b510-40284b2bf3e6>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Improving MATLAB for loops!
October 7th 2012, 11:49 PM
Improving MATLAB for loops!
I am very, very new to Matlab. I migrated c++ code to Matlab, because I knew vectors were involved in the problem, and thought I could get a performance boost using the language. I pretty much
just translated the code line-by-line. My inexperience was obvious, since the code runs slower than it did in c++....
Obviously, I'm not taking full advantage of the language's features... or maybe the problem just isn't suited for it.
Below are the two code segments (both loops) that have become a huge bottle neck. It is a graph problem, with this iteration having 501 points (24675 edges). The program also used for much bigger
test sets, up to 20,000 points, so speed is a necessity.
The first code segment takes about a minute. It reads in all the point values in a text file. master_graph is a map (int keys) of vectors (edges)
for i = 1:(numEdges)
    %? change types to save space?
    line = fgets(fid);   %# read line by line
    A = sscanf(line, '%i %i %i');
    num1 = A(1);
    num2 = A(2);
    num3 = A(3);
    temp_node = num1;
    temp_edge.endpoint = num2;   % (assumed: these two assignments were dropped
    temp_edge.distance = num3;   %  when the post was pasted)
    if (temp_edge.distance > max_distance_for_wb)
        max_distance_for_wb = temp_edge.distance;
    end
    if isKey(master_graph, temp_node)   % (assumed containers.Map key check, lost in the paste)
        master_graph(temp_node) = [master_graph(temp_node) temp_edge];
    else
        master_graph(temp_node) = temp_edge;
    end
    % store the reverse edge under the other endpoint as well
    swap = temp_node;
    temp_node = temp_edge.endpoint;
    temp_edge.endpoint = swap;
    if isKey(master_graph, temp_node)
        master_graph(temp_node) = [master_graph(temp_node) temp_edge];
    else
        master_graph(temp_node) = temp_edge;
    end
end
Since reading from a file is involved, it will obviously be slower, but is there a better way of doing this?
The second code segment involves two nested for-loops. wsp and graph are in the same format as master_graph. It loops through each key and then each of the edges (stored in a vector mapped with
that key). I know loops can be optimized with 'vectorization', but I'm kinda clueless. Any suggestions?
% for each node in wsp
for it = 1:wsp.Count()
    key = thekeys{it};
    thisEdge = graph(key);   % look up the edge list once per node
    %? for each edge in the graph
    for i = 1:length(thisEdge)
        thisPoint = thisEdge(i);
        if (thisPoint.distance > max_edge_in_prim)
            str = sprintf('\tT');
            cost_of_addition = costFunc(thisPoint.distance, path_length(key));
        else
            %disp('Not Terminal!');
            str = sprintf('\t ');
            cost_of_addition = (TRENCH_COST * thisPoint.distance);
        end
        % BENEFIT
        % calculate cost one time...
        curr_cost = costFunc(thisPoint.distance, path_length(key));
        % added (TRENCH_COST *) below
        curr_cost = (WC * curr_cost) - (WB * benefit(thisPoint.endpoint));
        str = sprintf('( %d , %d) C=%d B=%d T=%d', key, thisPoint.endpoint, ...
            costFunc(thisPoint.distance, path_length(key)), ...
            benefit(thisPoint.endpoint), curr_cost);
        fprintf(str);
        disp(' ');
        % curr_cost holds the overall value of which the min should be tracked
        if (first || curr_cost < min)
            first = 0;
            min = curr_cost;   % note: this shadows the builtin min (kept from the original)
            select = thisPoint.endpoint;
            node = key;
            distance = thisPoint.distance;
            node_added = 1;
            cost_of_min = cost_of_addition;
        end
    end
end
Any suggestions would be HUGELY appreciated. I need to speed this thing up anyway possible. The code runs in C++ in under a second. This code takes MINUTES. Hopefully I can avoid embarrassment in
my suggestion of translating to matlab. Also, I only have access to R2010b.
Thanks Greg
February 17th 2013, 08:01 AM
Re: Improving MATLAB for loops!
C++ is faster than MATLAB. MATLAB makes a problem easier to express, but the runtime is not faster. So if you have already written the algorithm in C++, there is no need to move it to MATLAB. However, you can run your C++ application from within MATLAB.
June 21st 2013, 03:39 AM
Re: Improving MATLAB for loops!
My thoughts:
1) Well written C++ will be faster (but probably not by much); however, the code posted is not vectorized, so you are comparing apples with oranges.
2) If you can, prepare your data in a format such as CSV where you can import it into an array using a builtin MATLAB function (see the sketch after this list)
3) If you post an example file, some pseudo code, and expected output I will have a crack at writing a vectorized version when I find some time.
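A minimal sketch of point 2 (my addition; the file name 'edges.txt' and the three-integer-column format are assumptions based on the original post):
fid = fopen('edges.txt');             % hypothetical file name
cols = textscan(fid, '%d %d %d');     % read all three integer columns in one call
fclose(fid);
[node, endpoint, distance] = cols{:};
max_distance_for_wb = max(distance);  % replaces the per-line comparison in the loop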
Regards elbarto
|
{"url":"http://mathhelpforum.com/math-software/204839-improving-matlab-loops-print.html","timestamp":"2014-04-17T15:51:03Z","content_type":null,"content_length":"13287","record_id":"<urn:uuid:af103860-c03b-406a-8464-6dd918af7f07>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Freemansburg, PA Algebra 1 Tutor
Find a Freemansburg, PA Algebra 1 Tutor
...Note, on the SAT, you are penalized for wrong multiple choice answers. So, you should only answer a multiple choice question if you can narrow it down to 2 answers. On the student response
section, answer all of them because you are not penalized for a wrong answer.
22 Subjects: including algebra 1, geometry, ASVAB, GRE
...Currently, I am a Doctoral student at Lehigh University. I have a Bachelor's degree in Psychology from the University of Pennsylvania and a Master's from Lehigh University in Human Development.
My current background checks are available upon request.
14 Subjects: including algebra 1, English, reading, special needs
...Solve problems involving fractions. 8. Solve problems by using factoring. 9. Solve linear equations and inequalities. 10.
27 Subjects: including algebra 1, calculus, statistics, geometry
...I have made full size blankets in crochet. I have sewn various items including clothing, blankets, and toys. I have also completed hundreds of cross-stitching projects that have been bought
including bookmarks, house decorations, wedding and birth samplers, and many others.
43 Subjects: including algebra 1, chemistry, English, reading
...My GPA is a 3.81 so all the math classes I've taken have gone very well and I'd love to offer help to anyone who needs it! I'm experienced in all middle school and high school math classes, as
well as offering prep help for the SAT, ACT, or Praxis exams. Upon request I can forward you any references, professional evaluations, and background checks that you would like to see.
14 Subjects: including algebra 1, statistics, precalculus, algebra 2
|
{"url":"http://www.purplemath.com/freemansburg_pa_algebra_1_tutors.php","timestamp":"2014-04-16T22:34:31Z","content_type":null,"content_length":"24147","record_id":"<urn:uuid:f16b5174-6265-411a-a842-418a09d66052>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00552-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Dynamics of a Continuous Stirred-Tank Reactor with Consecutive Exothermic and Endothermic Reactions
This Demonstration illustrates the dynamics of two irreversible consecutive reactions, A → B → C, the first exothermic, the second endothermic, in a continuous stirred-tank reactor. The dimensionless equations for this system [1] (the displayed formulas and most parameter symbols were lost in extraction) govern the three concentrations and the temperature, with parameters the Damköhler number, the activation energy, the ratio of the two rate constants, the ratio of activation energies, the heat transfer coefficient β, the coolant temperature, the adiabatic temperature rise, and the ratio of the enthalpies of reaction. The equations are solved for fixed parameter values and initial conditions. As the heat transfer coefficient is increased, the trajectories change from damped oscillations leading to a steady state to periodic, then to chaotic oscillations; further increases in β lead to damped oscillations and finally to an asymptotic approach to equilibrium.
[1] C. Kalhert, O. E. Rössler, and A. Varma, "Chaos in a Continuous Stirred Tank Reactor with Two Consecutive First-Order Reactions, One Exo-, One Endothermic," Springer Series in Chemical Physics, 1981, pp. 366–465.
|
{"url":"http://demonstrations.wolfram.com/DynamicsOfAContinuousStirredTankReactorWithConsecutiveExothe/","timestamp":"2014-04-16T13:09:02Z","content_type":null,"content_length":"45198","record_id":"<urn:uuid:bd905e3b-5038-4096-a8a2-00a5db416ae1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How did they get from the second expression to the third? (simplification)
November 14th 2012, 10:26 AM #1
How did they get from the second expression to the third? (simplification)
November 14th 2012, 10:48 AM #2
Re: How did they get from the second expression to the third? (simplification)
$\frac{1000}{\pi \left(\frac{500}{\pi}\right)^{2/3}} =$
$\frac{1000}{\pi} \cdot \frac{\pi^{2/3}}{500^{2/3}} =$
$\frac{2 \cdot 500}{\pi} \cdot \frac{\pi^{2/3}}{500^{2/3}} =$
$\frac{2 \cdot 500^{1/3}}{\pi^{1/3}} = 2\sqrt[3]{\frac{500}{\pi}}$
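A quick numeric check with SymPy (my addition):
import sympy as sp

expr = 1000 / (sp.pi * (500 / sp.pi)**sp.Rational(2, 3))
target = 2 * (500 / sp.pi)**sp.Rational(1, 3)
print(sp.N(expr), sp.N(target))   # both ~10.8385, confirming the simplification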
|
{"url":"http://mathhelpforum.com/algebra/207586-how-did-they-get-second-expression-third-simplification.html","timestamp":"2014-04-20T15:11:54Z","content_type":null,"content_length":"34414","record_id":"<urn:uuid:789fad79-e90e-4e08-88ea-555808edf1f2>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00618-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MATHEMATICA BOHEMICA, Vol. 132, No. 1, pp. 43-54 (2007)
Gallai and anti-Gallai graphs of a graph
Aparna Lakshmanan S., S. B. Rao, A. Vijayakumar
Aparna Lakshmanan S., Department of Mathematics, Cochin University of Science and Technology, Cochin-682 022, India, e-mail: aparna@cusat.ac.in; S. B. Rao, Stat-Math Unit, Indian Statistical
Institute, Kolkata-700 108, India, e-mail: raosb@isical.ac.in; A. Vijayakumar, Department of Mathematics, Cochin University of Science and Technology, Cochin-682 022, India.
Abstract: The paper deals with graph operators - the Gallai graphs and the anti-Gallai graphs. We prove the existence of a finite family of forbidden subgraphs for the Gallai graphs and the
anti-Gallai graphs to be $H$-free for any finite graph $H$. The case of complement reducible graphs - cographs is discussed in detail. Some relations between the chromatic number, the radius and the
diameter of a graph and its Gallai and anti-Gallai graphs are also obtained.
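As an illustration of the two operators (my addition, using the standard definitions rather than anything stated in this abstract: vertices are the edges of $G$, and two adjacent edges of $G$ are joined in the anti-Gallai graph when they lie in a common triangle, and in the Gallai graph when they do not):
import networkx as nx
from itertools import combinations

def gallai_and_anti_gallai(G):
    edges = [tuple(sorted(e)) for e in G.edges()]
    gal, anti = nx.Graph(), nx.Graph()
    gal.add_nodes_from(edges)
    anti.add_nodes_from(edges)
    for e, f in combinations(edges, 2):
        if len(set(e) & set(f)) == 1:      # the two edges of G are adjacent
            u, v = set(e) ^ set(f)         # their non-shared endpoints
            (anti if G.has_edge(u, v) else gal).add_edge(e, f)
    return gal, anti

gal, anti = gallai_and_anti_gallai(nx.complete_graph(4))
print(gal.number_of_edges(), anti.number_of_edges())   # 0 12: in K4 every adjacent pair lies in a triangle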
Keywords: Gallai graphs, anti-Gallai graphs, cographs
Classification (MSC2000): 05C99
|
{"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/MB/132.1/5.html","timestamp":"2014-04-19T14:40:52Z","content_type":null,"content_length":"3043","record_id":"<urn:uuid:8a2e762e-4a34-473d-a456-466e07eb25d7>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Counterexamples in universal algebra
Universal algebra - roughly - is the study, construed broadly, of classes of algebraic structures (in a given language) defined by equations. Of course, it is really much more than that, but that's
the best one-sentence definition I can give.
I'm very new to universal algebra. So far I've found it incredibly interesting, mainly because it looks at things I was already interested in from a new (to me) perspective, and that's always good;
but I don't at all have a firm command of even the basics. For example, the recent question Relatively free algebras in a variety generated by a single algebra made me realize that I'd naively
accepted a very false statement: until I thought about it, I'd sort of taken for granted that A is always relatively free in Var(A).
I'm sure this isn't the only false belief I have about universal algebra, and I'm sure I'll hold more in the future; and I'm also sure I'm not alone in this. So my question is:
What are some notable counterexamples (to reasonable hypotheses a student in universal algebra might have) in universal algebra?
I'm specifically interested in universal algebra because, well, it's fairly universal; it seems reasonable that a counterexample in universal algebra would be of interest to algebraists of many
different stripes, and hopefully many outside algebra as well. At the same time, universal algebra is distinct enough that counterexamples in universal algebra would hopefully have their own flavor
not found necessarily in questions like "counterexamples in group theory," "counterexamples in ring theory," etc. In that vein, I'd especially appreciate counterexamples about topics firmly within
universal algebra - say, congruence lattices, or Mal'cev conditions - which nonetheless have "something to say" to other areas of mathematics.
Tags: big-list, lattices, universal-algebra, counterexamples
Well, so far, Benjamin Steinberg seems to be your go-to man! – Todd Trimble♦ Jan 9 at 4:17
14 Answers
All free Jónsson-Tarski algebras on a finite nonempty set of generators are isomorphic. Thus free objects may not know their rank.
Curiously, the automorphism group of this free algebra is the famous Thompson simple group $V$.
Just checking: a Jonsson-Tarski algebra is an algebra in the language of a single binary function symbol $f$, with infinite domain $A$ such that $f$ is a bijection between $A^2$ and
$A$, right? – Noah S Jan 9 at 3:39
@NoahS I think you also want unary unpairing functions $g$ and $h$, so the identities are $g(f(x,y))=x,\ h(f(x,y))=y$, and $f(g(x),h(x))=x$. – bof Jan 9 at 3:58
In a variety generated by an infinite primal algebra $A$, all algebras freely generated by finitely many elements are isomorphic to the direct power $A^{A}$, and $\textrm{Aut}(A^
{A})\simeq\textrm{Sym}(A)$. – Joseph Van Name Jan 9 at 14:47
A deep theorem of Oates and Powell shows that any finite group has a finite basis for its identities. One might think that the same is true for semigroups. But Perkins showed the
6-element semigroup consisting of the $2\times 2$ identity matrix, the zero matrix and the four matrix units $E_{ij}$ is not finitely based.
Mark Sapir, in a tour-de-force work involving symbolic dynamics, proved that this semigroup is inherently nonfinitely based, meaning it cannot belong to any finitely based locally finite variety. Hence any finite semigroup generating a variety containing this semigroup is not finitely based.
See Sapir's book http://www.math.vanderbilt.edu/~msapir/book/b2.pdf for this and many other nice universal algebra results.
4 Benjamin, can you add a reference for Mark Sapir's tour-de-force? – Todd Trimble♦ Jan 9 at 4:18
You can find it in his book math.vanderbilt.edu/~msapir/book/b2.pdf where in Chapter 3 he uses symbolic dynamics to characterize in an algorithmic way all finite inherently nonfinitely based semigroups. – Benjamin Steinberg Jan 9 at 14:53
McKenzie proved that it is undecidable whether a finite universal algebra has a finite basis of identities.
That was actually part of a creative outburst of McKenzie's regarding decidability of issues such as residual finiteness, whether certain types (tame congruence theory) occur in an algebra in the variety generated by a variety, and others. The counterexample was to the statement "These problems are not going to be solved in the millennium they were posed." Gerhard
"Or In The Same Century, Anyway" Paseman, 2013.01.09 – Gerhard Paseman Jan 9 at 22:24
Following George Bergman's recent preprint (btw, he is very good at finding strange (counter-)examples in universal algebra!), we recently found out that the universal group of the subsemigroup of the free monoid $\{a,b,c\}^*$ generated by $\{bc,abcabc,bcabca,bcabcabcabc\}$ is isomorphic to $(\mathbb{Z}\times\mathbb{Z})\ast\mathbb{Z}$, and so non-free, which somehow collides with the Nielsen-Schreier theorem.
So the Nielsen-Schreier theorem is false? – Joël Jan 9 at 15:51
@Jöel: $({\Bbb{Z}}\times {\Bbb{Z}})*{\Bbb{Z}}$ is free product, but isn't a free group. – janmarqz Jan 11 at 0:57
Okay, I get it: the analog of Nielsen-Schreier for monoids is false. Sorry for having been thick. – Joël Jan 11 at 1:30
@Joël: well, before this example, I was very sure that universal groups of subsemigroups of free monoids must be free (subsemigroups of free monoids are of course not necessarily free), since subsemigroups of free monoids live inside free groups, so their universal groups would be subgroups of the free group. Well, semigroups are just crazy wild objects! – Victor Jan 12 at 7:04
There are compact totally disconnected lattices which are not inverse limits of finite lattices.
Here is another one I love. Marcel Jackson showed that there is a finite semigroup with a finite basis for its identities such that the variety it generates contains uncountably many
subvarieties. I would have thought this impossible. Here is the link:
http://www.sciencedirect.com/science/article/pii/S0021869399982807
Subalgebras of free algebras are free. True for groups and Lie algebras but false for semigroups or commutative rings.
Rather blatantly false for lattices. – bof Jan 9 at 3:53
False also for Boolean algebras since the finite free Boolean algebras have cardinality of the form $2^{2^{n}}$. – Joseph Van Name Jan 9 at 5:10
@JosephVanName Oh, right. And I believe every countable Boolean algebra is embeddable in the free Boolean algebra on $\aleph_0$ generators? – bof Jan 9 at 5:28
Also false for "most" varieties of groups: it only holds for the variety of all groups, the variety of all abelian groups, the variety of all abelian groups of exponent $p$ ($p$ a
prime), and the trivial variety. – Arturo Magidin Jan 10 at 4:54
Here is another great Mark Sapir result. Let $S$ be the three element cyclic semigroup $\langle x\mid x^2=x^3\rangle$. Then $S$ has no finite basis for its quasi-identities. I believe
Jackson and Volkov later showed that any finite semigroup containing this one also has no finite basis for its quasi-identities.
Related, Mark Sapir showed that although the variety generated by the finite semigroup $\{1,a,b\}$, where $1$ is the identity and $xy=x$ for $x,y\neq 1$, has only finitely many subvarieties, it has uncountably many subquasivarieties. I recently showed with Margolis and Saliola that $\{1,a,b\}$ has no finite basis of quasi-identities (this is easier than Sapir's result) using hyperplane arrangements.
Not sure whether you count this as universal algebra; someone classically-minded probably wouldn't, and someone categorically-minded probably would.
In any case: it's possible to cook up two nonisomorphic operads that give rise to the same algebraic theory. Here "operad" means "non-symmetric operad of sets", and by "give rise to" I'm
referring to the fact that every operad $P$ has an associated algebraic theory (which expressed as a monad is $\coprod_{n \geq 0} P_n \times (-)^n$).
This is a counterexample to the general belief that operads are algebraic theories of a special kind. You still see this belief expressed all over the place, and it's reinforced by the
fact that for symmetric operads, different operads genuinely do give rise to different theories.
Edit: Here are some details. From any operad $P$, we can construct its reverse $P^\#$. It has the same operations as $P$, and the same identity operation, but the order of composition is reversed. Thus, $\theta \circ (\theta_1, \ldots, \theta_n)$ in $P^\#$ is $\theta \circ (\theta_n, \ldots, \theta_1)$ in $P$. It's easy to show that $P$ and $P^\#$ give rise to the same algebraic theory.
So, to construct our counterexample, we just have to find some operad not isomorphic to its reverse. This is a bit harder than it sounds. For instance, any operad that admits a symmetric
structure is isomorphic to its reverse, and several other well-known operads are too. But you can find one in arXiv:math/0404016. It's obscure enough that I couldn't immediately remember
what it was. There may be simpler examples out there; I think Steve Lack told me one by email.
Tom, could you give a reference for this, or maybe just give two such operads in your answer if it's not too much trouble? – Todd Trimble♦ Jan 9 at 20:55
A complete semilattice is automatically also a complete lattice. Hence a student might expect that there is no need to distinguish between complete semilattices and complete lattices.
However, the complete homomorphisms of complete meet-semilattices in general don't preserve the join operation, so the two structures must really be distinguished.
This is only a counterexample in the loose sense of the word. Universal algebra students may hypothesize that all interesting fundamental operations studied in universal algebra and related
areas have arity at most 2. Even though ternary terms are useful when investigating certain properties of congruence lattices (i.e. Mal'cev conditions), ternary terms rarely pop up as
fundamental operations on algebras. Therefore, algebras with interesting ternary operations are helpful to keep in mind. For example, median algebras are algebras with a single ternary
operation $m$ that satisfies the identities: $$m(a,a,b)=a$$ $$m(a,b,c)=m(b,a,c)=m(b,c,a)$$ $$m(m(a,b,c),d,e)=m(a,m(b,d,e),m(c,d,e)).$$
For example, if $X$ is a distributive lattice, then $(X,m)$ is a median algebra where $$m(x,y,z)=(x\wedge y)\vee(x\wedge z)\vee(y\wedge z)=(x\vee y)\wedge(x\vee z)\wedge(y\vee z).$$
See this question for more ternary operations in universal algebra.
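As a quick numerical sanity check (illustrative Python of my own, using that the integers with min and max form a distributive lattice):

import random

def m(x, y, z):
    # Median of three in a totally ordered set: the distributive-lattice
    # formula above with meet = min and join = max.
    return max(min(x, y), min(x, z), min(y, z))

for _ in range(10000):
    a, b, c, d, e = (random.randint(-50, 50) for _ in range(5))
    assert m(a, a, b) == a
    assert m(a, b, c) == m(b, a, c) == m(b, c, a)
    assert m(m(a, b, c), d, e) == m(a, m(b, d, e), m(c, d, e))
print("all three median-algebra identities hold on the samples")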
Median algebras are essentially the same thing as CAT(0) cube complexes, so are fundamental to 3-manifold topology. – Benjamin Steinberg Jan 9 at 14:56
Regarding a small model of High School Algebra
M. Jackson, A note on HSI-algebras and counterexamples to Wilkie's identity, Algebra Universalis 36 (1996), 528–535.
Let me mention the question we struggled with for some time: we could not find examples showing that if you take away finitely many elements from a finitely generated semigroup, so that the resulting subset is a subsemigroup, then neither hopficity nor co-hopficity need be preserved in general. Strangely, it was exactly relating (co-)hopficity for semigroups to (co-)hopficity for graphs that made it possible. Here are two links:
http://arxiv.org/abs/1307.6929
(I have to acknowledge though that Ben Steinberg finds Rees index unnatural)
Some other surprising links of semigroup properties to algebraic properties of various relational structures can be found in works on FA-presentable semigroups.
Not all nonabelian relatively free groups of exponent zero or a prime power are directly indecomposable; not all splitting groups in a variety of exponent zero or a prime power are
relatively free.
These were Problems 21 and 22 in Hanna Neumann's Varieties of Groups; Peter Neumann provided examples in "A note on the direct decomposability of relatively free groups," Quart. J. Math. Oxford Ser. (2) 19 (1968), 67–79, MR 0223437 (36 #6485).
|
{"url":"http://mathoverflow.net/questions/154017/counterexamples-in-universal-algebra","timestamp":"2014-04-20T11:01:21Z","content_type":null,"content_length":"116888","record_id":"<urn:uuid:3a9ea737-789c-480d-9fd6-f19caa090e11>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
|
L = loss(tree,X,Y)
[L,se] = loss(tree,X,Y)
[L,se,NLeaf] = loss(tree,X,Y)
[L,se,NLeaf,bestlevel] = loss(tree,X,Y)
L = loss(tree,X,Y,Name,Value)
L = loss(tree,X,Y) returns the mean squared error between the predictions of tree on the data X and the true responses Y.
[L,se] = loss(tree,X,Y) returns the standard error of the loss.
[L,se,NLeaf] = loss(tree,X,Y) returns the number of leaves (terminal nodes) in the tree.
[L,se,NLeaf,bestlevel] = loss(tree,X,Y) returns the optimal pruning level for tree.
L = loss(tree,X,Y,Name,Value) computes the error in prediction with additional options specified by one or more Name,Value pair arguments.
Input Arguments
tree Regression tree created with fitrtree, or the compact method.
X A matrix of predictor values. Each column of X represents one variable, and each row represents one observation.
Y A numeric column vector with the same number of rows as X. Each entry in Y is the response to the data in the corresponding row of X.
Name-Value Pair Arguments
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several
name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
'lossfun' Function handle for loss, or the string 'mse' representing mean squared error. If you pass a function handle fun, loss calls fun as fun(Y,Yfit,W), where:
● Y is the vector of true responses.
● Yfit is the vector of predicted responses.
● W is the observation weights. If you pass W, the elements are normalized to sum to 1.
All the vectors have the same number of rows as Y.
Default: 'mse'
'subtrees' A vector with integer values from 0 (full unpruned tree) to the maximal pruning level max(tree.PruneList). You can set subtrees to 'all', meaning the entire pruning sequence.
Default: 0
'treesize' A string, either:
● 'se' — loss returns bestlevel that corresponds to the smallest tree whose mean squared error (MSE) is within one standard error of the minimum MSE.
● 'min' — loss returns bestlevel that corresponds to the minimal MSE tree.
'weights' Numeric vector of observation weights with the same number of elements as Y.
Default: ones(size(Y))
Output Arguments
L Regression loss, a vector the length of subtrees. The error for each tree is the mean squared error, weighted with weights. If you include lossfun, L reflects the loss calculated with lossfun.
se Standard error of loss, a vector the length of subtrees.
NLeaf Number of leaves (terminal nodes) in the pruned subtrees, a vector the length of subtrees.
bestlevel A scalar whose value depends on treesize:
● treesize = 'se' — loss returns the highest pruning level with loss within one standard deviation of the minimum (L+se, where L and se relate to the smallest value in subtrees).
● treesize = 'min' — loss returns the element of subtrees with smallest loss, usually the smallest element of subtrees.
Mean Squared Error
The mean squared error m of the predictions f(X[n]) with weight vector w is m = sum over n of w(n)*(f(X[n]) - Y(n))^2, where the weights w(n) are normalized to sum to 1.
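Outside MATLAB, the same weighted MSE can be sketched in a few lines of Python (an illustration, not MathWorks' implementation):

import numpy as np

def weighted_mse(y_true, y_pred, weights=None):
    # Weighted mean squared error as defined above; weights are
    # normalized to sum to 1 before averaging.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    w = np.ones_like(y_true) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.sum(w * (y_true - y_pred) ** 2))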
Find the loss of a regression tree predictor of the carsmall data to find MPG as a function of engine displacement, horsepower, and vehicle weight:
load carsmall
X = [Displacement Horsepower Weight];
tree = fitrtree(X,MPG);
L = loss(tree,X,MPG)
L =
Find the pruning level that gives the optimal level of loss for the carsmall data:
load carsmall
X = [Displacement Horsepower Weight];
tree = fitrtree(X,MPG);
[L,se,NLeaf,bestlevel] = loss(tree,X,MPG,'Subtrees','all');
bestlevel =
See Also
fitrtree | predict
|
{"url":"http://www.mathworks.com/help/stats/compactregressiontree.loss.html?nocookie=true","timestamp":"2014-04-18T15:47:46Z","content_type":null,"content_length":"49858","record_id":"<urn:uuid:2ca554dc-d678-4017-b2ea-26931d533246>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From Conservapedia
A fraction is a representation of a rational number written with two numbers separated by a slash, like 1/2, or with one number written over the other, separated by a horizontal line.
The top number is called the numerator. The bottom number is the denominator.
All fractions can be expressed as decimals, and vice versa. If the denominator (in lowest terms) has a prime factor other than 2 or 5, you get a repeating decimal. You can round it off, or write a horizontal bar over the repeating digits.
1/3 = 0.33 (rounded to two digits)
The digit 3 repeats indefinitely, so if you need more precision you can write 0.33333 or whatever.
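A short Python sketch makes the terminating/repeating distinction concrete (the helper name is made up for illustration):

from fractions import Fraction

def terminates(frac: Fraction) -> bool:
    # A fraction's decimal expansion terminates exactly when its
    # reduced denominator has no prime factors other than 2 and 5.
    d = frac.denominator
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

print(terminates(Fraction(1, 2)))  # True  -> 0.5
print(terminates(Fraction(1, 3)))  # False -> 0.333... (the 3 repeats)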
See Also
|
{"url":"http://conservapedia.com/Fraction","timestamp":"2014-04-24T05:38:58Z","content_type":null,"content_length":"13160","record_id":"<urn:uuid:75d36372-6c3f-44be-adc7-ef2e4a8af5e0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Voltage Multiplication Circuits and DC Powered Tesla Coils
• To: tesla-at-pupman-dot-com
• Subject: Re: Voltage Multiplication Circuits and DC Powered Tesla Coils
• From: Kevin Ottalini <ottalini-at-mindspring-dot-com> (by way of Terry Fritz <twftesla-at-uswest-dot-net>)
• Date: Thu, 11 Nov 1999 12:21:14 -0700
• Approved: twftesla-at-uswest-dot-net
• Delivered-To: fixup-tesla-at-pupman-dot-com-at-fixme
This is a longish informational posting about DC multipliers
and DC Teslas and related. I've spent several years designing, testing,
and destroying multipliers and so feel that I can shorten a lot of the
back-and forth questions by covering it all here, all at once.
(My apologies Terry!).
Cockroft-Walton Multiplier bridges are in actuality AC-coupled adders,
not multipliers (in spite of the name). They can be half-wave or full wave
or even multi-phase in design. In more common, modern-day literature,
they are called "charge pumps".
In essence, the simplest implementation is a transformer with one
side grounded and the other side coupled through a capacitor to a diode.
This is the first half of the stage, which is called the peak detector.
The second half is another capacitor and diode on the other side of the
"ladder" ('cause it kinda looks and acts like climbing a ladder). This
part is called a "level shifter".
The two parts together comprise a complete "stage" that will provide
1X (that's right, one times) the AC peak-to-peak input voltage.
Each successive stage adds almost exactly the same amount, minus the
successive diode drops and other minor losses.
Here is an Ascii picture (you need to use Courier New font):
[ASCII schematic of Stage #1 and Stage #2 of the ladder, each showing a peak detector (series coupling capacitor into a diode) on the top rail and a level shifter (capacitor and diode) on the bottom rail; the drawing's alignment was lost in transit.]
As you can see, the AC input gets rectified and then stored. That DC level
then becomes the effective "ground level" for the following stage. The
stage then rectifies the AC again minus minor losses caused by putting two
capacitors in series and adds that amount to the DC level generated by the
first stage. Voila! 2X the AC PP input. (It helps a lot here to think of
capacitors as "AC resistors").
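(In symbols, that "AC resistor" is just the capacitor's reactance, X_C = 1/(2*pi*f*C); raising either the drive frequency or the capacitance lowers the effective series resistance of each stage, which is exactly the trade-off the worksheets below explore.)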
Each component of each stage MUST be able to withstand the AC peak-to-peak voltage that is driving the first stage. Because of the design, this is an "equal stress" architecture that allows all components to have the same voltage rating.
If you have 15KV AC PP on the input, then the diodes and the capacitors must be rated for 15KV AC (not DC). Although the rectification seems to indicate you could get away with 15KV DC, the reality is that there is an AC signal present (as well as DC) at every point of the ladder except at the ground of the first stage. (It might also be necessary to put a higher-voltage diode or a string of diodes in the final diode position ... under a full "short to ground" transient, the last diode could see the full swing.)
For use with something like a Neon Sign Transformer (or NST), the basic problem is that the input frequency is 60Hz. The frequency is totally critical when it comes to efficiency in a charge pump. The lower the frequency, the larger the coupling capacitors have to be in order to couple and integrate the AC. (Note: NSTs typically have center-grounded secondaries. You must use a full-wave multiplier design in order to avoid a really massive ground loop.)
Here is an example:
First, with 2 stages half-wave, 15KV input and 30ma load,
I'll use .015uFd caps (15KV AC rated):
Cockroft-Walton Multiplier Work Sheet
November 6,1998 K.Ottalini
Enter the number of stages:? 2
Enter the input voltage (volts):? 15000
Enter the input AC Frequency (Hz):? 60
Enter the capacitor values (Uf):? .015
Enter the load current (amps) or 'R':? .030
The estimated ripple at 30 ma is: 100 KVolts <----- OUCH!
The output voltage is: 30 KVolts
The estimated ripple is: 166.6667 % maximum at this current.
The estimated Vdrop is: 233.3333 KVolts
The real output voltage is: -203.3333 KVolts
The percent regulation is: 388.8889 %
The power requirement for 30 KVolts at 30 ma is: 900 Watts
This is equivalent to a load resistance of 1000000 Ohms
Hmmm. Something is really wrong here. How can I get 100KVolts of ripple?
The problem is that the load is way too much for the size of the caps and
the frequency. Here is the same thing at 40KHZ:
Cockroft-Walton Multiplier Work Sheet
November 6,1998 K.Ottalini
Enter the number of stages:? 2
Enter the input voltage (volts):? 15000
Enter the input AC Frequency (Hz):? 40000
Enter the capacitor values (Uf):? .015
Enter the load current (amps) or 'R':? .030
The estimated ripple at 30 ma is: .15 KVolts <--- good
The output voltage is: 30 KVolts
The estimated ripple is: .25 % maximum at this current.
The estimated Vdrop is: .35 KVolts <--- good
The real output voltage is: 29.65 KVolts
The percent regulation is: .5833333 %
The power requirement for 30 KVolts at 30 ma is: 900 Watts
This is equivalent to a load resistance of 1000000 Ohms
This looks a lot better, only 150 volts of ripple and good regulation.
Here is the 60Hz worksheet again, but using 2.0Ufd coupling capacitors:
Cockroft-Walton Multiplier Work Sheet
November 6,1998 K.Ottalini
Enter the number of stages:? 2
Enter the input voltage (volts):? 15000
Enter the input AC Frequency (Hz):? 60
Enter the capacitor values (Uf):? 2
Enter the load current (amps) or 'R':? .030
The estimated ripple at 30 ma is: .75 KVolts <---- not so good
The output voltage is: 30 KVolts
The estimated ripple is: 1.25 % maximum at this current.
The estimated Vdrop is: 1.75 KVolts <---- not so good
The real output voltage is: 28.25 KVolts
The percent regulation is: 2.916667 %
The power requirement for 30 KVolts at 30 ma is: 900 Watts
This is equivalent to a load resistance of 1000000 Ohms
Not bad, but we still see a loss of almost 2KV. It would probably
take 5.0uFd caps to get a reliable and efficient multiplier.
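For reference, the worksheet's estimates follow the standard half-wave Cockroft-Walton formulas; here is a minimal Python sketch (my own reconstruction, not Kevin's worksheet program) that reproduces all three runs above:

def cw_multiplier(n_stages, v_pp, f_hz, c_uf, i_load):
    # Standard half-wave Cockroft-Walton estimates: each stage adds one
    # AC peak-to-peak; drop and ripple both scale as I/(f*C).
    c = c_uf * 1e-6                                   # uF -> farads
    k = i_load / (f_hz * c)
    v_out = n_stages * v_pp                           # ideal (no-load) output
    ripple = k * n_stages * (n_stages + 1) / 2
    v_drop = k * (2 * n_stages**3 / 3 + n_stages**2 / 2 - n_stages / 6)
    return v_out, ripple, v_drop

print(cw_multiplier(2, 15e3, 60, 0.015, 0.030))
# -> (30000.0, 100000.0, 233333.33...): 30 KV out, 100 KV ripple, 233 KV
# drop, matching the first worksheet run above.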
As you can see, the bottom line is that the caps are actually a liability unless you are really desperate to get more than 15KV. For 15KV DC and below, you actually only need a full-wave rectifier, and can use the energy storage caps as the "filter". Just be sure to put a large resistor or inductor in series with the output to limit the current surges.
For DC-powered TC's, it is unlikely that you'll need more than 15KVDC
since it is the stored energy in the caps that is critical. There
is certainly a voltage-related efficiency profile (E=0.5CV^2 ... the energy rises as the square of the voltage but only linearly with the capacitance), but it turns out (after lots of experimenting on DC TC's) that attempting higher voltages only slows down the rep rate, and the performance is much more impressive with faster rep rates. You can easily run from 1 (yes, one!) BPS up to 600BPS or more (BPS=beats per second) and literally get full power out at each beat.
No sync problems, no safety gaps needed, just great sparks.
With only 15KV, and 0.03Ufd storage caps (no multiplier, just a full wave
rectifier), a reasonably well tuned DC TC can reliably produce 42" sparks
with less than 1.5kw. Just changing the caps to 0.5uFd can result in
as much as a 30% increase in output at 1KV to 2KV LOWER input voltage!
(of course, if this is continued, at some point your spark gap will go
into total melt down).
(Safely ...) Enjoy!
* Kevin Ottalini *
* WhoSys / Who Systems *
* High Voltage with Style! *
* ottalini-at-mindspring-dot-com *
* Often in Another Reality *
|
{"url":"http://www.pupman.com/listarchives/1999/November/msg00275.html","timestamp":"2014-04-18T08:04:16Z","content_type":null,"content_length":"11772","record_id":"<urn:uuid:b84c20f1-a515-4f0e-b224-6d7a8087b685>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kendall, FL Precalculus Tutor
Find a Kendall, FL Precalculus Tutor
...I have knowledge on how to successfully pass the end of year exam. With my knowledge, patience and dedication, success is a plus. Currently I am certified to teach Math K-12.
18 Subjects: including precalculus, chemistry, calculus, geometry
...Tutoring is not about the extracurriculars for me, it is about seeing that student finally get it and feel confident in the subject. I look forward to helping you and am excited to hear from you. Thank you, Jenn. I am very patient with students and enjoy teaching, seeing as I aspire to become a mathematics professor.
12 Subjects: including precalculus, calculus, geometry, algebra 1
...Also, I helped out a lot of my classmates that had difficulties in this subject. In 11th grade, I took AP Calculus and Honors Precalculus, because my teacher thought I could do it. Precalculus
was a breeze, but it did help when I took my AP Calculus exam.
11 Subjects: including precalculus, chemistry, physics, calculus
...We will learn terms like circumference and area of a circle; also, area of a triangle, volume of a cylinder, sphere, and a pyramid. Trigonometric functions and angle derivation will be
explained and applied. Geometric proofs are an important aspect of geometry and so these will be extensively explained.
46 Subjects: including precalculus, Spanish, reading, writing
...The numerical implementation of a system governed by differential equations always takes the form of a vector equation. Linear Algebra is used extensively to manipulate and solve these
equations, and thus it is critical to my job function. In addition to on-the-job training, I have taken two linear algebra courses at university.
13 Subjects: including precalculus, physics, calculus, geometry
|
{"url":"http://www.purplemath.com/Kendall_FL_precalculus_tutors.php","timestamp":"2014-04-18T00:43:36Z","content_type":null,"content_length":"24273","record_id":"<urn:uuid:065ee5dd-abaa-45dd-a4d5-9c4ef73e8cdb>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00620-ip-10-147-4-33.ec2.internal.warc.gz"}
|
University Graduate School Bulletin 2000-2002: Physics
Graduate Faculty
Special Departmental Requirements
Master of Science Degree
Master of Science in Beam Physics and Technology Degree
Master of Arts for Teachers Degree
Doctor of Philosophy Degree
College of Arts and Sciences
Professor Steven Vigdor
Departmental e-mail:
Departmental URL:
Graduate Faculty
Distinguished Professors
Steven Girvin, Gail Hanson, Allan MacDonald, Roger Newton (Emeritus), Robert Pollock
E. D. Alyea, Jr. (Emeritus), Andrew Bacher, Robert Bent (Emeritus), Leslie Bland, Bennet Brabson, John Cameron, John Challifour (Mathematics), Ray Crittenden (Emeritus), Alex Dzierba, Charles Goodman
(Emeritus), Steven Gottlieb, Richard Hake (Emeritus), Richard Heinz, Archibald Hendry, Charles Horowitz, Larry Kesmodel, Alan Kostelecky, S. Y. Lee, Andrew Lenard (Emeritus), Don Lichtenberg
(Emeritus), Timothy Londergan, Malcolm Macfarlane, Hugh Martin (Emeritus), Hans Meyer, Daniel Miller (Emeritus), James Musser, Hermann Nann, Harold Ogren, Catherine Olmer, William Schaich, Peter
Schwandt, Brian Serot, P. Paul Singh (Emeritus), James Swihart (Emeritus), Steven Vigdor, George Walker, John Wills (Emeritus), Scott Wissink, Andrzej Zieminski
Senior Scientists
Charles Bower (Astronomy), William Jacobs, Thomas R. Marshall, David Rust, James Sowinski, Edward Stephenson, Scott Teige, Daria Zieminska
Associate Professors
David Baxter, Michael Berger, John Carini, Fred Lurie (Emeritus), William Snow, Richard Van Kooten
Assistant Professors
Robert Gardner,* Adam Szczepaniak,* Jay Tang*
Graduate Advisor
Professor Brian Serot, Swain Hall West 234, (812) 855-0780
Degrees Offered
Master of Science, Master of Arts for Teachers, and Doctor of Philosophy. The department also participates in the Ph.D. programs in astrophysics, chemical physics, and mathematical physics (described
elsewhere in this bulletin).
Return to Top
Special Departmental Requirements
(See also general University Graduate School requirements.)
B average (3.0) required. See special requirement under “Master of Science Degree” for courses numbered below 501 that are to be counted toward that degree.
Return to Top
Master of Science Degree
Admission Requirements
Physics P201, P202, P301, P309, P331, P332, and P340 (or equivalents); Mathematics M211-M212, M311 (or equivalents). Deficiencies must be removed without graduate credit.
Course Requirements
A total of 30 credit hours, of which at least 14 credit hours must be in physics courses numbered 501 or above. Seminars, research, and reading courses may not be counted toward this 14 credit hour
requirement. Physics courses numbered below 501 that are listed in this bulletin may count toward the 30 credit hour requirement only if passed with a grade of B (3.0) or above.
Thesis
Not required.
Final Examination
Written or oral. May be taken only twice.
Return to Top
Master of Science in Beam Physics and Technology Degree
Admission Requirements
Same as for Master of Science degree.
Course Requirements
A total of 30 credit hours, including the following: P441 (or equivalent at another institution), P506 (or equivalent), P570, one course at the 500 level or above in laboratory techniques or
computational methods, and a master’s thesis course (P802). Four advanced courses in beam physics should be chosen from among the Special Topics courses P571, P671, and P672, with topics to be listed
in a syllabus prepared jointly by the I.U. Department of Physics and the U.S. Particle Accelerator School (USPAS). A graduate point average of 3.0 or better must be maintained in the courses
satisfying the 30 credit-hour requirement. In particular, both P441 and P506 (or equivalents) must be passed with a grade of B (3.0) or above.
Final Examination
Either an oral defense of the thesis or a written final examination is required, and should take place at Indiana University. The written examination may be substituted for the oral defense only with
the permission of the thesis committee.
Return to Top
Master of Arts for Teachers Degree
Admission Requirements
Eight (8) credit hours of undergraduate physics courses.
Course Requirements
Twenty (20) credit hours in physics courses numbered P300 or higher, selected from the course listings below (recommended: P301, P309, P331, P332, P360, P451, P453, P454), the remaining 16 credit
hours in graduate education and in mathematics, astronomy, or chemistry.
Return to Top
Doctor of Philosophy Degree
Admission Requirements
Same as those for Master of Science degree.
Course Requirements
A total of 90 credit hours, including two courses at the 600 level or higher in one of the following five areas: condensed-matter physics (P557, P615, P616, P627, P657), high-energy physics (P535,
P640, P641, P707, P708), mathematical physics (P522, P607, P609, P622, P637, P647, P665, P743), nuclear physics (P535, P626, P633, P634), accelerator physics (P671 plus one of P633, P634, P640,
P641). Courses offered for the (optional) inside minor cannot be used to satisfy this requirement. A minimum of 9 credit hours per semester at the P501 level or above with a 3.0 (B) grade point
average is required. Mathematics courses suited to the student’s fields will be specified by advisors in the Department of Physics.
The minor may be taken either inside or outside of the department. The inside minor consists of P551, either P621 or P625, and at least two different courses, falling within different areas of
concentration, among the five areas listed above. Programs of study for outside minors are determined by the individual departments and typically require 9 to 12 credit hours of course work.
Recommended outside fields: astronomy, chemistry, and mathematics. All minors must be approved by the graduate advisor of the Department of Physics.
Qualifying Examination
Written. May be taken only twice. Must be taken at the end of the first year and must be passed by the end of the second year. The written examination covers the subjects of mechanics, electricity
and magnetism, quantum mechanics, and thermodynamics/statistical physics at the level of first-year graduate work. Relevant courses are P506, P507, P511, P512, P521, and P556. Not attempting the
qualifying examination at the required time constitutes an automatic failure.
Candidacy Seminar
Must be presented after the first attempt at the qualifying examination but before the end of the fifth semester. Usually pertains to a proposed dissertation topic.
Dissertation
Result of a significant piece of original research.
Final Examination
Oral defense of dissertation.
Return to Top
Courses at the 300 level listed below may be taken for graduate credit only by M.A.T. students in physics; those at the 400 level or above are available for graduate credit to all graduate students.
P301 Physics III (3 cr.)
P309 Modern Physics Laboratory (2 cr.)
P331-P332 Theory of Electricity and Magnetism I-II (3-3 cr.)
P340 Thermodynamics and Statistical Mechanics (3 cr.)
P360 Physical Optics (3 cr.)
P410 Computing Applications in Physics (3 cr.)
P441-P442 Analytical Mechanics I-II (3-3 cr.)
P453 Introduction to Quantum Physics (3 cr.)
P454 Modern Physics (4 cr.)
P500 Seminar (1 cr.) Reports on current literature. Graduate students and staff participate.
P504 Practicum in Physics Laboratory Instruction (1 cr.) Practical aspects of teaching physics labs. Meets the week before classes and one hour per week during the semester to discuss goals,
effective teaching techniques, grading standards, AI-student relations, and administrative procedures as applied to P201. Students enrolling in this course teach a section of P201 laboratory.
P506 Electricity and Magnetism I (4 cr.) Three hours of lectures and one hour of recitation. Development of Maxwell’s equations. Conservation laws. Problems in electrostatics and magnetostatics.
Introduction to the special functions of mathematical physics. Time-dependent solutions of Maxwell’s equations. Motion of particles in given electromagnetic fields. Elementary theory of radiation.
Plane waves in dielectric and conducting media. Dipole and quadrupole radiation from nonrelativistic systems.
P507 Electricity and Magnetism II (4 cr.) Three hours of lectures and one hour of recitation. Further development of radiation theory. Fourier analysis of radiation field and photons. Scattering and
diffraction of electromagnetic waves. Special relativity. Covariant formulation of electromagnetic field theory.
P508 Current Research in Physics (1 cr.) Presentations by faculty members designed to give incoming graduate students an overview of research opportunities in the department.
P511 Quantum Mechanics I (4 cr.) Three hours of lectures and one hour of recitation. Basic principles, the Schrödinger equation, wave functions, and physical interpretation. Bound and continuum
states in one-dimensional systems. Bound states in central potential; hydrogen atom. Variational method. Time-independent perturbation theory.
P512 Quantum Mechanics II (4 cr.) P: P511. Three hours of lectures and one hour of recitation. Time-dependent perturbation theory. Schrödinger, Heisenberg and interaction pictures. Elementary theory
of scattering. Rotations and angular momentum. Other symmetries. Nonrelativistic, many-particle quantum mechanics, symmetry and antisymmetry of wave functions, and Hartree-Fock theory of atoms and molecules.
P521 Classical Mechanics (3 cr.) Vector and tensor analysis. Lagrangian and Hamiltonian dynamics. Conservation laws and variational principles. Two-body motion, many-particle systems, and rigid-body
motion. Canonical transformations and Hamilton-Jacobi theory. Continuum mechanics with introduction to complex variables.
P522 Advanced Classical Mechanics (3 cr.) Mathematical methods of classical mechanics; exterior differential forms, with applications to Hamiltonian dynamics. Dynamical systems and nonlinear
phenomena; chaotic motion, period doubling, and approach to chaos.
P535 Introduction to Nuclear and Particle Physics (3 cr.) P: P453 or equivalent. Survey of the properties and interactions of nuclei and elementary particles. Experimental probes of subatomic
structure. Basic features and symmetries of electromagnetic, strong and weak forces. Models of hadron and nuclear structure. The role of nuclear and particle interactions in stars and the evolution
of the universe.
P540 Digital Electronics (3 cr.) Digital logic, storage elements, timing elements, arithmetic devices, digital-to-analog and analog-to-digital conversion. Course has lectures and labs emphasizing
design, construction, and analysis of circuits using discrete gates and programmable devices.
P541 Analog Electronics (3 cr.) Amplifier and oscillator characteristics feedback systems, bipolar transistors, field-effect transistors, optoelectronic devices, amplifier design, power supplies, and
the analysis of circuits using computer-aided techniques.
P551 Modern Physics Laboratory (3 cr.) Graduate-level laboratory; experiments on selected aspects of atomic, condensed-matter, and nuclear physics.
P556 Statistical Physics (3 cr.) The laws of thermodynamics; thermal equilibrium, entropy, and thermodynamic potentials. Principles of classical and quantum statistical mechanics. Partition functions
and statistical ensembles. Statistical basis of the laws of thermodynamics. Elementary kinetic theory.
P557 Solid State Physics (3 cr.) P: P453 or equivalent. Atomic theory of solids. Crystal and band theory. Thermal and electromagnetic properties of periodic structures.
P570 Introduction to Accelerator Physics (3 cr.) P: approval of instructor. Overview of accelerator development and accelerator technologies. Transverse phase space motion and longitudinal
synchrotron motion of a particle in an accelerator. Practical accelerator lattice design. Design issues relevant to synchrotron light sources. Basics of free electron lasers. Spin dynamics in cyclic
accelerators and storage rings.
P571 Special Topics in Physics of Beams (3 cr.) P: approval of instructor.
P575 Introductory Biophysics (3 cr.) Overview of cellular components; basic structures of proteins, nucleotides, and biological membranes; solution physics of biological molecules; mechanics and
motions of biopolymers; physical chemistry of binding affinity and kinetics; physics of transport and signal transduction; biophysical techniques such as microscopy and spectroscopy; mathematical
modeling of biological systems; biophysics in the post-genome era.
P607 Group Representations (3 cr.) P: consent of instructor. Elements of group theory. Representation theory of finite and infinite compact groups. Study of the point crystal, symmetric, rotation,
Lorentz, and other classical groups as time permits. Generally offered in alternate years; see also MATH M607-M608.
P609 Computational Physics (3 cr.) Designed to introduce students (1) to numerical methods for quadrature, solution of integral and differential equations, and linear algebra; and (2) to the use of
computation and computer graphics to simulate the behavior of complex physical systems. The precise choice of topics will vary.
P615-P616 Physics of the Solid State I-II (3-3 cr.) P: P512. Mechanical, thermal, electric, and magnetic properties of solids; crystal structure; band theory; semiconductors; phonons; transport
phenomena; superconductivity; superfluidity; and imperfections. Usually given in alternate years.
P621 Relativistic Quantum Field Theory I (4 cr.) P: P512. Introduction to quantum field theory, symmetries, Feynman diagrams, quantum electrodynamics, and renormalization.
P622 Relativistic Quantum Field Theory II (4 cr.) P: P621. Non-Abelian gauge field theory, classical properties, quantization and renormalization, symmetries and their roles, and nonperturbative methods.
P625 Quantum Many-Body Theory I (3 cr.) P: P512. Elements of nonrelativistic quantum field theory: second quantization, fields, Green’s functions, the linked-cluster expansion, and Dyson’s equations.
Development of diagrammatic techniques and application to the degenerate electron gas and imperfect Fermi gas. Canonical transformations and BCS theory. Finite-temperature (Matsubara), Green’s
functions, and applications.
P626 Quantum Many-Body Theory II-Nuclear (3 cr.) P: P625. Continued development of nonrelativistic, many-body techniques, with an emphasis on nuclear physics: real-time, finite-temperature Green’s
functions, path-integral methods, Grassmann algebra, generating functionals, and relativistic many-body theory. Applications to nuclear matter and nuclei.
P627 Quantum Many-Body Theory II-Condensed Matter (3 cr.) P: P625. Continued development of nonrelativistic many-body techniques with an emphasis on condensed-matter physics: properties of real
metals, superconductors, superfluids, Ginzburg-Landau theory, critical phenomena, order parameters and broken symmetry, ordered systems, and systems with reduced dimensionality.
G630 Nuclear Astrophysics (3 cr.) P: A451-A452, P453-P454, or consent of instructor. R: A550, P611. Fundamental properties of nuclei and nuclear reactions and the applications of nuclear physics to
astronomy. The static and dynamic properties of nuclei; nuclear reaction rates at low and high energies. Energy generation and element synthesis in stars; the origin and evolution of the element
abundances in cosmic rays.
P633-P634 Theory of the Nucleus I-II (3-3 cr.) P: P512. Nuclear forces, the two-nucleon problem, systematics and electromagnetic properties of nuclei, nuclear models, nuclear scattering and
reactions, theory of beta-decay, and theory of nuclear matter.
P637 Theory of Gravitation (3 cr.) P: consent of instructor. The general theory of relativity and its observational basis. Discussion of attempts at a quantum field theory of gravitation. See MATH
P640 Elementary Particles I (3 cr.) P: P512. Experimental facts about hadrons and leptons, introduction to models of the fundamental particles and interactions, and experimental methods of
high-energy physics.
P641 Elementary Particles II (3 cr.) P: P640. Field theoretic descriptions of elementary particles and applications.
P647 Mathematical Physics (3 cr.) P: P501 or P502, P521, or MATH M442. Topics vary from year to year. Integral equations, including Green’s function techniques, linear vector spaces, and elements of
quantum mechanical angular momentum theory. For students of experimental and theoretical physics. May be taught in alternate years by members of Departments of Physics or Mathematics, with
corresponding shift in emphasis; see Mathematics M647.
P657 Statistical Physics II (3 cr.) Continuation of P556. Topics include advanced kinetic and transport theory, phase transitions, and non-equilibrium statistical mechanics.
P665 Scattering Theory (3 cr.) P: P506, P511. Theoretical tools for analysis of scattering experiments. Electromagnetic theory, classical and quantum particle dynamics.
P671 Special Topics in Accelerator Physics (3 cr.) P: P570, P521. Nonlinear dynamics: betatron phase space distortion due to the nonlinear forces. Methods of dealing with nonlinear perturbations.
Multi-particle dynamics: microwave and coupled bunch instabilities. Physics of electron cooling and stochastic cooling. Advanced acceleration techniques: inverse free electron laser acceleration,
wakefield and two-beam acceleration.
P672 Special Topics in Accelerator Technology and Instrumentation (3 cr.) P: approval of instructor.
P700 Topics in Theoretical Physics (cr. arr.)
P702 Seminar in Nuclear Spectroscopy (cr. arr.)
P703 Seminar in Theoretical Physics (cr. arr.)
P704 Seminar in Nuclear Reactions (cr. arr.)
P705 Seminar in High-Energy Physics and Elementary Particles (cr. arr.)
P706 Seminar in Solid State Physics (cr. arr.)
P707-P708 Topics in Quantum Field Theory and Elementary Particle Theory (3-3 cr.)
G711 Graduate Seminar in Chemical Physics (cr. arr.)
P743 Topics in Mathematical Physics (3 cr.) P: consent of instructor. For advanced students. Several topics in mathematical physics studied in depth; lectures and student reports on assigned
literature. Content varies from year to year. May be taught in alternate years by members of Departments of Physics or Mathematics, with corresponding shift in emphasis; see MATH M743.
P782 Topics in Experimental Physics (1-4 cr.)
P790 Seminar in Mathematical Physics (cr. arr.)
P800 Research (cr. arr.; S/F grading)* Experimental and theoretical investigations of current problems; individual staff guidance.
P801 Readings (cr. arr.; S/F grading)* Readings in physics literature; individual staff guidance.
P802 Research (cr. arr.)* Experimental and theoretical investigations of current problems; individual staff guidance. Graded by letter grade.
P803 Readings (cr. arr.)* Readings in physics literature; individual staff guidance. Graded by letter grade.
G750 Topics in Astrophysical Sciences (1-3 cr.)
Return to Top
|
{"url":"http://www.indiana.edu/~bulletin/iub/grad/2000-2002/physics.html","timestamp":"2014-04-24T23:37:22Z","content_type":null,"content_length":"25755","record_id":"<urn:uuid:ed98bfac-10fe-4c23-b24f-3d94be6c0f0b>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The localization sequence for the algebraic K-theory of topological K-theory, by Andrew J. Blumberg and Michael A. Mandell
We prove a conjecture of Rognes that establishes a localization cofiber sequence of spectra $K(\mathbb{Z}) \to K(ku) \to K(KU) \to \Sigma K(\mathbb{Z})$ for the algebraic K-theory of topological K-theory. We deduce the existence of this sequence as a consequence of a devissage theorem identifying the K-theory of the Waldhausen category of Postnikov towers of modules over a connective $A_\infty$ ring spectrum $R$ with the Quillen K-theory of the abelian category of finitely generated $\pi_0 R$-modules.
Andrew J. Blumberg <blumberg@math.stanford.edu>
Michael A. Mandell <mmandell@indiana.edu>
|
{"url":"http://www.math.illinois.edu/K-theory/0789/","timestamp":"2014-04-18T20:43:42Z","content_type":null,"content_length":"4147","record_id":"<urn:uuid:bb762e32-794c-44f1-a382-13341a132aa2>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Grosse Pointe Park, MI Trigonometry Tutor
Find a Grosse Pointe Park, MI Trigonometry Tutor
...When I was in school Math did not come easily, and now it has become my gift. My ability to understand what students are struggling with and guide them into the right direction is my greatest
strength. I am available for tutoring during the day for home bound and homeschoolers.
9 Subjects: including trigonometry, geometry, algebra 1, algebra 2
...Both my parents are math teachers, my older sister is a math teacher and for a couple years in college I planned on being a math teacher. Yet, I came to realize that the classroom setting does
not excite me and energize me. I get really excited about giving all my attention and energy to one person and watching them thrive with success.
17 Subjects: including trigonometry, geometry, accounting, algebra 1
...Students will develop a much greater appreciation, and more passion for excellence in reading and writing. There is much more, if you can stand it :) Relevant portions of Algebra I&II are mastered. The basics of Geometry are introduced, and it's on from there!
...I help the students to understand the concepts first, then guide them through solving different kinds of problems on each topic. Each section of the subject, whether it is Newtonian Mechanics, Gravitation, Thermal Physics or Electricity & Magnetism, has its own concepts and explanations! I al...
...As a writing tutor at a local university, I have helped students with everything from defining the focus of a paper to writing an effective thesis. I also assisted them in writing applications
to various academic and professional programs. And as a writer for an independent newspaper, I not onl...
39 Subjects: including trigonometry, reading, ESL/ESOL, biology
|
{"url":"http://www.purplemath.com/grosse_pointe_park_mi_trigonometry_tutors.php","timestamp":"2014-04-20T23:38:19Z","content_type":null,"content_length":"24940","record_id":"<urn:uuid:4861b0d4-5eca-4827-b416-28f433760b63>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
|
, 1993
"... We classify clustering algorithms into sequence-based techniques---which transform the object net into a linear sequence---and partition-based clustering algorithms. Tsangaris and Naughton
[TN91, TN92] have shown that the partition-based techniques are superior. However, their work is based on a sin ..."
Cited by 22 (7 self)
We classify clustering algorithms into sequence-based techniques---which transform the object net into a linear sequence---and partition-based clustering algorithms. Tsangaris and Naughton [TN91,
TN92] have shown that the partition-based techniques are superior. However, their work is based on a single partitioning algorithm, the Kernighan and Lin heuristics, which is not applicable to
realistically large object bases because of its high running-time complexity. The contribution of this paper is two-fold: (1) we devise a new class of greedy object graph partitioning algorithms
(GGP) whose running-time complexity is moderate while still yielding good quality results. For large object graphs GGP is the best known heuristics with an acceptable running-time. (2) We carry out
an extensive quantitative analysis of all well-known partitioning algorithms for clustering object graphs. Our analysis yields that no one algorithm performs superior for all object net
characteristics. Therefore, we d...
- APPLIED MATHEMATICS LETTERS , 1990
"... ..."
, 1992
"... We investigate clustering techniques that are specifically tailored for object-oriented database systems. Unlike traditional database systems object-oriented data models incorporate the
application behavior in the form of type-associated operations. This source of information is exploited for clu ..."
Cited by 7 (3 self)
We investigate clustering techniques that are specifically tailored for object-oriented database systems. Unlike traditional database systems object-oriented data models incorporate the application
behavior in the form of type-associated operations. This source of information is exploited for clustering decisions by statically determining the operations' access behavior applying dataflow
analysis techniques. This process yields a set of weighted access paths. The statically extracted (syntactical) access patterns are then matched with the actual object net. Thereby the interobject
reference chains that are likely being traversed in the database applications accumulate correspondingly high weights. The object net can then be viewed as a weighted graph whose nodes correspond to
objects and whose edges are weighted inter-object references. We then employ a newly developed (greedy) heuristics for graph partitioning---which exhibits moderate complexity and, thus, is applicable
, 2001
"... The cycle space of a strongly connected graph has a basis consisting of directed circuits. The concept of relevant circuits is introduced as a generalization of the relevant cycles in undirected
graphs. A polynomial time algorithm for the computation of a minimum weight directed circuit basis is ..."
Cited by 7 (0 self)
The cycle space of a strongly connected graph has a basis consisting of directed circuits. The concept of relevant circuits is introduced as a generalization of the relevant cycles in undirected
graphs. A polynomial time algorithm for the computation of a minimum weight directed circuit basis is outlined.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2381833","timestamp":"2014-04-17T19:56:37Z","content_type":null,"content_length":"19861","record_id":"<urn:uuid:05fb1fce-71ac-464f-a5bb-772ce405f246>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum Foundations
This series consists of talks in the area of Foundations of Quantum Theory. Seminar and group meetings will alternate.
It is certainly possible to express ordinary quantum mechanics in the framework of a real vector space: by adopting a suitable restriction on all operators--Stueckelberg’s rule--one can make the
real-vector-space theory exactly equivalent to the standard complex theory. But can we achieve a similar effect without invoking such a restriction? In this talk I explore a model within
real-vector-space quantum theory in which the role of the complex phase is played by a separate physical system called the ubit (for “universal rebit”). The ubit is a single binary real-vector-space
quantum object.
The distinction between a realist interpretation of quantum theory that is psi-ontic and one that is psi-epistemic is whether or not a difference in the quantum state necessarily implies a difference
in the underlying ontic state. Psi-ontologists believe that it does, psi-epistemicists that it does not. This talk will address the question of whether the PBR theorem should be interpreted as
lending evidence against the psi-epistemic research program.
It is sometimes pointed out as a curiosity that the state space of quantum theory and actual physical space seem related in a surprising way: not only is space three-dimensional and Euclidean, but so
is the Bloch ball which describes quantum two-level systems. In the talk, I report on joint work with Lluis Masanes, where we show how this observation can be turned into a mathematical result:
suppose that physics takes place in d spatial dimensions, and that some events happen probabilistically (dropping quantum theory and complex amplitudes altogether).
One of the most important open problems in physics is to reconcile quantum mechanics with our classical intuition. In this talk we look at quantum foundations through the lens of mathematical
foundations and uncover a deep connection between the two fields. We show that Cantorian set theory is based on classical concepts incompatible with quantum experiments. Specifically, we prove that
Zermelo-Fraenkel axioms of set theory (and the background classical logic) imply a Bell-type inequality.
We establish a tight relationship between two key quantum theoretical notions: non-locality and complementarity. In particular, we establish a direct connection between Mermin-type non-locality
scenarios, which we generalise to an arbitrary number of parties, using systems of arbitrary dimension, and performing arbitrary measurements, and a new stronger notion of complementarity which we
introduce here. Our derivation of the fact that strong complementarity is a necessary condition for a Mermin scenario provides a crisp operational interpretation for strong complementarity.
I present our work on inferring causality in the classical world and encourage the audience to think about possible generalizations to the quantum world. Statistical dependences between observed
quantities X and Y indicate a causal relation, but it is a priori not clear whether X caused Y or Y caused X or there is a common cause of both. It is widely believed that this can only be decided if
either one is able to do interventions on the system, or if X and Y are part of a larger set of variables.
In the de Broglie-Bohm pilot-wave theory, an ensemble of fermions is not only described by a spinor, but also by a distribution of position beables. If the distribution of positions is different from
the one predicted by the Born rule, the ensemble is said to be in quantum non-equilibrium. Such ensembles, which can lead to an experimental discrimination between the pilot-wave theory and standard
quantum mechanics, are thought to quickly relax to quantum equilibrium in most cases.
This talk presents two results on the interplay between causality and quantum information flow. First I will discuss about the task of switching the connections among quantum gates in a network. In
ordinary quantum circuits, gates are connected in a fixed causal sequence. However, we can imagine a physical mechanism where the connections among gates are not fixed, but instead are controlled by
the quantum state of a control system.
|
{"url":"https://perimeterinstitute.ca/video-library/collection/quantum-foundations?page=4","timestamp":"2014-04-19T18:15:57Z","content_type":null,"content_length":"60571","record_id":"<urn:uuid:1090d1fc-0408-46f2-b576-e4f18c12c3e4>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What violins have in common with the sea – the wave principle
By Alok Jha, The Guardian
Wednesday, February 12, 2014 8:04 EDT
You’re reading these words because light waves are bouncing off the letters on the page and into your eyes. The sounds of the rustling paper or beeps of your computer reach your ear via compression
waves travelling through the air. Waves race across the surface of our seas and oceans and earthquakes send waves coursing through the fabric of the Earth.
As different as they all seem, all of these waves have something in common – they are all oscillations that carry energy from one place to another. The physical manifestation of a wave is familiar –
a material (water, metal, air etc) deforms back and forth around a fixed point.
Think of the ripples on the surface of a pond when you throw in a stone. Looking from above, circular waves radiate out from the point where the stone hits the water, as the energy of the collision
makes water molecules around it move up and down in unison. The resulting wave is called “transverse” because it travels out from the point the stone sank, while the molecules themselves move in the
perpendicular direction. A vertical cross-section of the wave would look like a familiar sine curve.
Sound waves are known as “longitudinal” because the medium in which they travel – air, water or whatever else – vibrates in the same direction as the wave itself. Loudspeakers, for example, move air
molecules back and forth in the same direction as the vibration of the speaker cone.
In both cases, the water or air molecules remain, largely, in the same place as they started, as the wave travels through the material. They are not shifted, en masse, in the direction of the wave.
The one-dimensional wave equation (pictured) describes how much any material is displaced, over time, as the wave proceeds. The curly “d” symbols scattered through the equation are mathematical
functions known as partial differentials, a way to measure the rate of change of a specific property of the system with respect to another.
On the left is the expression for how fast the material is deforming (y) in space (x) at any given instant; on the right is a description for how fast the material is changing in time (t) at that
same instant. Also on the right is the velocity of the wave (v). For a wave moving across the surface of a sea, the equation relates how fast a tiny piece of water is physically deforming, at any
particular instant, in space (on the left) and time (on the right).
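Written out, the equation reads

\frac{\partial^2 y}{\partial x^2} = \frac{1}{v^2} \frac{\partial^2 y}{\partial t^2}

with y the displacement, x position, t time and v the wave speed, exactly as described above.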
The wave equation had a long genesis, with scientists from many fields circling around its mathematics across the centuries. Among many others, Daniel Bernoulli, Jean le Rond d’Alembert, Leonhard
Euler, and Joseph-Louis Lagrange realised that there was a similarity in the maths of how to describe waves in strings, across surfaces and through solids and fluids.
Bernoulli, a Swiss mathematician, began by trying to understand how a violin string made sound. In the 1720s, he worked out the maths of a string as it vibrated by imagining the string was composed
of a huge number of tiny masses, all connected with springs. Applying Isaac Newton’s laws of motion for the individual masses showed him that the simplest shape for vibrating violin string, fixed at
each end, would be the gentle arc of a single sine curve. A violin string (or a string on any instrument, for that matter) vibrates in transverse waves along its length, which creates longitudinal
waves in the surrounding air, which our ears interpret as sound.
Some decades later, mathematician Jean Le Rond d’Alembert generalised the string problem to write down the wave equation, in which he found that the acceleration of any segment of the string was
proportional to the tension acting on it. The waves created by different tensions of the string produce different notes – think of how the sound from a plucked string can be changed as it is
tightened or loosened.
The wave equation started off describing movement of physical stuff but it is much more powerful than that. Mathematically, it can also describe, for example, the movement of heat or electrical
potential, by changing “y” from describing the deformation of a substance to the change in the energy of a system.
Not all waves need to travel through a material. By 1864, the physicist James Clerk Maxwell had derived his four famous equations for the interactions of the electric and magnetic fields in a vacuum
around charged particles. He noticed that the expressions could be combined to form wave equations featuring the strength of the electric or magnetic fields in the place of “y”. And the speed of
these waves (the “v” term in the equation) was equal to the speed of light.
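In modern notation, the combination Maxwell noticed gives, for the electric field $E$ in empty space,

$\frac{\partial^2 E}{\partial x^2} = \mu_0 \varepsilon_0 \frac{\partial^2 E}{\partial t^2},$

which is the wave equation with speed $v = 1/\sqrt{\mu_0 \varepsilon_0} \approx 3 \times 10^8$ metres per second, the measured speed of light.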
This simple mathematical re-arrangement was one of the most significant discoveries in the history of physics, showing that light must be an electromagnetic wave that travelled in the vacuum.
Electromagnetic waves, then, are transverse oscillations of the electric and magnetic fields. Discovering their wave-like nature led to the prediction that there must be light of different
wavelengths, the distance between successive peaks and troughs of the sine curve. It was soon discovered that wavelengths longer than visible light include microwaves, infrared and radio waves;
shorter wavelengths include ultraviolet light, X-rays and gamma rays.
The wave equation has also proved useful in understanding one of the strangest, but most important, physical ideas in the past century: quantum mechanics. In this description of the world at the
level of atoms and smaller, particles of matter can be described as waves using Erwin Schrödinger’s eponymous equation.
His adaptation of the wave equation describes electrons, for example, not as a well-defined object in space but as quantum waves for which it is only possible to describe probabilities for position,
momentum or other basic properties. Using the Schrödinger wave equation, interactions between fundamental particles can be modelled as if they were waves that interfere with each other, instead of
the classical description of fundamental particles, which has them hitting each other like billiard balls.
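For reference, the one-dimensional Schrödinger equation for a particle of mass $m$ moving in a potential $V(x)$ reads

$i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2} + V(x)\,\psi,$

where the wavefunction $\psi(x,t)$ plays the role of the displacement $y$, and its squared magnitude gives the probability density for the particle's position.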
Everything that happens in our world, happens because energy moves from one place to another. The wave equation is a mathematical way to describe how that energy flows.
guardian.co.uk © Guardian News and Media 2014
|
{"url":"http://www.rawstory.com/rs/2014/02/12/what-violins-have-in-common-with-the-sea-the-wave-principle/","timestamp":"2014-04-17T22:48:14Z","content_type":null,"content_length":"85119","record_id":"<urn:uuid:d0ddf70d-8341-48ab-b99a-9c3a513fab91>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Which equation will give you the speed of a wave? A. speed = frequency + wavelength B.speed = amplitude/2 C. speed = frequency x wavelength D.speed = wavelength + frequency
option C
Even if you knew nothing about waves, you can use dimensional analysis combined with the common meanings of the words to eliminate the wrong answers. Dimensional analysis simply means that the equation must make sense in terms of the units of additive terms. For example, you don't want to add time and distance. Frequency refers to how often something happens, like 3 times per second. The units of frequency are therefore 1/second. Wavelength, you can suspect, has units of length, or meters. It doesn't make sense to add meters to 1/seconds. (Also, A and D are the same, so they can't both be right, so they must both be wrong.) Similarly, amplitude and speed are not the same concept, so it can't be B; you don't get a speed by halving an amplitude.
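To make the units explicit: option C says speed = frequency x wavelength, and in units that is (1/second) x meters = meters/second, which is the only option whose units actually come out as a speed.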
|
{"url":"http://openstudy.com/updates/4f6b585be4b014cf77c82680","timestamp":"2014-04-17T09:48:04Z","content_type":null,"content_length":"33049","record_id":"<urn:uuid:16c61e34-6ce7-42bc-8996-a0168e34da4e>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An asymptotic derivation of weakly nonlinear ray theory
Prasad, Phoolan (2000) An asymptotic derivation of weakly nonlinear ray theory. In: Proceedings of the Indian Academy of Sciences Mathematical Sciences, 110 (4). pp. 431-447.
Using a method of expansion similar to the Chapman-Enskog expansion, a new formal perturbation scheme based on a high-frequency approximation has been constructed. The scheme leads to an eikonal equation in which the leading-order amplitude appears. The transport equation for the amplitude has been deduced with an error O(epsilon^2), where epsilon is the small parameter appearing in the high-frequency approximation. On a length scale over which Choquet-Bruhat's theory is valid, this theory reduces to the former. The theory is valid on a much larger length scale, and the leading-order terms give the weakly nonlinear ray theory (WNLRT) of Prasad, which has been very successful in giving physically realistic results and also in showing that the caustic of a linear theory is resolved when nonlinear effects are included. The weak shock ray theory with an infinite system of compatibility conditions also follows from this theory.
Item Type: Journal Article
Additional Information: Copyright of this article belongs to Indian Academy of Sciences
Keywords: Nonlinear wave propagation;ray theory;hyperbolic equations;caustic
Department/Centre: Division of Physical & Mathematical Sciences > Mathematics
Date Deposited: 13 Sep 2004
Last Modified: 19 Sep 2010 04:16
URI: http://eprints.iisc.ernet.in/id/eprint/1806
|
{"url":"http://eprints.iisc.ernet.in/1806/","timestamp":"2014-04-18T21:30:01Z","content_type":null,"content_length":"23231","record_id":"<urn:uuid:10d41036-83ee-4566-982d-c155b16333df>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
|
College Point Math Tutor
Find a College Point Math Tutor
...It blew my mind and fostered a lifetime love of words. I like to bring forth this same excitement to my students...and to that I say: supercalifragilisticexpialidocious! Many elementary school
children have a hard time with math and therefore find it uninteresting.
17 Subjects: including algebra 1, prealgebra, geometry, reading
...I value the importance of WHY and not just HOW. Mathematics is a much easier subject when you know WHY you are doing something rather than simply memorizing the HOW to do it. If necessary,
individualized homework assignments will be given to students.
31 Subjects: including SAT math, Spanish, ACT Math, discrete math
I was able to do well in Math because I understood it, and you can too. I have a BS in Math and an MS in Applied Math. I have tutored private and public school students.
9 Subjects: including calculus, algebra 1, algebra 2, geometry
...I believe in homework because the next time that a tutor sees his/her tutee, there should be completed material to check and go over in order to evaluate progress. Learning is an individual
process, but tutors can facilitate that progress. It is my goal to enable my tutees to think, grow, and understand the subject material on their own.
13 Subjects: including SAT math, Spanish, English, reading
...Firstly, I worked for four years as an assistant teacher in private pre-schools on Long Island, as well as two summers as the adult group leader at a summer camp for pre-school children. In
addition, I have NYS certification as a Level 1, teaching assistant. Furthermore, during my 3.5 years of ...
10 Subjects: including algebra 1, vocabulary, grammar, French
|
{"url":"http://www.purplemath.com/college_point_math_tutors.php","timestamp":"2014-04-17T07:47:51Z","content_type":null,"content_length":"23761","record_id":"<urn:uuid:50391ce0-138d-46e8-8827-90556933ce20>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
|
x intercept for this function
\[\huge y=x^3-9x^2+15x+4\]
this is unfactorable?
oh no wait nevermind
Any particular method they want you to use? Do they specify factoring?
No but isnt that the only way?
Graphing is the easiest.
Graph that bad boy. Look for where it crosses the x axis. Doneso.
no i have to use an algebraic method
because like say on an exam, i cant graph that
Possible rational roots are: 1, -1, 2, -2, 4, -4 (the divisors of the constant term 4, since the leading coefficient is 1).
Okay, then your best bet is to use the rational root theorem to list possible rational roots. Then check each one to see if it is a valid root.
Use synthetic division to see if any of those are actual roots.
Mertsj is correct. The way he got those possible roots is: A: Make a list of factors for the last number (In this case, it's 4) B: Make a list of factors for the first coefficient (In this case,
it's 1) Possible rational roots must be of the form \(\huge \frac{\text{things in the first list}}{\text{things in the second list}}\)
It is not factorable.
So the best approach would be to find where y changes sign. Then you would know there is a root between those two values and you could hone in on it by trial and error. Or if you know calculus, you could use the derivative. What class is this for?
@Mertsj calculus !
they want you to brute-force it using Newton's method, more than likely
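A quick sketch of that suggestion (illustrative code, not from the thread): check the rational-root candidates first, then run Newton's method from a starting guess near each sign change of y.

def f(x):
    return x**3 - 9*x**2 + 15*x + 4

def fprime(x):
    return 3*x**2 - 18*x + 15

# Rational root theorem: candidates are divisors of 4 over divisors of 1
print([c for c in (1, -1, 2, -2, 4, -4) if f(c) == 0])  # [] -> no rational roots

def newton(x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f changes sign on (-1, 0), (2, 4) and (4, 7), so start near each interval
for guess in (-0.5, 3.0, 7.0):
    root = newton(guess)
    print(round(root, 6), round(f(root), 10))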
|
{"url":"http://openstudy.com/updates/51c0f775e4b0bb0a4360050a","timestamp":"2014-04-19T22:25:52Z","content_type":null,"content_length":"68582","record_id":"<urn:uuid:d98435e3-4443-4081-a077-ad733090f179>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Points in Sphere Solution
Use spherical polar coordinates, and w.l.o.g. choose the polar axis through one of the points. Now the distance between the two points is
sqrt ( r1^2 + r2^2 - 2 r1 r2 cos(theta))
and cos(theta) is (conveniently) uniformly distributed between -1 and +1, while r1 and r2 have densities 3 r1^2 d(r1) and 3 r2^2 d(r2). Split the total integral into two (equal) parts with r1 < r2 and r1 > r2, and it all comes down to integrating polynomials.
More generally, the expectation of the n'th power of the distance between the two points is
2^n * 72 / ((n+3)(n+4)(n+6))
So the various means are:
the (arithmetic) mean distance is 36/35 = 1.028571...
the root mean square distance is sqrt(6/5) = 1.095445...
the geometric mean distance is 2 exp(-3/4) = 0.944733...
the harmonic mean distance is 5/6 = 0.833333...
the inverse root mean inverse square distance is 2/3 = 0.666666...
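A quick Monte Carlo check of the first of these (an illustrative sketch; the rejection sampler and the sample size are arbitrary choices):

import math
import random

def random_point_in_unit_ball():
    # Rejection sampling: draw uniformly from the cube, keep points inside the ball
    while True:
        x, y, z = (random.uniform(-1, 1) for _ in range(3))
        if x*x + y*y + z*z <= 1:
            return (x, y, z)

n = 200000
total = 0.0
for _ in range(n):
    total += math.dist(random_point_in_unit_ball(), random_point_in_unit_ball())

print(total / n)  # should come out close to 36/35 = 1.028571...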
|
{"url":"http://rec-puzzles.org/index.php/Points%20in%20Sphere%20Solution","timestamp":"2014-04-21T15:01:29Z","content_type":null,"content_length":"7592","record_id":"<urn:uuid:7348ae67-b1b4-4d05-98a1-bd6305dedf78>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Given an array find any pair that sums up into a number
This problem is quite widely discussed and is famous for the various ways to attack it. It can be constrained in time and in space. The most primitive way of attacking this problem would yield a solution that runs in O(n^2) time.
Let me define the problem clearly. Given an array of size n and a number X, you are supposed to find a pair of numbers that sums to X. Only one pair would be enough.
Let's see how we can find the solution for this in O(n) using a HashSet. Yes, HashSet is a costly data structure to use, but let's consider it first because of the linear running time it provides.
package dsa.arrays;

import java.util.HashSet;
import java.util.Set;

public class FindSumHashSet {

    public static void main(String a[]) {
        FindSumHashSet sumFinder = new FindSumHashSet();
        sumFinder.begin();
    }

    public void begin() {
        int[] sampleArray = getRandomArray(20);
        int randomSum = sampleArray[15] + sampleArray[10];
        System.out.print("ARRAY : ");
        for (int i : sampleArray) {
            System.out.print(i + " ");
        }
        System.out.println();
        findPair(sampleArray, randomSum);
    }

    public void findPair(int[] sampleArray, int randomSum) {
        Set<Integer> sampleArraySet = new HashSet<Integer>();
        for (int i = 0; i < sampleArray.length; i++) {
            int valueToFind = randomSum - sampleArray[i];
            // If the complement of the current element was seen earlier, we have a pair.
            if (sampleArraySet.contains(valueToFind)) {
                System.out.println("SUM : " + randomSum);
                System.out.println("PAIR : " + valueToFind + "," + sampleArray[i]);
                return;
            }
            sampleArraySet.add(sampleArray[i]);
        }
    }

    private int[] getRandomArray(int size) {
        int[] randomArray = new int[size];
        for (int i = 0; i < size; i++) {
            // Random digits 0-9 keep the sample output readable
            randomArray[i] = (int) (Math.random() * 10);
        }
        return randomArray;
    }
}
//ARRAY : 7 3 6 4 3 4 7 7 5 1 4 6 2 4 1 7 5 8 9 7
//SUM : 11
//PAIR : 7,4
//ARRAY : 0 2 9 6 0 7 6 5 1 7 9 0 7 1 2 4 4 3 9 0
//SUM : 13
//PAIR : 6,7
We shall improve on this problem in the space domain using fancy sorts, and those we shall see in the coming posts.
5 comments:
Considering that the members are not ordered, the best bet would be to use a hash-based data structure. I am not sure why you generalize HashSet as expensive. A list or an array would do horribly when doing a linear search with this data (as you know, binary search is impossible here). I can't think of any other data structure you could fit in. Also, the performance of HashSet depends on the efficient implementation of the hash. So, there could be a cost you would pay for larger datasets, but I am sure that would be better than linear search any day.
@Arun: I completely accept your argument. The reason i want to solve the problem without using Hash is because that is what interviewers ask :) these days. They tell you not to solve using Hash.
And they may say it should not be O(n*n) either.
You can easily tackle this question in O(n lg n) time by first sorting the array using in-place comparison sort like quick sort. After sorting, you can have two pointers pointed to the head and
the tail, then find the sum.
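A sketch of that sort-and-scan idea (illustrative code, not from the original post):

def find_pair(nums, target):
    # O(n log n): sort, then walk two pointers inward from both ends
    a = sorted(nums)
    lo, hi = 0, len(a) - 1
    while lo < hi:
        s = a[lo] + a[hi]
        if s == target:
            return a[lo], a[hi]
        if s < target:
            lo += 1
        else:
            hi -= 1
    return None

print(find_pair([7, 3, 6, 4, 3, 4, 7, 7, 5, 1], 11))  # e.g. (4, 7)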
it might be a good idea to use a BST here. create a BST of n numbers (nlogn). then, for each number in the array (of size n), subtract it from TargetSum and then look for the other number in the binary tree (logn), which should take (nlogn) total time ... the overall time would be (nlogn + nlogn), which is nlogn... please correct me if I am wrong.
@vijay : yes that is a very good idea to solve the problem!!
|
{"url":"http://www.technicalypto.com/2010/02/given-array-find-number-that-sums-up.html?showComment=1272551580443","timestamp":"2014-04-18T08:19:03Z","content_type":null,"content_length":"90545","record_id":"<urn:uuid:b92bf4cc-7bef-4a64-9f62-fe23f2d7a57e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Reply to comment
Submitted by Andrew Irving on April 12, 2013.
The 2nd clue didn't make sense to me at first either, but I think I understand it now. When it says,
`The lady replies, "Okay, here is clue number 2: If you go the house next door (points to it), the sum of their ages is the same as the house number" '
`their' refers to the lady's children. At first I thought `the sum of their ages' meant the sum of the ages of the people next door, which confused me. But I think the clue is supposed to give
you an equation of the form,
a + b + c = # of house next door
where a, b and c are the ages of the lady's children.
|
{"url":"http://plus.maths.org/content/comment/reply/5887/4249","timestamp":"2014-04-20T10:50:14Z","content_type":null,"content_length":"20448","record_id":"<urn:uuid:32803d30-65d5-4220-a669-8e2038161cf6>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
|
R index between two products is somewhat dependent on other products
March 12, 2012
By Wingfeet
I explained earlier how the R-index is used in sensory science to examine ranking data. The justification for using the R-index is in its link with d' and with the Mann-Whitney statistic. In this post I show that the R-index depends on the number of products and on the position of the other products. It is a small effect. However, if data is analyzed by looking only and rigidly at the p value, then the result might change from just below significance to just above it.
Using simulations, I will show that the presence of other samples influences the R-index. I think this effect occurs because the R index is, mathematically, calculated from an aggregated matrix of counts of products against ranks. My feeling is that when there are more products, there are fewer chances to get equal rankings than with few products, and hence slightly different scores.
R index calculation
Below the calculation when comparing 2 products from a total of 4
            rank 1   rank 2   rank 3   rank 4
product 1     a        b        c        d
product 2     e        f        g        h

Note a to h are the counts in the respective cells. The R index is composed of three parts:
1. The number of wins of product 1 over product 2: a*(f+g+h) + b*(g+h) + c*h
2. The number of equal rankings, divided by two: (a*e + b*f + c*g + d*h)/2
3. Normalization: (a+b+c+d)*(e+f+g+h)

R index = 100 * (wins + equal rankings/2) / normalization

Effect of number of products
Figure 1 shows the simulation R index dependence on the number of products, using a ranking with 25 panelists. With a low number of products, the distribution of the R index is a bit wider than with
more products. Most of the difference in distribution is in the region 3 to 6 products, which is also the number of products often used in sensory.
(Critical values of R-indices are given by the red and blue lines (Bi and O'Mahony 1995 respectively 2007, Journal of Sensory Studies))
Effect of neighborhood of other products
Figure 2 shows the dependence on location of the other products. I have chosen 5 products, two have the same location. The other 3 move away from this location. Again 25 panelists. In this figure it
shows that the two products R-index has a smaller distribution under H0 (no product differences) when all products are similar. This is about the same as the 5 products in the first plot. When the
other products are far away, the distribution becomes wider, getting closer to the 3 product distribution in figure 1.
It should be noted that with one product rather than three products moving away from the centre location, the effect is smaller. The effect of the number of panelists is for a next post.
Code for figure 1:

library(ggplot2)

makeRanksNoDiff <- function(nprod, nrep) {
  inList <- lapply(1:nrep, function(x) sample(1:nprod, nprod))
  data.frame(person = factor(rep(1:nrep, each = nprod)),
             prod = factor(rep(1:nprod, times = nrep)),
             rank = unlist(inList))
}

tab2Rindex <- function(t1, t2) {
  Rindex <- crossprod(rev(t1)[-1], cumsum(rev(t2[-1]))) + 0.5 * crossprod(t1, t2)
  100 * Rindex / (sum(t1) * sum(t2))
}

FastAllRindex <- function(rankExperiment) {
  crst <- xtabs(~ prod + rank, data = rankExperiment)
  nprod <- nlevels(rankExperiment$prod)
  Rindices <- unlist(lapply(1:(nprod - 1), function(p1) {
    lapply((p1 + 1):nprod, function(p2) tab2Rindex(crst[p1, ], crst[p2, ]))
  }))
  Rindices
}

nprod <- seq(3, 25, by = 1)
last <- lapply(nprod, function(xo) {
  nsamples <- ceiling(10000 / xo)
  li <- lapply(1:nsamples, function(xi) {
    re <- makeRanksNoDiff(nprod = xo, nrep = 25)
    FastAllRindex(re)
  })
  li2 <- as.data.frame(do.call(rbind, li))
  li2$nprod <- xo
  li2
})

last2 <- lapply(last, function(x) {
  qq <- quantile(as.matrix(x[, grep('nprod', names(x), invert = TRUE)]), c(0.025, .5, .975))
  qq <- as.data.frame(t(qq))
  qq$nprod <- x$nprod[1]
  qq
})

summy <- do.call(rbind, last2)
g1 <- ggplot(summy, aes(nprod, `50%`))
g1 <- g1 + geom_errorbar(aes(ymax = `97.5%`, ymin = `2.5%`))
g1 <- g1 + scale_y_continuous(name = 'R-index')
g1 <- g1 + scale_x_continuous(name = 'Number of products to compare')
g1 <- g1 + geom_hline(yintercept = 50 + 18.57 * c(-1, 1), colour = 'red')
g1 <- g1 + geom_hline(yintercept = 50 + 15.21 * c(-1, 1), colour = 'blue')
Additional code for figure 2
makeRanksDiff <- function(prods, nrep) {
  nprod <- length(prods)
  inList <- lapply(1:nrep, function(x) rank(rnorm(n = nprod, mean = prods)))
  data.frame(person = factor(rep(1:nrep, each = nprod)),
             prod = factor(rep(1:nprod, times = nrep)),
             rank = unlist(inList))
}

location <- seq(0, 3, by = .25)
last <- lapply(location, function(xo) {
  li <- sapply(1:10000, function(xi) {
    re <- makeRanksDiff(prods = c(0, 0, xo, xo, xo), nrep = 25)
    crst <- xtabs(~ prod + rank, data = re)
    tab2Rindex(crst[1, ], crst[2, ])  # R-index between the two identical products
  })
  li2 <- data.frame(location = xo, Rindex = li)
  li2
})

last2 <- lapply(last, function(x) {
  qq <- quantile(x$Rindex, c(0.025, .5, .975))
  qq <- as.data.frame(t(qq))
  qq$location <- x$location[1]
  qq
})

summy <- do.call(rbind, last2)
g1 <- ggplot(summy, aes(location, `50%`))
g1 <- g1 + geom_errorbar(aes(ymax = `97.5%`, ymin = `2.5%`))
g1 <- g1 + scale_y_continuous(name = 'R-index between equal products')
g1 <- g1 + scale_x_continuous(name = 'Location of odd products')
g1 <- g1 + geom_hline(yintercept = 50 + 18.57 * c(-1, 1), colour = 'red')
g1 <- g1 + geom_hline(yintercept = 50 + 15.21 * c(-1, 1), colour = 'blue')
|
{"url":"http://www.r-bloggers.com/r-index-between-two-products-is-somewhat-dependent-on-other-products/","timestamp":"2014-04-18T13:21:46Z","content_type":null,"content_length":"51956","record_id":"<urn:uuid:078f2d9a-ed01-4f67-b421-82e3cda44867>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
|
space-filling curve
space-filling curve (plural space-filling curves)
1. (analysis) a curve whose range contains the entire 2-dimensional unit square (or the 3-dimensional unit cube)
□ From space-filling curves it can be deduced that $\mathbb{R}^2$ and $\mathbb{R}^3$ have the same cardinality as $\mathbb{R}$.
|
{"url":"http://en.m.wiktionary.org/wiki/space-filling_curve","timestamp":"2014-04-17T04:50:09Z","content_type":null,"content_length":"17089","record_id":"<urn:uuid:0c4cb990-f9c9-4c57-be6d-9d8292905a5a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Handwriting to LaTeX?
Anyone tried out MoboMath?
MoboMath lets you work with math on a computer exactly as you learned it — by writing it. MoboMath translates your handwritten math input into a formatted layout that can be used in Microsoft
Word, Maple, and many other popular applications.
Instead of struggling with keyboard and palette input, you can create and edit math expressions for technical documents, calculations, presentations, and web pages in your own handwriting. It’s
fast, simple, and intuitive.
Using a tablet PC or an external tablet, just write your expressions as you normally would with your tablet’s pen. Then convert them to a formatted math layout with a single tap and copy or drag
them into your documents or worksheets.
Don’t have a tablet right now or I’d test it out.
1. #1 Joe Fitzsimons February 4, 2010
I haven’t tried it, but use a tablet all the time. I had thought MathJournal was probably the best option, but now windows 7 seems to have a math input panel for tablets.
I recently discovered MathPaper, which is buggy as hell, but comes closest to replicating the functionality I want: http://graphics.cs.brown.edu/research/pcc/research.html#mathpaper
2. #2 Justin Dove February 4, 2010
I was actually thinking about this the other day since I’ll be buying a tablet soon. On that note, perhaps some others would get use out of the site that sparked my interest in the idea:
If you keep that address with you, you can get a similar functionality, albeit for a single symbol at a time, anywhere that has internet access. Though this is less of a translator and more of an
encyclopedic resource. Great if you don’t know what the code is for a given symbol.
3. #3 sep332 February 4, 2010
The math panel built-in to Windows 7 is really amazingly good. Give it a try sometime.
4. #4 Andrew Landahl February 4, 2010
After breaking my wrist at QIP (bar fight; someone insulted quantum mechanics), I’m typing LaTeX with one hand now. I just installed MacSpeech Dictate which is enabling me to write e-mails and
indeed even this post by just speaking, which is quite handy. (Sorry for the pun.) However, typing LaTeX formulas is still problematic. I see there is software called Natlatex on the web that is
supposed to turn spoken words into LaTeX equations. I haven’t installed it yet. Does anyone have any experience with typing LaTeX equations using speech recognition software? Any recommendations?
PS I only had to manually typeset the word latex above; the rest came out perfectly using the dictation software. However, here are some renditions of the word “adiabatic”
idiot back, 80 back, idiomatic, media `, media back, 80 of back, media back, 80th attic, idiot back, idiot `, media back
5. #5 Ro February 4, 2010
Why would you want this? Why, why???? You’d lose the sense of superiority you can have about knowing how to add a matrix full of accented and italicized greek letters.
6. #6 Ian Durham February 4, 2010
Now if only Steve Jobs would succumb to the pressure and come out with a Mac tablet (and, no, the iPad is not a tablet).
7. #7 andy.s February 5, 2010
I just went from “holy crap! it actually worked.” to
It doesn’t seem to recognize h-bar, for example. It suggested “z” or “\bar{h}” or “\vec{h}” among others.
Makes it kind of useless for QM.
8. #8 Neil B February 6, 2010
Andy.s, I haven’t used this product but note that “symbol font” doesn’t have “hbar” either, which is shocking. But did you try a general “bar”-across command that you can just apply to h at will,
maybe one of the finer settings.
Anyone know of a good Word to Latex program? I couldn’t get Word2tex to work out OK.
9. #9 Aaron February 8, 2010
The last pen-input math thing I tried was the “Freehand Formula Entry System”, which did work, but was not so good as to make it clearly better than manual LaTeX usage.
As I only run free operating systems, most of the products out there are useless to me.
10. #10 Justin Dove February 10, 2010
Detexify recognizes it! So there aren’t any excuses for these other programs. Do they have train features? Because I think that is key for a system like this. Human users are going to be the best
input, so accumulating feedback is the best way to improve them.
11. #11 bware May 4, 2010
the simple solution to your problem is the well known fact in q.m. that \hbar = 1. (c=1 too).
more seriously, i think \hbar is a character in some aps physics fonts. possible low-work solutions to these type of problems are allowing the translator to pick \bar{h} and then doing a
find-replace that switches all instances of \bar{h} with \hbar.
In general, I use a combination of \newcommand, AutoHotKey hotkeys, find-replace, and the tried and true copy-paste to speed up latexing.
If there are good programs that could even marginally translate scanned handwriting to latex then a I would give a fortune for it.
12. #12 gnils May 29, 2010
With Inlage you can get LaTeX from the Windows7 Math Input Panel (that works very fine). By pressing the insert button you’ll get a syntax error free LaTeX code in the editor and this program is
cheaper than mobomath.
|
{"url":"http://scienceblogs.com/pontiff/2010/02/04/handwriting-to-latex/","timestamp":"2014-04-19T12:05:17Z","content_type":null,"content_length":"60150","record_id":"<urn:uuid:b0b9af4a-633f-4390-a32c-8c3beac1c3fe>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Converting From Negative Exponents Into Positive Exponents In Simplest Terms 5
This math video tutorial gives a step-by-step explanation of a math problem on "Converting From Negative Exponents Into Positive Exponents In Simplest Terms 5".
Converting from negative exponents into positive exponents in simplest terms 5 video involves exponents, negative exponents. The video tutorial is recommended for 7th Grade, 8th Grade, 9th Grade, and
/or 10th Grade Math students studying Algebra, Pre-Algebra, Pre-Calculus, and/or Advanced Algebra.
Exponentiation is a mathematical operation, written a^n, involving two numbers, the base a and the exponent n. When n is a positive integer, exponentiation corresponds to repeated multiplication:
a^1 = a
a^2 = a × a
a^3 = a × a × a
a^4 = a × a × a × a
and so on.
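The rule such conversions rely on is that a negative exponent denotes the reciprocal of the corresponding positive power:

a^-n = 1/a^n (for a not equal to 0)

For example, 2^-3 = 1/2^3 = 1/8, and 5^-1 = 1/5.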
|
{"url":"http://www.tulyn.com/videotutorials/converting_from_negative_exponents_into_positive_exponents_in_simplest_terms_5-by-polly.html","timestamp":"2014-04-19T01:47:57Z","content_type":null,"content_length":"10214","record_id":"<urn:uuid:73ff53b5-a46f-4dfa-a29b-5604f2795e4e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Princeton Township, NJ Statistics Tutor
Find a Princeton Township, NJ Statistics Tutor
...The ACT math section covers trigonometry and elements of pre-calculus while the SAT goes only through algebra 2. A major difference between the SAT and the ACT is that the ACT has a science
section. Without good reading and analytical skills, it is hard to score well on the ACT science test.
23 Subjects: including statistics, English, calculus, algebra 1
...I look forward to hearing from you and scheduling a tutoring session! Thank you.I have worked at a non profit organization for 8 months where I use Microsoft Outlook on a daily basis. I am
well versed with the mail, calendar, and contact tools featured on Outlook.
49 Subjects: including statistics, Spanish, English, reading
...This subject includes topics in the areas of logic, proofs, combinatorics, functions and relations, and graphs. I have taught Programming I and II at Manor College (CS 210 and CS 211). Both of
these courses use C++ as the language. Programming II includes object oriented programming topics.
17 Subjects: including statistics, calculus, GRE, geometry
...I have also worked with middle and high school students. Over the years, I have gained experience working with students who have a wide variety of learning styles. For something to ‘click’ it
must be presented in a way that makes sense to you based on what you already understand and how you process information.
10 Subjects: including statistics, calculus, geometry, algebra 1
...Finally, let's not forget the most important part in all of this: the student. Helping a student unlock his/her potential is priceless and a gift that continues to give for a lifetime. If you
have any questions - please feel free to ask.
57 Subjects: including statistics, reading, chemistry, calculus
|
{"url":"http://www.purplemath.com/Princeton_Township_NJ_statistics_tutors.php","timestamp":"2014-04-17T07:13:53Z","content_type":null,"content_length":"24620","record_id":"<urn:uuid:b102e5d4-1c20-434f-8f4d-f0d493e6a837>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cage Unit Overlap, sudokuwiki.org
This is an important Killer Sudoku strategy which I have placed at the start of all the more complex strategies in the solver because it is so useful and very easy to spot. It is related to
Intersection Removal. Whereas IR is the overlap of rows/columns with boxes, this is the overlap of 'cages' with rows, columns and boxes.
Each 'cage' is made up of one or more 'combinations' - sets of numbers that total the cage clue. If you can find a candidate number inside that cage that is not found elsewhere on the row, column or box the cage is aligned on, then you know that number must appear in the cage. That part is self-evident since, in the solver at least, these numbers are displayed. Given that number is true, we can remove all combinations which omit that number. Often that means we can remove a bunch of numbers.
Example 1
In the first example below, two such Cage/Unit Overlaps occur. The 2-cell cage with the red box has a clue of 14, which means the two combinations (visible if you hover over the cage on the solver)
are {5,9} and {6,8}. The 6s in the cage are unique to that cage and the cage is entirely inside the box. So the only combination that fits is {6,8}. Hence we can remove 5s and 9s from the cage. In
fact, since {6,8} is the only combination left both those numbers must fit in the cell and ALL other candidates can be removed, so the 4s and 3s can also go.
The blue ringed cage, a 3-cell with a clue of 19 gives us five different combinations. But the 9 in that cage is unique to both the cage and row H, so only the combinations with 9 in them are valid.
{4,7,8} and {5,6,8} are not possible. Of the remaining candidates, the 5s can be removed.
Example 2
In this second example, five Cage/Unit Overlaps have been found. Taking just the centre one as an example, the red ringed 4-cell cage has a clue of 28 - the combinations being {4,7,8,9} and
{5,6,8,9}. 7 is, however, unique to the cage and the centre box, so only the first combination can be valid. 5s, 6s and other numbers not in that combination (the 1s) can be removed.
I'll leave it to you to show how the other four cages have similar eliminations.
Overall, most Killer Sudoku puzzles will have at least one example of this strategy so they are well worth looking out for, and often you can reduce the puzzle with this method while looking into the
cages with multiple combinations. Keep an eye out on the rows, columns and boxes you are studying.
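The combination-listing step is easy to mechanize; as an illustrative sketch (not part of the original article), a few lines of Python enumerate the combinations for any cage:

from itertools import combinations

def cage_combos(clue, size, digits=range(1, 10)):
    # All sets of distinct digits of the given size that sum to the clue
    return [c for c in combinations(digits, size) if sum(c) == clue]

print(cage_combos(14, 2))  # [(5, 9), (6, 8)]
print(cage_combos(19, 3))  # the five combinations for a 3-cell cage totalling 19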
|
{"url":"http://www.sudokuwiki.org/Print_Cage_Unit_Overlap","timestamp":"2014-04-17T01:16:37Z","content_type":null,"content_length":"5609","record_id":"<urn:uuid:2521ee89-b39d-4220-a9cf-0ebd0dd1c5ce>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Abelian Factor Group
November 29th 2007, 09:17 PM
Abelian Factor Group
Question: Let G be a group and N be a normal subgroup of G. Show that the factor group G/N is abelian iff aba^-1b^-1 is in N for all a,b in G.
If G/N is abelian then Na*Nb = Nb * Na and Naa^-1bb^-1 is in N and so Naba^-1b^-1 is in N
If aba^-1b^-1 is in N then Nb*Na = Nba = aba^-1b^-1ba which canceles to ab which I would like to say is Nab = Na*Nb.
I feel like I'm missing something though.
November 30th 2007, 05:03 AM
Question: Let G be a group and N be a normal subgroup of G. Show that the factor group G/N is abelian iff aba^-1b^-1 is in N for all a,b in G.
If G/N is abelian then Na*Nb = Nb * Na and Naa^-1bb^-1 is in N and so Naba^-1b^-1 is in N
If aba^-1b^-1 is in N then Nb*Na = Nba = aba^-1b^-1ba which canceles to ab which I would like to say is Nab = Na*Nb.
I feel like I'm missing something though.
yes, that was right but you can simplify it..
Let G be a group and N be a normal subgroup of G.
Let $a,b \in G$
G/N is abelian $\Longleftrightarrow aNbN = (ab)N = (ba)N = bNaN$ (let's take the middle..)
$\Longleftrightarrow (ab)(ba)^{-1} \in N$ definition of equal cosets..
$\Longleftrightarrow aba^{-1}b^{-1} \in N$.. QED.
November 30th 2007, 07:50 AM
Question: Let G be a group and N be a normal subgroup of G. Show that the factor group G/N is abelian iff aba^-1b^-1 is in N for all a,b in G.
If G/N is abelian then Na*Nb = Nb * Na and Naa^-1bb^-1 is in N and so Naba^-1b^-1 is in N
If aba^-1b^-1 is in N then Nb*Na = Nba = aba^-1b^-1ba which canceles to ab which I would like to say is Nab = Na*Nb.
I feel like I'm missing something though.
Try to prove a more useful related result.
Definition: An element expressible as $aba^{-1}b^{-1}$ is called a 'commutator'.
Definition: The 'commutator subgroup' $C$ is generated by all commutators.
Theorem: The factor group $G/N$ is abelian if and only if $N$ contains the commutator subgroup.
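In other words: since $N$ is a subgroup, it contains every commutator if and only if it contains the commutator subgroup $C$ that they generate, so the exercise above can be summarized as

$G/N$ is abelian $\Longleftrightarrow C \subseteq N$.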
|
{"url":"http://mathhelpforum.com/advanced-algebra/23818-abelian-factor-group-print.html","timestamp":"2014-04-16T13:29:30Z","content_type":null,"content_length":"7886","record_id":"<urn:uuid:7290eac9-cf5e-479a-9c4b-c2464aa4dbb6>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Outer Dimension of closed wooden box 10x8x7
The outer dimension of a closed wooden box are 10cm by 8cm by 7cm. Thickness of the wood is 1cm. find the total cost of the wood required to make the box if 1 cubic centimeter of wood costs $ 2.00/-.
kindly send me the solution steps?
Re: Outer Dimension of closed wooden box 10x8x7
admin_faisal wrote:The outer dimension of a closed wooden box are 10cm by 8cm by 7cm. Thickness of the wood is 1cm. find the total cost of the wood required to make the box if 1 cubic centimeter
of wood costs $ 2.00/-.
i'd do this by doing the sides one at a time, like as if you are building it. maybe do the bottom and top first. say that 10 by 8 is the bottom, so 7 is the height. it doesn't matter how you do it but picking an orientation makes it easier.
so the bottom is 10 by 8 by 1. use the volume formula to find the volume. times by 2 to get the volume for the top and the bottom.
say the front and back sides are 10 by 7. but 1 on each end is already done because of the top and bottom, so you're only really doing 10 by 5. they're 1 thick, so use that to find the volume. times
by 2 to get front & back.
then do the sides. don't forget that you already did 1 cm for the top and bottom and 1 cm for the front and back.
add up all the volumes and then multiply by the cost.
Re: Outer Dimension of closed wooden box 10x8x7
I didn't understand. kindly solve it in a mathematical steps.
Re: Outer Dimension of closed wooden box 10x8x7
admin_faisal wrote:I didn't understand. kindly solve it in a mathematical steps.
i gave you the steps. which part dind't you understand?
Re: Outer Dimension of closed wooden box 10x8x7
buddy wrote:
admin_faisal wrote:The outer dimension of a closed wooden box are 10cm by 8cm by 7cm. Thickness of the wood is 1cm. find the total cost of the wood required to make the box if 1 cubic
centimeter of wood costs $ 2.00/-.
i'd do this by doing the sides one at a time, like as if you are building it. maybe do the bottom and top first. say that 10 by 8 is the bottom, so 7 is the height. it doesn't matter how you do it but picking an orientation makes it easier.
so the bottom is 10 by 8 by 1. use the volume formula to find the volume. times by 2 to get the volume for the top and the bottom.
say the front and back sides are 10 by 7. but 1 on each end is already done because of the top and bottom, so you're only really doing 10 by 5. they're 1 thick, so use that to find the volume.
times by 2 to get front & back.
then do the sides. don't forget that you already did 1 cm for the top and bottom and 1 cm for the front and back.
add up all the volumes and then multiply by the cost.
10*8*1*2 = 160 (top and bottom)
10*5*1*2 = 100 (front and back sides)
6*5*1*2 = 60 (right and left sides: 6 wide, not 8, because the front and back pieces already take 1 cm at each end)
Total wood = 160 + 100 + 60 = 320 cubic cm, so the cost is 320 x $2.00 = $640.
Check: outer volume minus inner volume = 10*8*7 - 8*6*5 = 560 - 240 = 320, which matches.
Hope it helps
|
{"url":"http://www.purplemath.com/learning/viewtopic.php?p=7034","timestamp":"2014-04-20T09:04:58Z","content_type":null,"content_length":"25977","record_id":"<urn:uuid:e564b652-0eac-417e-a330-a03ca87b9dd0>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Browse by Warwick Author
Number of items: 17.
Croydon, David A.. (2013) Slow movement of a random walk on the range of a random walk in the presence of an external field. Probability Theory and Related Fields, Volume 157 (Number 3-4). pp.
515-534. ISSN 0178-8051
Croydon, David A., Fribergh, A. and Kumagai, T. (Takashi). (2013) Biased random walk on critical Galton–Watson trees conditioned to survive. Probability Theory and Related Fields, Volume 157 (Number
1-2). pp. 453-507. ISSN 0178-8051
Croydon, David A., Fribergh, A. and Kumagai, Takashi. (2012) Biased random walk on critical Galton–Watson trees conditioned to survive. Probability Theory and Related Fields . ISSN 0178-8051
Croydon, David A., Hambly, Ben M. and Kumagai, Takashi. (2012) Convergence of mixing times for sequences of random walks on finite graphs. Electronic Journal of Probability, Vol.17 . article no.3.
ISSN 1083-6489
Croydon, David A.. (2011) Scaling limit for the random walk on the largest connected component of the critical random graph. Research Institute for Mathematical Sciences. Publications, Vol.48 (No.2).
pp. 279-338. ISSN 1663-4926
Croydon, David A. and Hambly, Ben M.. (2010) Spectral asymptotics for stable trees. Electronic Journal of Probability, Vol.15 (No.57). pp. 1772-1801. ISSN 1083-6489
Croydon, David A.. (2010) Scaling limits for simple random walks on random ordered graph trees. Advances in Applied Probability, Vol.42 (No.2). pp. 528-558. ISSN 0001-8678
Croydon, David A.. (2009) Random walk on the range of random walk. Journal of Statistical Physics, Vol.136 (No.2). pp. 349-372. ISSN 0022-4715
Croydon, David A.. (2009) Hausdorff measure of arcs and Brownian motion on Brownian spatial trees. Annals of Probability, Vol.37 (No.3). pp. 946-978. ISSN 0091-1798
Croydon, David A.. (2008) Convergence of simple random walks on random discrete trees to Brownian motion on the continuum random tree. Annales de l'Institut Henri Poincaré (B). Probabilites et
Statistiques, Vol.44 (No.6). pp. 987-1019. ISSN 0246-0203
Croydon, David A. and Hambly, Ben M.. (2008) Local limit theorems for sequences of simple random walks on graphs. Potential Analysis, Vol.29 (No.4). pp. 351-389. ISSN 0926-2601
Croydon, David A. and Kumagai, Takashi. (2008) Random walks on Galton-Watson trees with infinite variance offspring distribution conditioned to survive. Electronic Journal of Probability, Vol.13 .
pp. 1419-1441. ISSN 1083-6489
Croydon, David A. and Hambly, Ben M.. (2008) Self-similarity and spectral asymptotics for the continuum random tree. Stochastic Processes and their Applications, Vol.118 (No.5). pp. 730-754. ISSN
Croydon, David A.. (2008) Volume growth and heat kernel estimates for the continuum random tree. Probability Theory and Related Fields, Vol.140 (No.1-2). pp. 207-238. ISSN 0178-8051
Croydon, David A.. (2007) The Hausdorff dimension of a class of random self-similar fractal trees. Advances in Applied Probability, Vol.39 (No.3). pp. 708-730. ISSN 0001-8678
Croydon, David A.. (2007) Heat kernel fluctuations for a resistance form with non-uniform volume growth. Proceedings of the London Mathematical Society, Vol.94 (No.3). pp. 672-694. ISSN 0024-6107
This list was generated on Wed Apr 16 11:28:14 2014 BST.
|
{"url":"http://wrap.warwick.ac.uk/view/author_id/4271.html","timestamp":"2014-04-16T10:28:14Z","content_type":null,"content_length":"26106","record_id":"<urn:uuid:ba5ba6d1-e291-4141-bb47-42a68d78e38b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Classical Mechanics/Lagrangian
Mechanics considered using forces
In Newtonian mechanics, a mechanical system is always made up of point masses or rigid bodies, and these are subject to known forces. One must therefore specify the composition of the system and the
nature of forces that act on the various bodies. Then one writes the equations of motion for the system. Here are some examples of how one describes mechanical systems in Newtonian mechanics (these
examples are surely known to you from school-level physics).
• Example: a free mass point.
This is the most trivial of all mechanical systems: a mass point that does not interact with any other bodies and is subject to no forces. Introduce the coordinates $x,y,z$ to describe the position
of the mass point. Since the force is always equal to zero, the equations of motion are $\ddot x=0, \ddot y=0, \ddot z=0$. The general solution of these equations describes a linear motion with
constant velocity: $x=x_0 + v_x t$, etc.
• Example: two point masses with springs attached to a motionless wall.
| \/\/\/ (m1) \/\/\ (m2) ----> x
Two masses can move along a line (the $x$ axis) without friction. The mass $m_1$ is attached to the wall by a spring, and the mass $m_2$ is attached to the mass $m_1$ by a spring. Both springs have
spring constant $k$ and the unstretched length $L$.
To write the equations of motion, we first introduce the two coordinates $x_1, x_2$ and then consider the forces acting on the two masses. The force on the mass $m_1$ is the sum of the
leftward-pointing force $F_1$ from the left spring and the rightward-pointing force $F_2$ from the right spring. The force on $m_2$ is a leftward-pointing $F_2$. By definition of a "spring" we have
$F_1 = k (x_1-L)$ and $F_2 = k (x_2 - x_1 - L)$. Therefore we write the equations for the accelerations $a_1, a_2$ of the two masses:
$a_1 = \ddot {x_1} = \frac{F_2 - F_1}{m_1} = \frac{k}{m_1} (x_2 -x_1 -L) - \frac{k}{m_1} (x_1 -L),$
$a_2 = \ddot {x_2} = - \frac{F_2}{m_2} = - \frac{k}{m_2} (x_2 -x_1 -L).$
At this point we are finished describing the system; we now need to solve these equations for particular initial conditions and determine the actual motion of this system.
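For illustration, these two equations are easy to integrate numerically; here is a minimal Python sketch (the values of $m_1, m_2, k, L$, the time span and the initial data are arbitrary choices):

from scipy.integrate import solve_ivp

m1 = m2 = 1.0    # masses (illustrative values)
k, L = 1.0, 1.0  # spring constant and unstretched length (illustrative values)

def rhs(t, y):
    x1, x2, v1, v2 = y
    a1 = (k / m1) * (x2 - x1 - L) - (k / m1) * (x1 - L)
    a2 = -(k / m2) * (x2 - x1 - L)
    return [v1, v2, a1, a2]

# Start slightly away from the equilibrium x1 = L, x2 = 2L, at rest
sol = solve_ivp(rhs, (0.0, 20.0), [L + 0.1, 2 * L, 0.0, 0.0])
print(sol.y[:2, -1])  # positions of the two masses at t = 20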
Introducing the action principle
The Lagrangian description of a mechanical system is rather different: First, we do not ask for the evolution of the system given some initial conditions, but instead assume that the position of the
system at two different time moments $t_1$ and $t_2$ is known and fixed. For convenience, let us collect all coordinates (such as $x,y,z$ or $x_1,x_2$ above) into one array of "generalized
coordinates" and denote them by $q_i$. So the "boundary conditions" that we impose on the system are $q_i(t_1)=A_i$ and $q_i(t_2)=B_i$, where $A_i,B_i$ are fixed numbers. We now ask: how does the
system move between the time moments $t_1$ and $t_2$. The Lagrangian description answers: during that time, the system must move in such a way as to give the minimum value to the integral $\int _
{t_1}^{t_2} L(q_i,\dot q_i) dt$, where $L(q_i, \dot q_i)$ is a known function called the Lagrange function or Lagrangian. For example, the Lagrangian for a free mass point is
$L(x,y,z,\dot x, \dot y, \dot z) = \frac{m}{2} [\dot x^2 +\dot y^2 +\dot z^2].$
The Lagrangian for the above example with two masses attached to the wall is
$L(x_1,x_2,\dot x_1, \dot x_2) = \frac{m_1}{2} \dot x_1^2 + \frac{m_2}{2} \dot x_2^2 - \frac{k}{2} (x_1-L)^2 - \frac{k}{2}(x_2-x_1-L)^2.$
For instance, according to the Lagrangian description, the free point mass moves in such a way that the functions $x(t),y(t),z(t)$ give the minimum value to the integral $\int _{t_1}^{t_2} \frac{m}
{2}[\dot x^2 +\dot y^2 +\dot z^2]dt$, where the values of $x(t),y(t),z(t)$ at times $t_{1,2}$ are fixed.
In principle, to find the minimum value of the integral $\int _{t_1}^{t_2} L(q_i,\dot q_i) dt$ one would have to evaluate that integral for each possible trajectory $q_i(t)$ and then choose the
"optimal" trajectory $q_i^{*}(t)$ for which this integral has the smallest value. (Of course, we shall learn and use a much more efficient mathematical approach to determine this "optimal" trajectory
instead of trying every possible set of functions $q_i(t)$.) The value of the mentioned integral is called the action corresponding to a particular trajectory $q_i(t)$. Therefore the requirement that
the integral should have the smallest value is often called "the principle of least action" or just action principle.
At this point, we need to answer the pressing question:
• How can it be that the correct trajectory $q_i^{*}(t)$ is found not by considering the forces but by requiring that some integral should have the minimum value? How does each point mass "know"
that it needs to minimize some integral when it moves around?
The short answer is that the least action requirement is mathematically equivalent to the consideration of forces if the Lagrangian $L$ is chosen correctly. The condition that some integral has the
minimum value (when the integral is correctly chosen) is mathematically the same as the Newtonian equations for the acceleration. The point masses perhaps "know" nothing about this integral. It is
simply mathematically convenient to formulate the mechanical laws in one sentence rather than in many sentences. (We shall see another, more intuitive explanation below.)
Suppose that we understand how the requirement that an integral has the minimum value can be translated into equations for the acceleration. Obviously the form of the integral needs to be different
for each mechanical system since the equations of motion are different. Then the second question presents itself:
• How can we find the Lagrange function $L(q_i, \dot q_i)$ corresponding to each mechanical system?
This is a more complicated problem and one needs to study many examples to gain a command of this approach. (In brief: the Lagrange function is the kinetic energy minus the potential energy.)
Before considering Lagrange functions, we shall look at how the mathematical requirement of "least action" can be equivalent to equations of motion such as given in the examples above.
Variation of a functional
A function is a map from numbers into numbers; a functional is a map from functions into numbers. An application of a functional to a function is usually denoted by square brackets, e.g. $S[f(x)]$.
Random examples of functionals, just to illustrate the concept:
$S[f(x)] = \int _0 ^\infty \sqrt{f(3x)}dx$
$S[f(x)] = \int _{-1} ^1 \frac{f(x)}{1-x^2}dx$
$S[f(x)] = f(15) - 8 f'(3) - \int _0 ^1 \sin\left[(f(x-2)+\sqrt{x}e^{-x})^3\right] dx$
In principle, a functional can be anything that assigns a number to any function. In practice, only some functionals are interesting and have applications in physics.
Since the action integral maps trajectories into numbers, we can call it the action functional. The action principle is formulated as follows: the trajectory $q_i(t)$ must be such that the action
functional evaluated on this trajectory has the minimum value among all trajectories.
This may appear to be similar to the familiar condition for the mechanical equilibrium: the coordinates $x,y,z$ are such that the potential energy has the minimum value. However, there is a crucial
difference: when we minimize the potential energy, we vary the three numbers $x,y,z$ until we find the minimum value; but when we minimize a functional, we have to vary the whole function $q_i(t)$
until we find the minimum value of the functional.
The branch of mathematics known as calculus of variations studies the problem of minimizing (maximizing, extremizing) functionals. One needs to learn a little bit of variational calculus at this
point. Let us begin by solving some easy minimization problems involving functions of many variables; this will prepare us for dealing with functionals which can be thought of as functions of
infinitely many variables. You should try the examples yourself before looking at the solutions.
Example 1: Minimize the function $f(x,y) = x^2+xy+y^2$ with respect to $x,y$.
Solution: Compute the partial derivatives of $f$ with respect to $x,y$. These derivatives must both be equal to zero. This can only happen if $x=0, y=0$.
Example 2: Minimize the function $f(x_1,...,x_n) = x_1^2+x_1 x_2+x_2^2+x_2 x_3+...+x_n^2$ with respect to all $x_j$.
Solution: Compute the partial derivatives of $f$ with respect to all $x_j$, where $j=1,...,n$. These derivatives must all be equal to zero. This can only happen if all $x_j=0$.
Example 3: Minimize the function $f(x_0,...,x_n) = (x_1-x_0)^2+(x_2-x_1)^2+...+(x_n-x_{n-1})^2$ with respect to all $x_j$ subject to the restrictions $x_0=0, x_n=A$.
Solution: Compute the partial derivatives of $f$ with respect to $x_j$, where $j=1,...,n-1$. These derivatives must all be equal to zero. This can only happen if $x_j-x_{j-1}=x_{j+1}-x_j$ for $j=1,2,...,n-1$. The values $x_0, x_n$ are known, therefore we find $x_j=jA/n$.
Intuitive calculation
Let us now consider the problem of minimizing the functional $S[x]=\int _0 ^1 {\dot x(t)}^2 dt$ with respect to all functions $x(t)$ subject to the restrictions $x(0)=0, x(1)=L$. We shall first
perform the minimization in a more intuitive but approximate way, and then we shall see how the same task is handled more elegantly by the variational calculus.
Let us imagine that we are trying to minimize the integral $S[x]$ with respect to all functions $x(t)$ using a digital computer. The first problem is that we cannot represent "all functions" $x(t)$
on a computer because we can only store finitely many values $x(t_0), x(t_1), ..., x(t_N)$ in an array within the computer memory. So we split the time interval $[0,1]$ into a large number $N$ of
discrete steps $[0,t_1], [t_1, t_2], ..., [t_{N-1}, 1]$, where the step size $t_j-t_{j-1}\equiv \Delta t = 1/N$ is small; in other words, $t_j = j/N, j=1, ..., N-1$. We can describe the function $x
(t)$ by its values $x_j$ at the points $t_j$, assuming that the function $x(t)$ is a straight line between these points. The time moments $t_1, ..., t_{N-1}$ will be kept fixed, and then the various
values $x_j$ will correspond to various possible functions $x(t)$. (In this way we definitely will not describe all possible functions $x(t)$, but the class of functions we do describe is broad
enough so that we get the correct results in the limit $N\to\infty$. Basically, any function $x(t)$ can be sufficiently well approximated by one of these "piecewise-linear" functions when the step
size $\Delta t$ is small enough.)
Since we have discretized the time and reduced our attention to piecewise-linear functions, we have
$\dot x = \frac{x_j-x_{j-1}}{\Delta t}$
within each interval $t\in[t_{j-1}, t_j]$. So we can express the integral $S[x]$ as the finite sum,
$S[x] =\int _0^1 {\dot x(t)}^2 dt = \sum_{j=1}^N \frac{{(x_j-x_{j-1})}^2}{\Delta t^2} \Delta t,$
where we have defined for convenience $t_0 =0, t_N = 1$.
At this point we can perform the minimization of $S[x]$ quite easily. The functional $S[x]$ is now a function of $N-1$ variables $x_1, ..., x_{N-1}$, i.e. $S[x]=S(x_1,...,x_{N-1})$, so the minimum is
achieved at the values $x_j$ where the derivatives of $S(x_1,...,x_{N-1})$ with respect to each $x_j$ are zero. This problem is now quite similar to Example 3 above, so the solution is $x_j = jL/N$, $j=0,...,N$. Now we recall that $x_j$ is the value of the unknown function $x(t)$ at the point $t_j=j/N$. Therefore the minimum of the functional $S[x]$ is found at the values $x_j$ that
correspond to the function $x(t)=Lt$. As we increase the number $N$ of intervals, we still obtain the same function $x(t)=Lt$, therefore the same function is obtained in the limit $N\to\infty$. We
conclude that the function $x(t)=Lt$ minimizes the functional $S[x]$ with the restrictions $x(0)=0, x(1)=L$.
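As a purely numerical illustration of the discretized procedure (a sketch; the method and all names here are my own, assuming the setup above), the conditions $\partial S/\partial x_j = 0$ reduce to $x_j = (x_{j-1}+x_{j+1})/2$, which can be solved by simple relaxation:

// Relaxation solve of the discretized minimization: each interior x_j
// is repeatedly replaced by the average of its neighbours, enforcing
// x_j - x_{j-1} = x_{j+1} - x_j with x_0 = 0 and x_N = L held fixed.
static double[] minimizeDiscretized(int N, double L, int sweeps) {
    double[] x = new double[N + 1]; // x[0] = 0 by default
    x[N] = L;
    for (int s = 0; s < sweeps; s++)
        for (int j = 1; j < N; j++)
            x[j] = 0.5 * (x[j - 1] + x[j + 1]);
    return x; // converges to the straight line x_j = j*L/N
}

Running this for any reasonable N and a few thousand sweeps reproduces the linear solution found analytically above.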
Variational calculation
The above calculation has the advantage of being more intuitive and visual: it makes clear that minimization of a functional $S[x(t)]$ with respect to a function $x(t)$ is quite similar to the
minimization of a function $S(x_1, ..., x_N)$ with respect to a large number of variables $x_j$ in the limit of infinitely many such variables. However, the formalism of variational calculus provides
a much more efficient computational procedure. Here is how one calculates the function $x(t)$ that minimizes $S[x]$.
Let us consider a very small change $\epsilon(t)$ in the function $x(t)$ and see how the functional $S[x]$ changes:
$\delta S[x(t), \epsilon(t)] \equiv S[x(t)+\epsilon(t)]-S[x(t)].$
(In many textbooks, the change in $x(t)$ is denoted by $\delta x(t)$, and generally the change of any quantity $Q$ is denoted by $\delta Q$. We chose to write $\epsilon(t)$ instead of $\delta x(t)$
for clarity.)
The functional $\delta S[x, \epsilon]$ is called the variation of the functional $S[x]$ with respect to the change $\epsilon(t)$ in the function $x(t)$. The variation is itself a functional depending
on two functions, $x(t)$ and $\epsilon(t)$. When $\epsilon(t)$ is very small, we expect that the variation will be linear in $\epsilon(t)$, just like the variation in the value of a normal function
is linear in the amount of change in the argument, e.g. $f(t+\alpha)-f(t)\approx f'(t) \alpha$ for small $\alpha$. So we expect that the variation $\delta S[x, \epsilon]$ of the functional $S[x]$ will be a
linear functional of $\epsilon(t)$. To understand what a linear functional looks like, consider a linear function $f(\epsilon_1, \epsilon_2, ...)$ depending on several variables $\epsilon_j$, $j=1,2,...$. This function
can always be written as
$f(\epsilon_1, \epsilon_2, ...) = \sum_{j} A_j \epsilon_j,$
where $A_j$ are suitable constants. Since a functional is like a function of infinitely many variables, the index $j$ becomes a continuous variable $t$, the variables $\epsilon_j$ and the constants
$A_j$ become functions $\epsilon(t),A(t)$, while the sum over $j$ becomes an integral over $t$. Thus, a linear functional of $\epsilon(t)$ can be written as an integral,
$\delta S[x, \epsilon]=\int _0^1 A(t) \epsilon(t) dt,$
where $A(t)$ is a suitable function. In the case of the usual function $f(t)$, the "suitable constant $A$" is the derivative $A=df(t)/dt$. By analogy we call $A(t)$ above the variational derivative
of the functional and denote it by $\delta S[x]/\delta x(t)$.
A function has a minimum (or maximum, or extremum) at a point where its derivative vanishes. Similarly, a functional $S[x(t)]$ has a minimum (or maximum, or extremum) at the function $x(t)$ where the
functional derivative vanishes. We shall justify this statement below; for now, let us compute the functional derivative of the functional $S[x(t)]=\int_0^1 \dot x^2 dt$.
Substituting $x(t)+\epsilon(t)$ instead of $x(t)$ into the functional, we get
$\delta S[x,\epsilon]=\int _0^1 [(\dot x+\dot \epsilon)^2-\dot x^2]dt=2\int_0^1\dot x \dot \epsilon dt +O(\epsilon^2),$
where we are going to neglect terms quadratic in $\epsilon(t)$ and so we didn't write them out. We now need to rewrite this integral so that no derivatives of $\epsilon(t)$ appear there; so we
integrate by parts and find
$\delta S[x(t),\epsilon(t)]= 2\left. \epsilon(t)\dot x(t)\right| _0^1 - 2\int _0^1 \ddot x(t) \epsilon(t) dt.$
Since in our case the values $x(0),x(1)$ are fixed, the function $\epsilon(t)$ must be such that $\epsilon(0)=\epsilon(1)=0$, so the boundary terms vanish. The variational derivative is therefore
$\delta S/\delta x(t) = -2\ddot x(t).$
The functional $S[x]$ has an extremum when its variation under an arbitrary change $\epsilon(t)$ is second-order in $\epsilon(t)$. However, above we have obtained the variation as a first-order
quantity, linear in $\epsilon(t)$; so this first-order quantity must vanish for $x(t)$ where the functional has an extremum. An integral such as $\int_0^1 A(t) \epsilon(t) dt$ can vanish for arbitrary $
\epsilon(t)$ only if the function $A(t)$ vanishes for all $t$. In our case, the "function $A(t)$," i.e. the variational derivative $\delta S/\delta x(t)$, is equal to $-2\ddot x(t)$. Therefore the
function $x(t)$ on which the functional $S[x]$ has an extremum must satisfy $-2\ddot x(t)=0$ or more simply $\ddot x=0$. This differential equation has the general solution $x(t)=a+bt$, and with the
additional restrictions $x(0)=0,x(1)=L$ we immediately get the solution $x(t)=Lt$.
General formulation
To summarize: the requirement that the functional $S[x(t)]$ must have an extremum at the function $x(t)$ leads to a differential equation on the unknown function $x(t)$. This differential equation is
found as
$\delta S[x]/\delta x(t) =0.$
The procedure is quite similar to finding an extremum of a function $f(t)$, where the point $t$ of the extremum is found from the equation $df(t)/dt=0$.
Suppose that we are now asked to minimize the functional $S[x(t)]=\int _0^1 (x^2+\dot x^2-x^4\sin t)dt$ subject to the restrictions $x(0)=0,x(1)=1$; in mechanics we shall mostly be dealing with
functionals of this kind. We might try to discretize the function $x(t)$, as we did above, but this is difficult. Moreover, for a different functional $S[x]$ everything will have to be computed anew.
Rather than go through the above procedure again and again, let us now derive the formula for the functional derivative for all functionals of this form, namely
$S[x_i(t)]=\int_a^b L(x_i,\dot x_i,t)dt,$
where $L(x_i,v_i,t)$ is a given function of the coordinates $x_i$ and velocities $v_i\equiv \dot x_i$ (assuming that there are $n$ coordinates, so $i=1,...,n$). This function $L(x_i,v_i,t)$ is
called the Lagrange function or simply the Lagrangian.
We introduce the infinitesimal changes $\epsilon_i(t)$ into the functions $x_i(t)$ and express the variation of the functional first through $\epsilon_i(t)$ and $\dot\epsilon_i(t)$,
$\delta S[x_i(t),\epsilon_i(t)]=\int_a^b \sum _{i=1}^n\left[ \frac{\partial L}{\partial x_i}\epsilon_i(t)+\frac{\partial L}{\partial v_i}\dot \epsilon_i(t) \right] dt.$
Then we integrate by parts, discard the boundary terms and obtain
$\delta S[x_i(t),\epsilon_i(t)]=\int_a^b \sum _{i=1}^n\left[ \frac{\partial L}{\partial x_i}-\frac{d}{dt}\frac{\partial L}{\partial v_i} \right] \epsilon_i(t)dt.$
Thus the variational derivatives can be written as
$\frac{\delta S[x]}{\delta x_i(t)}=\frac{\partial L}{\partial x_i}-\frac{d}{dt}\frac{\partial L}{\partial v_i}.$
Euler-Lagrange equations
Consider again the condition for a functional to have an extremum at $x_i(t)$: the first-order variation must vanish. We have derived the above formula for the variation $\delta S[x_i, \epsilon_i]$.
Since all $\epsilon_i(t)$ are completely arbitrary (subject only to the boundary conditions $\epsilon_i(a)=\epsilon_i(b)=0$), the first-order variation vanishes only if the functions in square
brackets all vanish at all $t$. Therefore we obtain the Euler-Lagrange equations
$\frac{\partial L}{\partial x_i}-\frac{d}{dt}\frac{\partial L}{\partial v_i} = 0.$
These are the differential equations that express the mathematical requirement that the functional $S[x_i(t)]$ has an extremum at the set of functions $x_i(t)$. There are as many
equations as unknown functions $x_i(t)$, one equation for each $i=1,...,n$.
Note that the Euler-Lagrange equations involve partial derivatives of the Lagrangian with respect to coordinates and velocities. The derivatives with respect to velocities $v=\dot x$ are sometimes
written as $\partial L/\partial \dot x$ which might at first sight appear confusing. However, all that is meant by this notation is the derivative of the function $L(x,v,t)$ with respect to its
second argument.
The Euler-Lagrange equations also involve the derivative $d/dt$ with respect to the time. This is not a partial derivative with respect to $t$ but a total derivative. In other words, to compute $\
frac{d}{dt}\frac{\partial L}{\partial \dot x_i}$, we need to substitute the functions $x_i(t)$ and $\dot x_i(t)$ into the expression $\frac{\partial L}{\partial \dot x_i}$, thus obtain a function of
time only, and then take the derivative of this function with respect to time.
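As a quick worked check, take the one-dimensional Lagrangian $L=\frac{1}{2}m\dot x^2 - V(x)$. Then $\partial L/\partial x = -V'(x)$ and $\partial L/\partial \dot x = m\dot x$, so the Euler-Lagrange equation $\frac{\partial L}{\partial x}-\frac{d}{dt}\frac{\partial L}{\partial \dot x}=0$ becomes $m\ddot x = -V'(x)$, which is precisely Newton's second law with the force $F=-V'(x)$.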
Remark: If the Lagrangian contains higher derivatives (e.g. the second derivative), the Euler-Lagrange formula is different. For example, if the Lagrangian is $L=L(x,\dot x, \ddot x)$, then the
Euler-Lagrange equation is
$\frac{\partial L}{\partial x}-\frac{d}{dt}\frac{\partial L}{\partial \dot x} + \frac {d^2}{dt^2}\frac{\partial L}{\partial \ddot x}= 0.$
Note that this equation may be up to fourth-order in time derivatives! Usually, one does not encounter such Lagrangians in studies of classical mechanics because ordinary systems are described by
Lagrangians containing only first-order derivatives.
Summary: In mechanics, one specifies a system by writing a Lagrangian and pointing out the unknown functions in it. From that, one derives the equations of motion using the Euler-Lagrange formula.
You need to know that formula really well and to understand how to apply it. This comes only with practice.
How to choose the Lagrangian
The basic rule is that the Lagrangian is equal to the kinetic energy minus the potential energy. (Both should be measured in an inertial system of reference! In a non-inertial system, this rule may fail.)
It can be shown that this rule works for an arbitrary mechanical system made up of point masses, springs, ropes, frictionless rails, etc., regardless of how one introduces the generalized
coordinates. We shall not study the proof of this statement, but instead go directly to the examples of Lagrangians for various systems.
Examples of Lagrangians
• The Lagrangian for a free point mass moving along a straight line with coordinate $x$:
$L=\frac{1}{2} m {\dot x}^2.$
• A point mass moving along a straight line with coordinate $x$, in a force field with potential energy $V(x)$:
$L=\frac{1}{2} m {\dot x}^2 - V(x).$
• A point mass moving in three-dimensional space with coordinates $x_i\equiv (x,y,z)$, in a force field with potential energy $V(x,y,z)$:
$L=\frac{m}{2}\sum_{i=1}^3 {\dot x_i}^2 - V(x,y,z)=\frac{m}{2}|\dot{\vec x}|^2-V(\vec x).$
• A point mass constrained to move along the circle $x^2+z^2=R^2$ in the gravitational field near the Earth (the $z$ axis is vertical). It is convenient to introduce the angle $\theta$ as the
coordinate, with $z=R\cos\theta, x=R\sin\theta$. Then the potential energy is $U=mgz=mgR\cos\theta$, while the kinetic energy is $K=mv^2/2=mR^2\omega^2/2=mR^2\dot\theta^2/2$. So the Lagrangian is
$L=\frac{m}{2}R^2\dot\theta^2 - mgR\cos\theta.$
Note that we have written the Lagrangian (and therefore we can derive the equations of motion) without knowing the force needed to keep the mass moving along the circle. This shows the great
conceptual advantage of the Lagrangian approach; in the traditional Newtonian approach, the first step would be to determine this force, which is initially unknown, from a system of equations
involving an unknown acceleration of the point mass.
• Two (equal) point masses connected by a spring with length $l$:
$L=\frac{m}{2}(\dot x_1^2 + \dot x_2^2)-\frac{k}{2}(x_1-x_2-l)^2.$
• A mathematical pendulum, i.e. a massless rigid stick of length $l$ with a point mass attached at the end, that can move only in the $x-z$ plane in the gravitational field near the Earth (vertical
$z$ axis). As the coordinate, we choose the angle $\theta$ between the stick and the $z$ axis. The Lagrangian is
$L=\frac{m}{2} l^2 \dot \theta^2 +mgl\cos\theta .$
• A point mass $m$ sliding without friction along an inclined plane that makes an angle $\alpha$ with the horizontal, in the gravitational field of the Earth. As coordinates, we choose $x,y$,
where $x$ is the horizontal coordinate along the direction of steepest ascent of the incline and $y$ is the horizontal coordinate along the incline. The height $z$ is then $z=x\tan\alpha$, so the potential energy is $U=mgz=mgx\tan\alpha$. The kinetic energy is computed as
$K=\frac{m}{2} (\dot x^2 + \dot y^2 + \dot z^2)=\frac{m}{2} (\dot x^2/\cos^2\alpha + \dot y^2).$
Hence, the Lagrangian is
$L=K-U=\frac{m}{2} (\dot x^2/\cos^2\alpha + \dot y^2) -mgx\tan\alpha.$
Further work
Exercise: You should now determine the Euler-Lagrange equations that follow from each of the above Lagrangians and verify that these equations are the same as would be obtained from school-level
Newtonian considerations for the respective physical systems. This should occupy you for at most an hour or two. Only then will you begin to appreciate the power of the Lagrangian approach.
For more examples of setting up Lagrangians for mechanical systems and for deriving the Euler-Lagrange equations, ask your physics teacher or look up in any theoretical mechanics problem book. Much
of the time, the Euler-Lagrange equations for some complicated system (say, a pendulum attached to the endpoint of another pendulum) would be too difficult to solve, but the point is to gain
experience deriving them. Their derivation would be much less straightforward in the old Newtonian approach using forces.
If this is your first time looking at Lagrangians, you might still be asking yourself: how could the motion of a system be described by saying that some integral has the minimal value? Is it a purely formal mathematical trick, and if not, how can one get a more visually intuitive understanding?
Math Is Easy
Geometric properties concern the measurement and analysis of shapes, angles, points, planes, and surfaces. A shape is the external form of an object, such as a circle, triangle, quadrilateral,
parallelogram, rectangle, trapezoid, rhombus, octagon, pentagon, or hexagon. Geometry is all about shapes and their properties. The two most common subjects are plane geometry and solid
geometry. Geometry, literally "earth measuring," is a branch of mathematics originally concerned with practical knowledge of lengths, areas, and volumes.
Properties of Geometric Solids
Volume, mass, weight density, and surface area are properties that all solids possess. These properties are used by engineers and manufacturers to determine material type, cost, and other factors
associated with the design of objects.
Volume (V) refers to the amount of space occupied by an object or enclosed within a container.
Mass (M) refers to the amount of matter in an object. It is frequently confused with the notion of weight in the metric system.
Weight (W) is the force of gravity acting on an object. It is frequently confused with the notion of mass in the English system.
Weight density (W[D]) is an object's weight per unit volume.
There is a difference between area (A) and surface area (SA).
Area measures the two-dimensional space enclosed by a shape.
Surface area is the sum of the areas of all the faces of a three-dimensional solid.
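For example, a rectangular box with length l, width w, and height h has volume V = lwh and surface area SA = 2(lw + lh + wh); a box measuring 2 by 3 by 4 units therefore has V = 24 cubic units and SA = 2(6 + 8 + 12) = 52 square units.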
Activity of geometric solids and their properties
In prearranged cooperative groups, let students hold, inspect, and discuss the physical characteristics of the geometric solids.
Each group will cooperatively produce a Venn diagram based on the geometric solids in front of them. The diagrams will then be shared with the class and used to compare and contrast the solids.
As a group, produce a list of characteristics for each solid and record the information on a chart.
Exhibit a number of physical models of geometric solids that are seen in the world around us on an everyday basis.
Learn to Program using Alice
Expressions and Operators
Learn about statements, expressions, arithmetic operators, and equality operators.
Published: April 5, 2007
Last updated: November 5, 2007
By Richard G. Baldwin
Alice Programming Notes # 160
This tutorial lesson is part of a series designed to teach you how to program using the Alice programming environment under the assumption that you have no prior programming knowledge or experience.
Have some fun
Because Alice is an interactive graphic 3D programming environment, it is not only useful for learning how to program, Alice makes learning to program fun. Therefore, you should be sure to explore
the many possibilities for being creative provided by Alice while you are learning to program using these tutorials. And above all, have fun in the process of learning.
In the previous lesson titled "Syntax, Runtime, and Logic Errors" I taught you about syntax errors, runtime errors, and logic errors, and some of the ways to avoid them.
Up to this point in this series, we have pretty much been kicking the tires and polishing the chrome on this programming vehicle. It's time to open the hood and start getting our hands dirty.
In this lesson, I will teach you about expressions, arithmetic operators, and equality operators. Future lessons will teach you about the following topics, with a few other topics thrown in for good measure:
Sequence, selection, and loop structures
Relational and logical operators
Counter loops, nested loops, and sentinel loops
Arrays and lists
Events and event handling
The capabilities represented by these topics are the capabilities that separate a real computer from a programmable device such as a microwave oven or a VCR recorder, so this is the real core of
computer programming.
I recommend that you open another copy of this document in a separate browser window and use the following links to easily find and view the figures and listings while you are reading about them.
Once you have mastered Alice, I recommend that you also study the other lessons in my extensive collection of online programming tutorials. You will find a consolidated index at www.DickBaldwin.com.
All of the computer programming languages with which I am familiar consist of statements, which, in turn, consist of expressions.
(An expression is a specific combination of operators and operands, which evaluates to a particular result.
The operands can be variables, literals, or method calls that return a value.)
In your past experience, you may have referred to expressions by the names formulas or equations.
Although formulas and equations are not exactly the same thing as expressions, they are close enough to help you understand what expressions are and how they are used.
A statement is a specific combination of expressions. In Java, C++, and C# a statement is terminated by a semicolon. The same appears to be true in Alice, but because the drag and drop paradigm takes
care of syntax issues like that for you, it really isn't important.
The following is an example of a statement comprised of expressions.
z = x + y;
Operationally, in the above statement, values are retrieved from the variables named x and y. These two values are added together. The result is stored in (assigned to) the variable named z,
replacing whatever value may previously have been contained in that variable.
Operators are the action elements of a computer program.
They perform actions such as:
• Adding two variables
• Dividing one variable by another variable
• Comparing one variable to another variable, etc.
Operators operate on operands. Stated differently, operands are the things that are operated on by operators.
For example, in the following expression, the plus character is an operator while x and y are operands.
x + y
Assuming that x and y are numeric variables, this expression produces the sum of the values stored in the two variables.
In some languages, if x and y are string variables, this expression produces a new string, which is the concatenation of the string contents of the two string variables. (Alice provides a function
that is used to accomplish string concatenation.) According to the current jargon, the plus character is an overloaded operator. Its specific behavior depends on the types of its operands.
The variable x would be called the left operand and the variable y would be called the right operand.
Unary, binary, and ternary operators
Java, C++, and C# provide operators that can be used to perform an action on one, two, or three operands.
An operator that operates on one operand is called a unary operator. (Alice supports at least one unary operator: the increment operator.)
An operator that operates on two operands is called a binary operator. (Alice supports several binary operators.)
An operator that operates on three operands is called a ternary operator. (As near as I have been able to determine, Alice does not support ternary operators.)
Some operators can be either unary or binary
In Java, C++, and C#, some operators can behave either as a unary or as a binary operator. The best known operator that can behave either way in those languages is the minus sign. (While Alice has
both unary and binary operators, as near as I have been able to determine, Alice does not support any operators that behave as both unary and binary operators.)
The minus sign as binary operator
As a binary operator, the minus sign causes its right operand to be subtracted from its left operand (provided that the two operands evaluate to numeric values).
For example, the following code subtracts the variable y from the variable x and assigns the result of the subtraction to the variable z. After the third statement is executed, the variable z
contains the value 1.
int x = 6;
int y = 5;
int z = x - y;
The minus sign as unary operator
Just to expand your thinking a little beyond Alice, in Java, C++, and C#, as a unary operator, the minus sign causes the algebraic sign of the right operand to be changed.
For example, the following two statements cause a value of -5 to be stored in the variable x.
int y = 5;
int x = -y;
Binary operators use infix notation
To keep you abreast of the current jargon, binary operators in Alice as well as Java, C++, and C# use infix notation. This means that the operator appears between its operands.
Prefix and postfix notation
In those other languages, (but apparently not in Alice) there are some unary operators that use prefix notation. In Alice and those other languages, there is at least one unary operator that uses
postfix notation. This operator is the increment operator, which I will explain in a future lesson.
For prefix notation, the operator appears before (to the left of) its operand.
For postfix notation, the operator appears after (to the right of) its operand.
General behavior of an operator
As a result of performing the specified action, an operator can be said to return a value (or evaluate to a value) of a given type.
The type of value returned depends on the operator and on the types of the operands.
(To evaluate to a value means that after the action is performed, the operator and its operands are effectively replaced in the expression by the value that is returned.)
Operator categories
There are many different categories of operators in most programming languages. The operators in those different categories have different purposes. This lesson will not attempt to teach you about,
or even to introduce you to all of the different operators. Rather, this lesson will teach you about arithmetic operators and equality operators. Future lessons will teach you about relational
operators and logical operators.
You will learn about many other operators in due time if you continue to pursue an education in computer programming using Java, C++, or C#.
Programming languages such as Java, C++, and C# typically support the five arithmetic operators shown in Figure 1.
Figure 1. Arithmetic operators.
│Operator Description │
│ │
│ + Adds its operands │
│ - Subtracts the right operand│
│ from the left operand │
│ * Multiplies the operands │
│ / Divides the left operand by│
│ the right operand │
│ % Remainder of dividing the │
│ left operand by the right │
│ operand - modulus operator │
│ A caution about the IEEERemainder function │
│ If you call the IEEERemainder function to get the remainder of 7/2, the value returned is -1, not the 1 that a modulus operator would produce. │
│ │
│ If you call the IEEERemainder function to get the remainder of 8/3, the value returned is -1, not 2. │
│ │
│ If you call the IEEERemainder function to get the remainder of 5/2, the value returned is 1, which matches the modulus result. │
│ │
│ This behavior is not random: the function follows the IEEE 754 definition of remainder, x - y*n, where n is x/y rounded to the nearest integer (with ties rounded to even). The result can │
│ therefore be negative, and it agrees with the conventional modulus only some of the time. │
Alice supports four of the five
Alice supports the first four operators shown in Figure 1 but as near as I have been able to determine, does not support the modulus operator. However, the world has a function named
Math.IEEERemainder that may serve the same purpose. Note, however, that it is a function and not an operator.
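For comparison outside Alice, Java provides a function with the same IEEE 754 semantics, spelled Math.IEEEremainder; a short sketch shows why its results differ from a conventional modulus:

// IEEE 754 remainder is x - y*n, where n is x/y rounded to the nearest
// integer (ties round to even), so the result can be negative.
System.out.println(Math.IEEEremainder(7, 2)); // -1.0 (7/2 = 3.5 rounds to 4)
System.out.println(Math.IEEEremainder(8, 3)); // -1.0 (8/3 rounds to 3)
System.out.println(Math.IEEEremainder(5, 2)); //  1.0 (5/2 = 2.5 rounds to 2)
System.out.println(7 % 2);                    //  1   (conventional modulus)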
Using the arithmetic operators
The arithmetic operators in Alice are exposed whenever you click the triangle immediately to the right of a component in an expression that can serve as the left operand for the operator. An example
of this procedure is shown in Figure 2.
Figure 2. Arithmetic operators exposed.
How did I produce the outcome shown in Figure 2?
In Figure 2, I clicked the triangle immediately to the right of 1 meter. That exposed the topmost menu shown in Figure 2. One of the selections on that menu reads math. When I selected the math item,
the menu immediately to its right opened up showing the left operand and a choice of each of the four arithmetic operators. When I selected the item that reads 1+, that exposed the rightmost menu in
Figure 2.
The b at the top of that menu indicates that the rightmost menu represents the right operand for the selected arithmetic operator. As usual, for numeric data, there was a choice of values plus the
item that reads other... As you have seen before, selecting other... exposes a numeric keypad that allows you to specify any numeric value. It also allows you to compute the quotient of two numeric
values and to specify that quotient as the right operand.
Mixing arithmetic operators
When using the drag and drop paradigm to construct expressions containing arithmetic operators, you must be careful to construct them in such a way that they will be evaluated properly. An example of
this situation is shown in Figure 3.
Figure 3. Two examples of mixed operator types.
Figure 3 contains two statements that evaluate an arithmetic expression using the current values of the variables a and b, along with the literal constant 3. The expressions also include the addition
operator (+) and the multiplication operator (*). Given the values shown for the two variables, the result of evaluating the arithmetic expression in the first statement would be 21. The result of
evaluating the arithmetic expression in the second statement would be 11.
Can you see the difference?
Basically when expressions containing terms that are grouped within matching parentheses are evaluated, the contents of the inner-most pair of parentheses are evaluated first. Then that pair of
parentheses and its contents are replaced by the value that was produced and that value is used to evaluate the next inner-most pair of parentheses. This process continues until all of the
parentheses have been eliminated. (Do you remember doing this in your high school algebra class?)
Using this approach, we see that the result of evaluating the arithmetic expression in the first statement in Figure 3 is 7*3 or 21. The result of evaluating the arithmetic expression in the second
statement is 5+6 or 11. Just remember, parentheses are always cleared out beginning with the inner-most parentheses and working outward to the outer-most pair of parentheses.
At this point in your studies, you should be able to create the two statements shown in Figure 3. See if you can do it.
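In Java-like notation the two groupings look as follows (a sketch; the values a = 5 and b = 2 are assumptions consistent with the results 21 and 11 quoted above):

double a = 5, b = 2;
double first = (a + b) * 3;  // inner parentheses first: 7 * 3 = 21
double second = a + (b * 3); // inner parentheses first: 5 + 6 = 11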
Mixed-type arithmetic
Programming languages such as Java, C++, and C# have several different numeric types. Some of those types are whole number or integer types. The others are types that can contain a fractional part.
However, Alice has only one numeric type (Number) and it is a type that can contain a fractional part.
When you master Alice and move on to those other programming languages, you will find that there are some subtle, somewhat tricky, and important issues involved in performing arithmetic using values
of different types. For the time being, however, you can relax because you won't have to deal with those issues with Alice. Don't become too complacent, however. If you continue in your efforts to
become a computer programmer, the day will come when you too will have to deal with the problems associated with mixed-type arithmetic.
There are two more operators that I want to teach you about before we leave this lesson: the equality operators. Each of these operators is a binary operator that returns the boolean value true or
false, depending on whether its two operands are equal or not. The operators are pictured in Figure 4.
Figure 4. Equality operators.
│Operator Description │
│ │
│ == Returns true if operands are equal. │
│ Otherwise returns false. │
│ != Returns true if operands are not equal.│
│ Otherwise returns false. │
An example of equality operators
The Alice code in Figure 5 illustrates the behavior of both operators.
Figure 5. Illustration of "equal" and "not equal" operators.
│ Variable names │
│ While I normally prefer to use meaningful variable names beginning with lower-case characters and camelCase, I elected to use single characters for the names in this example so that the code │
│ would fit into this narrow publication format without having to be reduced. I elected to use upper-case characters because they stand out better in the text that discusses the variables. │
Performing operations on three variables
The simple code in Figure 5 declares and initializes three variables named A, B, and C. The variables A and B are each of type Number, and are given values of 5 and 2 respectively. Thus, the values
in these two variables are not equal to one another.
The variable named C is of type Boolean, and its initial value is immaterial because that value will be overwritten by the code in Figure 5 when the program is run.
Evaluating the expression
The first statement in Figure 5 evaluates the expression A == B
and stores the value that is returned in the variable named C, overwriting the previous contents of C. The second statement in Figure 5 prints the value stored in C, producing the first line of
output text in Figure 6. As you can see, the printed output indicates that the expression is false because A is not equal to B.
Figure 6. Output produced by the code in Figure 5.
│the value of world.main.C is false │
│the value of world.main.C is true │
Evaluating the other expression
The third statement in Figure 5 evaluates the expression A != B
and stores the result in variable C, once again overwriting the previous contents of C. The fourth statement in Figure 5 prints the contents of C, producing the second line of text in Figure 6. The
printed output indicates that the expression is true because A is not equal to B.
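The same behavior can be sketched in Java (an analogue of the Alice code, using the values from the discussion above):

double A = 5, B = 2;
boolean C = (A == B);
System.out.println("the value of C is " + C); // false: A is not equal to B
C = (A != B);
System.out.println("the value of C is " + C); // true: A is not equal to B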
Modify values for A and B
Figures 7 and 8 show the result of modifying the values of the two variables so that the value of A is equal to the value of B and running the program again.
Figure 7. Illustration of "equal" and "not equal" operators.
Figure 8. Output produced by the code in Figure 7.
│the value of world.main.C is true │
│the value of world.main.C is false │
Make certain that you understand the outcome
We will be using these two operators extensively in a future lesson on selection and loop structures, so you need to make certain that you understand their behavior before leaving this lesson.
In this lesson, I taught you about expressions, arithmetic operators, and equality operators.
Future lessons will teach you about the following topics, with a few other topics thrown in for good measure:
Sequence, selection, and loop structures
Relational and logical operators
Counter loops, nested loops, and sentinel loops
Arrays and lists
Events and event handling
There is no lab project for this lesson. You will use the expressions and operators that you learned about in this lesson in the lab projects for the lessons that follow.
Copyright 2007, Richard G. Baldwin. Faculty and staff of public and private non-profit educational institutions are granted a license to reproduce and to use this material for purposes consistent
with the teaching process. This license does not extend to commercial ventures. Otherwise, reproduction in whole or in part in any form or medium without express written permission from Richard
Baldwin is prohibited.
Richard Baldwin is a college professor (at Austin Community College in Austin, TX) and private consultant whose primary focus is a combination of Java, C#, and XML. In addition to the many platform
and/or language independent benefits of Java and C# applications, he believes that a combination of Java, C#, and XML will become the primary driving force in the delivery of structured information
on the Web.
Richard has participated in numerous consulting projects and he frequently provides onsite training at the high-tech companies located in and around Austin, Texas. He is the author of Baldwin's
Programming Tutorials, which have gained a worldwide following among experienced and aspiring programmers. He has also published articles in JavaPro magazine.
In addition to his programming expertise, Richard has many years of practical experience in Digital Signal Processing (DSP). His first job after he earned his Bachelor's degree was doing DSP in the
Seismic Research Department of Texas Instruments. (TI is still a world leader in DSP.) In the following years, he applied his programming and DSP expertise to other interesting areas including sonar
and underwater acoustics.
Richard holds an MSEE degree from Southern Methodist University and has many years of experience in the application of computer technology to real-world problems.
Twitter Interview Questions
You have n - 1 numbers from 1 to n. Your task is to find the missing number.
n = 5
v = [4, 2, 5, 1]
The result is 3.
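One standard approach (a sketch, not an official solution): the numbers 1 through n sum to n*(n+1)/2, so the missing number is that total minus the sum of the array. In Java:

// Sum of 1..n minus the sum of the n-1 given values yields the
// missing number; long arithmetic avoids overflow for large n.
static int missingNumber(int n, int[] v) {
    long expected = (long) n * (n + 1) / 2;
    long actual = 0;
    for (int value : v) actual += value;
    return (int) (expected - actual); // for the example: 15 - 12 = 3
}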
Given an array with numbers, your task is to find 4 numbers that will satisfy this equation:
A + B + C = D
given two nodes of a binary tree, find number of nodes on the path between the two nodes.
4, 3 -> (4-2-1-3): 4
Two sorted 2D arrays, get the third one sorted
A = [["a", 1], ["b", 2]] sorted all elements have different names
B = [["a", 2], ["c", 3]] sorted
C = [["a", 3], ["b", 2], ["c", 3]] sorted
Implement LRU cache.
Given a list of names. Find whether a particular name occurs inside a given tweet or not. If found return true otherwise false Time complexity should be less than O(n).
Ex: "Katy Perry","Ronan Keating" given as a list of string.
List<String> names;
bool findName(String tweet)
Given a preorder sequence from a Binary Tree, how many unique trees can be created from this? (They want a recurrence relation and start with the easy cases):
T(0) = 1
T(1) = 1
T(2) = 2
What is T(N) ?
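For reference (an added note): choosing how many of the remaining N-1 nodes go into the left subtree gives the recurrence T(N) = sum over k from 0 to N-1 of T(k)*T(N-1-k), whose solution is the Catalan number T(N) = C(2N, N)/(N+1); this matches T(0)=1, T(1)=1, T(2)=2.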
For a technical phone screen:
Given a string "aaabbcccc", write a program to find the character with the second highest frequency.
Given 4 points, whose x and y coordinates are both integers. They are all different. Write a function to check if they form a square.
(Note: the points can be given in any order.)
Given a Tuple for eg. (a, b, c)..
Output : (*, *, *), (*, *, c), (*, b, *), (*, b, c), (a, *, *), (a, *, c), (a, b, *), (a, b, c)
String getSentence(String text, Set<String> dictionary);
// text is a string without spaces, you need to insert spaces into text, so each word seperated by the space in the resulting string exists in the dictionary, return the resulting string
// running time has to be at least as good as O(n)
// getSentence("iamastudentfromwaterloo", {"from, "waterloo", "hi", "am", "yes", "i", "a", "student"}) -> "i am a student from waterloo"
Build an HTTP Library that is common and can be used by various clients like Twitter, Gmail, facebook etc. What features would you add into this library
Write a class hashmap and implement: insert, get and delete (Open addressing/chaining). What points to be considered before going to production (focus on concurrency)
Design a modified stack that in addition to Push and Pop can also provide minimum element present in the stack via Min function.
Design a hash table that is thread safe. That is it can support concurrent reads but protects on write.
Design a collaborative text editor where each participant has infinite undo/redo. Consider the scenario where a user goes offline and then comes online and tries to undo/redo.
Given a sorted array that has been rotated, find a given number. For example take an array: 1 3 8 10 12 56 and rotate it so you have 10 12 56 1 3 8 and then find a candidate e.g. 3 in it.
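One common approach (a sketch, assuming distinct values): binary search that first decides which half of the rotated array is sorted. In Java:

// O(log n) search in a sorted-then-rotated array of distinct values.
static int searchRotated(int[] a, int target) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (a[mid] == target) return mid;
        if (a[lo] <= a[mid]) { // left half [lo..mid] is sorted
            if (a[lo] <= target && target < a[mid]) hi = mid - 1;
            else lo = mid + 1;
        } else {               // right half [mid..hi] is sorted
            if (a[mid] < target && target <= a[hi]) lo = mid + 1;
            else hi = mid - 1;
        }
    }
    return -1; // not present
}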
Given a string representing sorted numbers with spaces print the count of each number. For example if the input string is: "1 1 2 3 4 4" then you should print 1:2, 2:1, 3:1, 4:2
Then the question was modified so there could be invalid number in the string which must be skipped.
Then an added requirement to handle hex numbers in the string.
Given a string representing roman numeral, find and return its numeric value. e.g. XXIV = 24 and so on.
Leader election algorithm in a distributed system.
Systems of Linear Equations: Hey, Soul System (True or False)
1. Complete the following statement as accurately as possible: A solution to a system of equations -> must satisfy one or more of the equations in the system.
2. Which of the following statements is false? -> A system of two linear equations may have one solution.
3. What is the solution to the system of equations shown below?
4. Complete the following statement as accurately as possible: If two lines are parallel, they have -> a good chance of making it onto the U.S. Olympic gymnastics team.
5. Which of the four points
   I. (2, 6)
   II. (0, -2)
   III. (1, -1)
   IV. (3, 1)
   are solutions to the system of equations?
6. To eliminate the variable x from the system of equations, we could -> multiply the first equation by -5 and the second equation by 3.
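To illustrate the elimination step in question 6 with a made-up system (the quiz's own equations were displayed as images and are not reproduced here): take 3x + 5y = 11 and 5x + 3y = 13. Multiplying the first equation by -5 and the second by 3 gives -15x - 25y = -55 and 15x + 9y = 39; adding these eliminates x, leaving -16y = -16, so y = 1 and then x = 2.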
7. Find the value of x in the solution to the system of equations
8. Find the value of y in the solution to the system of equations
9. Solve the system of equations
10. Solve the system of equations
Math Forum - Problems Library - Math Fundamentals, Measurement/Units
These problems require students to manipulate and calculate different types of measurements. They include geometric measurements, as well as the concepts of money and time.
Related Resources
Interactive resources from our Math Tools project:
Math 4: Measurement
The closest match in our Ask Dr. Math archives:
Elementary Measurement
NCTM Standards:
Measurement Standard for Grades 3-5
Math: Gift from God or Work of Man?
Calculus 1: Derivatives as the Mathematics of Transcending, Used to Handle Changing Quantities
Calculus 2: Integrals as the Mathematics of Unification, Used to Handle Wholeness
Calculus 3: Unified Management of Change in All Possible Directions
Calculus 4: Locating Silence within Dynamism
Evolution, a Counterargument to the Divine Nature of Mathematics
Of course, there are more sophisticated ideas that are vaguely similar, and there have been first-rate scientists who have taken mathematics to be some sort of divine manifestation. One of the most
well-known such arguments is due to physicist Eugene Wigner. In his famous 1960 paper, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences," he maintained that the ability of
mathematics to describe and predict the physical world is no accident, but rather is evidence of a deep and mysterious harmony.
But is the usefulness of mathematics really so mysterious? There is a quite compelling alternative explanation why mathematics is so useful. We count, we measure, we employ basic logic, and these
activities are stimulated by ubiquitous aspects of the physical world. The size of a collection (of stones, grapes, animals), for example, is associated with the size of a number and keeping track of
it leads to counting. Putting collections together is associated with adding numbers, and so on.
Another metaphor associates the familiar realm of measuring sticks (small branches, say, or pieces of string) with the more abstract one of geometry. The length of a stick is associated with the size
of a number (once some segment is associated with the number one), and relations between the numbers associated with a triangle, say, are noted. (Scores of such metaphors underlying more advanced
mathematical disciplines have been developed by linguist George Lakoff and psychologist Rafael Nunez in their book, "Where Mathematics Comes From.")
Once part of human practice, these various notions are abstracted, idealized and formalized to create basic mathematics, and the deductive nature of mathematics then makes this formalization useful
in realms to which it is only indirectly related.
The universe acts on us, we adapt to it, and the notions that we develop as a result, including the mathematical ones, are in a sense taught us by the universe. That great bugbear of creationists,
evolution, has selected those of our ancestors (both human and not) whose behavior and thought are consistent with the workings of the universe. The usefulness of mathematics is thus not so mysterious after all.
There are, of course, many other views of mathematics (Platonism, formalism, et cetera), but whatever one's philosophy of the subject, the curricula cited above and others like them are a bit absurd,
even funny. In private schools they're none of our business. This is not so if aspects of these "creation math" curricula slip into the public schools, a prospect no doubt devoutly wished for by their proponents.
John Allen Paulos, a professor of mathematics at Temple University, is the author of the bestsellers "Innumeracy" and "A Mathematician Reads the Newspaper," as well as of the forthcoming (in
December) "Irreligion." His "Who's Counting?" column on ABCNews.com appears the first weekend of every month.
YinYang Bipolar Relativity: A Unifying Theory of Nature, Agents and Causality with Applications in Quantum Computing, Cognitive Informatics and Life Sciences, by Wen-Ran Zhang (IGI Global)
A few years away from the centennial celebration of general relativity, the author of this monograph feels blessed for having the opportunity to present YinYang Bipolar Relativity to readers of the
world. It seems surreal but, hopefully, the book can serve three purposes: (1) to add a piece of firework to the centennial celebration; (2) to introduce a deeper theory that transcends spacetime;
(3) to reveal the ubiquitous effects of quantum entanglement in simple, logically comprehendible terms. Certainly, whether it is indeed an applicable deeper theory or just a piece of firework is
ultimately up to the readers to make a judgment. As pointed out by Einstein: “Experience remains, of course, the sole criterion of the physical utility of a mathematical construction. But the
creative principle resides in mathematics.” (Einstein 1934)
In this book we refer to relativity theories defined in spacetime as spacetime relativity. Thus, all previous relativity theories by Galileo, Newton, Lorenz, and Einstein are classified as spacetime
relativity. This terminological treatment is for distinguishing YinYang bipolar geometry from spacetime geometry.
Regarding judgment, believe it or not, in the world-wide scientific community there may be more Chinese who emotionally resent the word “YinYang” due to misinformation or misunderstanding than
Westerners who scientifically oppose the YinYang cosmology. This may sound ironic but is actually a historical phenomenon with socioeconomic reasons. First, most modern day Chinese want China to be
integrated into the modern world and don’t care much about YinYang, deemed an unscientific concept of the old school. Secondly, some overseas Chinese are concerned that the word “YinYang” might
offend Western colleagues.
Subsequently, while the word “YinYang” has appeared in numerous Western publications spanning almost the whole spectrum of arts and sciences including but not limited to the prestigious journals
Science, Nature, and Cell, some Chinese scholars including some researchers in traditional Chinese medicine (TCM) tend to shun YinYang. For instance, a few years ago a well-established Chinese
American friend strongly advised (or demanded) the author to drop the word “YinYang” from titles of future submissions to avoid “hurting the others’ feelings.”
Western scholars, on the other hand, are free from carrying the above historical or socioeconomic baggage and are curious about YinYang. While many Westerners regard "YinYang" objectively as a
philosophical word related to nature, society, and TCM, some Western scientists expect YinYang to play a critical or even unifying role in modern science. Here are a few examples:
1. Regarding the “hurting the others’ feelings” matter, the author consulted a few “Westerner” colleagues and was given exactly the opposite advice: YinYang symbolizes the two energies of dynamic
equilibrium, harmony, and complementarity; bipolarity without YinYang is often used in the West to indicate disorder, chaos, and dichotomy.
2. Legendary German mathematician Leibniz – co-founder of calculus – invented the modern binary numeral system in the 17th century and attributed his invention to YinYang hexagrams recorded in the
oldest Chinese Book of Change – I Ching (Leibniz 1703) (Karcher 2002).
3. Legendary Danish physicist Niels Bohr, father figure of quantum mechanics besides Einstein, brought YinYang into quantum theory for his particle-wave complementarity principle. When he was
awarded the Order of the Elephant by the Danish government in 1947, he designed his own coat of arms which featured in the center a YinYang logo (or Taiji symbol) with the Latin motto “contraria
sunt complementa” or “opposites are complementary.”
4. Following Einstein’s lead that history and philosophy provides the context for science and should be a significant part of physics education (Smolin 2006 p310-311), a group of renowned scientists
and linguists in North America noticed that different philosophies and cosmologies could result in different cultures and linguistic terms which in turn could make a major difference in the
interpretation and understanding of space, time, and the quantum world (Alford 1993). Specifically, the word “YinYang” is deemed a most suitable noun for characterizing quantum interaction. As
stated by linguist Alford (Alford 1993), YinYang “represents a higher level of formal operations” “which lies beyond normal Western Indo-European development.”
5. A widely referenced genetic agent (protein) discovered at Harvard Medical School is named YinYang 1 (YY1) (Shi et al. 1991) due to its ubiquitous repressor-activator (YinYang) functionalities in
gene expression regulation in all cell types of living species (Jacobsen & Skalnik 1999).
6. A YinYang Pavilion created by American artist Dan Graham is dedicated to MIT and housed in Simmons Hall on the MIT campus (MIT News 2004).
7. A New York Times science report (Overbye 2006) described a subatomic particle discovered at the Fermi National Accelerator Laboratory as a “YinYang dance” that can change polarity three trillion
times per second (Fermilab 2006).
While Western science and media don’t seem to have problem with the word “YinYang”, the word is, nevertheless, largely mysterious, albeit extremely pervasive. Its pervasive and mysterious nature can
be characterized with a famous quote from Einstein: “After a certain high level of technical skill is achieved, science and art tend to coalesce in esthetics, plasticity, and form. The greatest
scientists are always artists as well."
Evidently, a resolution to the “science and art” YinYang mystery bears great significance and has become imperative for the advancement of science and humanity. Unfortunately, such a resolution has
been deemed scientifically impossible by many. This monograph is intended to accomplish the mission impossible based on the following observations and assertions:
1. The “science and art” YinYang paradox is similar to particle-wave quantum duality in Niels Bohr’s complementarity principle. Unfortunately, quantum mechanics has so far only recognized YinYang
complementarity but failed to identify the essence of YinYang bipolarity. Without bipolarity, any complementarity is less fundamental due to the missing “opposites.” In one word, the negative and
positive poles such as action-reaction forces and particle-antiparticle pairs are the most fundamental opposites of Mother Nature but science-art, particle-wave, and truth-falsity are not exactly
YinYang bipolar opposites.
2. Resolving the YinYang mystery is essentially the same as logically defining Aristotle’s causality principle, axiomatizing all of physics (Hilbert 1901), resolving the EPR (Einstein, Podolsky &
Rosen 1935), or providing a logical foundation for the grand unification of general relativity and quantum mechanics.
3. The “higher level” “post-formal” YinYang operation entails a philosophically different logical foundation that does indeed lie “beyond normal Western Indo-European development” and such a logical
foundation is attainable in formal mathematical terms.
The objective of this monograph is to present YinYang bipolar relativity as an equilibrium-based unifying computing paradigm with a minimal but most general axiomatization of physics that (1)
logically defines causality; (2) logically unifies gravity with quantum theory; (3) brings relativity and quantum entanglement to the real-world of microscopic and macroscopic agent interaction,
coordination, decision, and global regulation in physical, social, and life sciences especially in quantum computing and exploratory knowledge discovery.
The intended audience of the book includes, but is not limited to,
1. Students, professors, and researchers in mathematics, computer science, artificial intelligence, information science, information technology, data mining and knowledge discovery. These readers
may find bipolar mathematical abstraction, bipolar sets, bipolar dynamic logic, bipolar quantum linear algebra, bipolar quantum cellular automata and their applications useful in their fields of
teaching, learning, and research.
2. Students, professors, and researchers in quantum computing, physical sciences, nanotechnology, and engineering. These readers may find both of the theoretical and application aspects useful in
their field of teaching, learning, and research. It is expected that quantum computing will be a major interest to these readers.
3. Students, professors, and researchers in bioinformatics, computational biology, genomics, bioeconomics, psychiatry, neuroscience, traditional Chinese medicine, and biomedical engineering. These
readers may use the book material as an alternative holistic approach to problem solving in their fields of teaching, learning, research, and development.
4. Students, professors, and researchers in socioeconomics, bioeconomics, cognitive science, and decision science. These readers may find the mathematical tools and the quantum computing view useful
in their fields of teaching, learning, research, and development.
5. Industrial researcher/developers in all fields who are interested in equilibrium-based modeling, analysis, and exploratory knowledge discovery in quantum computing, cognitive informatics, and
life sciences. These readers may actually apply the theory of bipolar relativity for dealing with uncertainties and resolving unsolved problems in uncharted territories.
Limited logical and mathematical proofs of related theorems are included in Chapters 3-8. The proofs are for the convenience of logicians and mathematicians. They can be skipped by non-mathematical
readers who are only interested in using the mathematical results for practical applications.
While YinYang bipolar relativity can trace its philosophical origin back to ancient Chinese YinYang cosmology, which claimed that everything has two sides or two opposite but reciprocal poles or
energies, the formal theory presented in this monograph is not the result of experimentation or elaboration of ancient Chinese YinYang but the result of free invention, guided by the following considerations:
1. According to Einstein logical axiomatization of physics is possible: “Physics constitutes a logical system of thought which is in a state of evolution, whose basis (principles) cannot be
distilled, as it were, from experience by an inductive method, but can only be arrived at by free invention.” (Einstein 1916).
2. According to Einstein: “Evolution is proceeding in the direction of increasing simplicity of the logical basis (principles).” “We must always be ready to change these notions – that is to say,
the axiomatic basis of physics – in order to do justice to perceived facts in the most perfect way logically.” (Einstein 1916)
3. According to Einstein: “… pure thought can grasp reality, as the ancients dreamed” and “nature is the realization of the simplest conceivable mathematical ideas.” (Einstein 1934)
4. According to Einstein the grand unification of general relativity and quantum mechanics needs a new logical foundation: “For the time being we have to admit that we do not possess any general
theoretical basis for physics which can be regarded as its logical foundation.” (Einstein 1940)
5. According to Einstein: “Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute. That’s relativity.”
In the last quote, Einstein used sorrow and joy to hint at the two sides of YinYang in general. Symbolically, the two sides can be paired up as a bipolar variable and generalized to action-reaction
forces denoted (-f, +f), negative-positive electromagnetic charges denoted (-q, +q), matter-antimatter particles (-p, +p) or the equilibrium-based bipolar variable (e-, e+) in a YinYang bipolar
dynamic logic (BDL) for the theory of YinYang bipolar relativity (Zhang 2009a,b,c,d).
While space and time are not symmetrical to each other, not quantum entangled with each other, and not bipolar interactive, the concept of YinYang bipolarity is symmetrical and applicable in both
microscopic and macroscopic worlds of physical and social sciences for characterizing agent interaction and bipolar quantum entanglement. Arguably, if space is expanding, spacetime has to be caused
by something more fundamental; if YinYang bipolarity can survive a black hole due to particle-antiparticle emission or Hawking radiation, the logical foundation of physics has to be bipolar in
nature; if particle-antiparticle pairs and nature’s basic action-reaction forces are the most fundamental components of our universe, YinYang bipolar relativity has to be more fundamental than
spacetime relativity. These arguments provide a basis for the transcending and unifying property of YinYang bipolar relativity beyond spacetime geometry.
Historically, even though YinYang has been the philosophical basis of the actual practice of TCM for thousands of years in China, it did not enter the arena of modern science until recent
decades. It is living proof of Einstein’s assertion that “Physics constitutes a logical system of thought which is in a state of evolution, whose basis (principles) cannot be distilled, as it were,
from experience by an inductive method, but can only be arrived at by free invention.” (Einstein 1916).
Here are a few major modern developments in YinYang research:
1. Biological YinYang. The most noticeable result in this category is the discovery of the genetic regulator protein Yin Yang 1 (YY1) in 1991 at Harvard Medical School (Shi et al. 1991). YY1
exhibits ubiquitous repressor-activator functionalities in gene expression regulation in all types of cells of living species. The discovery of YY1 marks the formal entry of the ancient YinYang
into genomics – a core area of bioinformatics. Since then, YY1 has been widely referenced by the top research institutions in the US and the world.
2. Bayesian YinYang (BYY). BYY harmony learning (Xu 2007) has been widely cited and has become a well-established area in neural networks.
3. Binary or Boolean YinYang. Boolean YinYang (Zhang 1992; Kandel & Zhang 1998) follows Leibniz’s binary interpretation of YinYang. The binary interpretation provides a basis for all digital
technologies.
4. Bipolar YinYang. Bipolar YinYang consists of YinYang bipolar sets, bipolar dynamic logic, bipolar quantum linear algebra, YinYang-N-Element quantum cellular automata, bipolar quantum
entanglement, and the theory of YinYang bipolar relativity for applications in quantum computing, socioeconomics, and brain and life sciences (Zhang and coauthors, 1989-2009) (Zhang 1996-2010).
Bipolar YinYang follows the YinYang cosmology that claims everything in the universe including the universe itself has two opposite reciprocal poles or energies.
This book follows the direction of bipolar YinYang. However, it should be remarked that the above approaches to YinYang are interrelated and overlap with each other. The repression and
activation regulatory properties of Yin Yang 1 are bipolar in nature; YinYang equilibrium is essential in YinYang harmony; the two poles of YinYang are truth objects plus reciprocal bipolarity. From
a physical science perspective, (-,+) bipolarity and symmetry in particle physics can also be considered evidence that supports the YinYang bipolar cosmology. From a decision science perspective,
YinYang has been an influential philosophy in business management, socioeconomics, and international relations especially in Eastern countries. Noticeably, the national flag of South Korea is
featured with a YinYang logo.
Indeed, YinYang has entered every aspect of Western as well as Eastern societies. Due to its lack of a unique formal logical basis, however, YinYang theory has remained largely mysterious.
This book aims to fill this gap. Although the technical ideas have been partially reported in refereed journal and conference articles, they have never been systematically presented as a coherent
relativity theory in a monograph.
It is well-known that microscopic and macroscopic agents and agent interactions are essential in physics, socioeconomics, and life sciences. Unifying logical and mathematical axiomatization of agent
interaction in microscopic and macroscopic worlds including but not limited to quantum, molecular, genetic, and neurobiological worlds is needed for scientific discoveries and for the coordination
and global regulation of both non-autonomous and autonomous agents. Since agent interactions are governed by physical and social dynamics, the difficulty of axiomatizing agent interactions can be
traced back to Hilbert’s effort in axiomatizing physics, Aristotle’s causality principle, the concept of singularity, and bipolar equilibrium.
English mathematical physicist Roger Penrose described two mysteries of quantum entanglement (Penrose 2005, p. 591). The first mystery is identified as the phenomenon itself. The second one, according
to Penrose, is “How are we to come to terms with quantum entanglement and to make sense of it in terms of ideas that we can comprehend, so that we can manage to accept it as something that forms an
important part of the workings of actual universe? … The second mystery is somewhat complementary to the first. Since according to quantum mechanics, entanglement is such a ubiquitous phenomenon –
and we recall that the stupendous majority of quantum states are actually entangled ones – why is it something that we barely notice in our direct experience of the world? Why do these ubiquitous
effects of entanglement not confront us at every turn? I do not believe that this second mystery has received nearly the attention that it deserves, people’s puzzlement having been almost entirely
concentrated on the first.”
A major argument of this monograph is that equilibrium or non-equilibrium, as a physical state of any dynamic agent or universe at the system, molecular, genomic, particle, or subatomic level, forms
a philosophical chicken and egg paradox with the universe because no one knows exactly which one created the other in the very beginning. Since bipolar equilibrium (or non-equilibrium) is a generic
form of equilibrium (or non-equilibrium), any multidimensional model in spacetime geometry is not fundamental. It is postulated that the most fundamental property of the universe is YinYang
bipolarity. Based on this postulate, bipolar relativity is presented that extends YinYang cosmology from “Everything has two reciprocal poles” to a formal logical foundation for physical and social
sciences which claims that “Everything has two reciprocal poles and nature is the realization of YinYang bipolar relativity or bipolar quantum entanglement.”
The main idea of the book starts with the paradox “logical axiomatization for illogical physics” (LAFIP) (Zhang 2009a) on Hilbert’s Problem 6. It is observed that without bipolarity the bivalent
truth values 0 for false and 1 for true are incapable of carrying any shred of direct physical syntax and semantics, let alone illogical physical phenomena such as chaos, particle-wave duality,
bipolar disorder, equilibrium, non-equilibrium, and quantum entanglement. Therefore, truth-based (unipolar) mathematical abstraction as a basis for positivist thinking cannot avoid the LAFIP paradox.
It is suggested that this is the fundamental reason why there is so far no truth-based, logically definable causality, no truth-based axiomatization of all physics, no decisive battleground in the
quest for quantum gravity, and no logic for particle-wave duality, bipolar disorder, economic depression, big bang, black hole, and quantum entanglement.
Furthermore, it is pointed out that, while no physicist would say “electron is isomorphic to positron”, it is widely considered in logic and mathematics that “-1 is isomorphic to +1” and (-,+)
bipolar symmetry, equilibrium, or non-equilibrium is not observable. If we check the history of negative numbers, we find that the ancient Chinese and Indians started to use negative numbers
thousands of years ago, but European mathematicians resisted the concept of negative numbers until the 18th century (Temple 1986, p. 141) (Bourbaki 1998) (Martinez 2006).
Regardless of the great achievement of Western science and technology, it is undoubtedly necessary to bridge the gap between the Western positivist thinking and the Eastern balanced thinking for
solving unsolved scientific problems. As “passion for symmetry” can “permeate the Standard Model of elementary particle physics” and can unify “the smallest building blocks of all matter and three of
nature's four forces in one single theory” (The Royal Swedish Academy of Sciences 2008), it is not only reasonable but also inevitable to explore the bipolar equilibrium-based computing paradigm
(Note: Equilibrium-based is to equilibrium and non-equilibrium as truth-based is to truth and falsity with fundamentally different syntax and semantics).
YinYang bipolar relativity is intended to be a logical unification of general relativity and quantum mechanics. The monograph can be considered the first step to address the gigantic topic with
real-world applications in both natural and social sciences focused on quantum computing and agent interaction in socioeconomics, cognitive informatics and life sciences. Subjects opened in the book
can be further addressed in succeeding volumes in depth.
The main body of the book starts with a new set-theoretic logical foundation. To avoid LAFIP, bipolar set theory is introduced with a holistic equilibrium-based approach to mathematical abstraction.
Bipolar sets lead to YinYang bipolar dynamic logic (BDL). A key element of BDL is bipolar universal modus ponens (BUMP) that provides, for the first time, logically definable causality. It is shown
that BDL is a non-linear bipolar dynamic fusion of Boolean logic and quantum entanglement. The non-linearity, however, does not compromise the basic law of excluded middle (LEM) and bipolar
computability. Soundness and completeness of a bipolar axiomatization are asserted. Bipolar sets and BDL are extended to bipolar fuzzy sets, bipolar dynamic fuzzy logic (BDFL), and equilibrium relations.
With the emergence of space, time, and bipolar agents, a completely background independent theory of bipolar relativity, the central theme of the book, is formally introduced based on bipolar sets
and BDL. It is shown that, with bipolar agents and bipolar relativity, causality is logically definable; a real-world bipolar string theory is scalable, and an equilibrium-based minimal but most
general axiomatization of physics, socioeconomics, and life sciences, as a partial solution to Hilbert’s Problem 6, is logically provable.
It is shown that YinYang bipolar relativity is rich in predictions. Predictions are presented, some of which are expected to be falsifiable in the foreseeable future. In particular, it is shown that
bipolar relativity provides the unified logical form for both gravity and quantum entanglement. It is conjectured that all forces in the universe are bipolar quantum entanglement in nature in large
or small scales and in symmetrical or asymmetrical forms; the speed of gravity is not necessarily limited by the speed of light as it could well be limited by the speed of quantum entanglement.
Due to bipolar quantum entanglement, YinYang bipolar relativity leads to a logically complete theory for quantum computing with digital compatibility. The bipolar quantum computing paradigm is ideal
for modeling non-linear bipolar dynamic oscillation and interaction such as non-local connection and particle-wave duality in quantum mechanics as well as self-negation/self-assertion abilities in
cognitive informatics and competition-cooperation in socioeconomics. In particular, it is shown that bipolar quantum entanglement makes quantum teleportation theoretically possible without
conventional communication between Bob and Alice. Furthermore, it is shown that bipolar quantum-digital compatibility and bitwise cryptography have the potential to make obsolete both prime number
based encryption and quantum factorization.
Based on the logical foundation, limited mathematical construction is presented. Specifically, bipolar quantum linear algebra (BQLA) and YinYang-N-Element bipolar quantum cellular automata (BQCA) are
introduced with illustrations in biosystem simulation and equilibrium-based global regulation. It is shown that the dimensional view, bipolar logical view, and YinYang-N-Element BQCA view are
logically consistent. Therefore, bipolar set theory, bipolar dynamic logic, BQLA, bipolar agents, bipolar causality, and BQCA are all unified under YinYang bipolar relativity.
It is contended that YinYang bipolar relativity is an Eastern road toward quantum gravity. It is argued that it would be hard to imagine that quantum gravity, as the grand unification of gravity and
quantum mechanics, would not be the governing theory for all sciences. This argument leads to five sub-theories of quantum gravity: physical quantum gravity, logical quantum gravity, social quantum
gravity, biological quantum gravity, and mental quantum gravity that form a Q5 quantum computing paradigm. The Q5 paradigm is then used as a vehicle to illustrate the ubiquitous effects of bipolar
quantum entanglement that confronts us at every turn of our lives in comprehensible logical terms.
Mathematically, the theory of YinYang bipolar relativity as a pure invention is not derived from general relativity or quantum theory. Instead, it presents a fundamentally different approach to
quantum gravity. As a first step, the monograph is focused on the logical level of the theory and its applications in physical, social, brain, biological, and computing sciences with limited
mathematical or algebraic extensions. Thus, equilibrium-based bipolar logical unification of gravity and quantum mechanics is within the scope of the book; the quantization of YinYang bipolar
relativity and the mathematical unification of Einstein’s equations of general relativity and that of quantum mechanics have to be left for future research efforts because “For the time being we have
to admit that we do not possess any general theoretical basis for physics which can be regarded as its logical foundation” (Einstein 1940).
Theoretically, YinYang bipolar relativity presents an open-world and open-ended approach to science that is not “a theory of everything.” In this approach, the author does not attempt to define the
smallest fundamental element such as strings in string theory. Instead, it is postulated that YinYang bipolarity is the most fundamental property of the universe based on well-established
observations in physical and social sciences. With the basic hypothesis, equilibrium-based logical constructions are developed with a number of predictions for experimental verification or
falsification. This approach actually follows the principle of exploratory scientific knowledge discovery.
Practically, YinYang bipolar relativity is expected to be applicable wherever bipolar equilibrium or non-equilibrium is central (e.g. Zhang 2003a,b; Zhang 2006; Zhang, Pandurangi & Peace 2007; Zhang
et al. 2010). As a quantum logic theory it is recoverable to Boolean logic and, therefore, is computational. As a relativity theory, its major role is to provide predictions and interpretations about
nature, agents, and causality. Since it is not “a theory of everything”, it does not claim universal applicability. Simulated application examples are presented in quantum computing, cognitive
informatics, and life sciences to illustrate the utility of the theory. The examples, however, are not intended to be systematic and comprehensive applications but only sufficient illustrations.
While the theory is logically proven sound, predictions or interpretations made in the book can be either verified or falsified in the future, as usual.
References to others in this monograph are focused on important relevant works related to the logical foundation of this work. Since the formal system presented in the book is a free invention, not a
philosophical elaboration of YinYang or an extension of other quantum gravity theories, references to YinYang literature are limited to the well-known basic concepts related to the logical foundation
and references to relativity and quantum theory are limited to the basic concepts of spacetime geometry, particle physics, quantum entanglement, and teleportation. Selected references are mostly
published scientific works in peer reviewed books, journals, or conference proceedings. Non-peer reviewed Web articles cited are strictly limited to well-known historical facts or philosophical
non-technical viewpoints. This treatment ensures that all technical references are from peer-reviewed scientific sources, while undisputed well-known historical facts available online and freely
expressed, non-peer-reviewed philosophical viewpoints published on the Web by related experts can be taken into account for readers’ convenience.
To the author’s knowledge, this is the first monograph of its kind to introduce logically definable causality into physical and social sciences and to make the ubiquitous effects of quantum
entanglement logically comprehensible. While Leibniz’s binary YinYang provided a technological basis for digital technologies, YinYang bipolar relativity is expected to bring quantum gravity into
logical, physical, social, biological, and mental worlds for quantum computing.
The significance of YinYang bipolar relativity lies in its four equilibrium-based logical unifications: (1) the unification of unipolar positivist truth with bipolar holistic truth, (2) the
unification of classical logic with quantum logic, (3) the unification of quantum entanglement with microscopic and macroscopic agent interaction in simple logical terms, and (4) the unification of
general relativity with quantum mechanics under bipolar equilibrium and symmetry. Despite its limited mathematical depth, it is shown that YinYang bipolar relativity constitutes a deeper theory
beyond spacetime geometry tailored for open-world open-ended exploratory knowledge discovery in all scientific fields where equilibrium and symmetry are central.
The book consists of twelve chapters which can be roughly divided into the following five sections:
Section I. Introduction and Background. This section consists of Chapter 1 and Chapter 2. Chapter 1 is an introduction; Chapter 2 is a background review.
Section II. Set Theoretic Logical Foundation. This section consists of Chapters 3-5. This section lays out the set-theoretic logical foundation for bipolar relativity including YinYang bipolar sets,
bipolar dynamic logic, bipolar quantum lattices, bipolar dynamic fuzzy logic, bipolar fuzzy sets and equilibrium relations.
Section III. YinYang Bipolar Relativity and Quantum Computing. This section consists of Chapters 6-8 which are focused on the central theme of the book. Chapter 6 presents the theory of agents,
causality, and YinYang bipolar relativity with a number of predictions. Chapter 7 presents bipolar quantum entanglement for quantum computing. Chapter 8 presents YinYang bipolar quantum linear
algebra (BQLA), bipolar quantum cellular automata (BQCA), and a unifying view of YinYang bipolar relativity in logical, geometrical, algebraic, and physical terms.
Section IV. Applications. This section consists of Chapters 9-11. Chapter 9 is focused on biosystem simulation with BQLA and BQCA. Chapter 10 is focused on bipolar computational neuroscience and
psychiatry. Chapter 11 is focused on bipolar cognitive mapping and decision analysis.
Section V. Discussions and Conclusions. This section consists of the last chapter (Chapter 12) in which discussions and conclusions are presented.
Chapter 1. Introduction – Beyond Spacetime
This chapter serves as an introduction to bring readers from spacetime relativity to YinYang bipolar relativity. Einstein’s assertions regarding physics, logic, and theoretical invention are reviewed
and his hint of YinYang bipolar relativity is identified. The limitations of general relativity and quantum mechanics are briefly discussed. It is concluded that logically definable causality,
axiomatization of physics, axiomatization of agent interaction, and the grand unification of general relativity and quantum theory are essentially the same problem at the fundamental level. A paradox
on Hilbert’s Problem 6 – Logical Axiomatization for Illogical Physics (LAFIP) – is introduced. Bipolarity is postulated as the most fundamental property of nature transcending spacetime. The
theoretical basis of agents, causality and YinYang bipolar relativity is highlighted and distinguished from established theories. The main ideas of the book are outlined.
Chapter 2. Background – Quest for Definable Causality
This chapter presents a review on the quest for logically definable causality. The limitation of observability and truth-based cognition is discussed. The student-teacher philosophical dispute
between Aristotle and Plato is revisited. Aristotle’s causality principle, David Hume’s challenge, Lotfi Zadeh’s “Causality Is Undefinable” conclusion, and Judea Pearl’s probabilistic definability
are reviewed. Niels Bohr’s particle-wave complementarity principle, David Bohm’s causal interpretation of quantum mechanics, and Sorkin’s causal set program are discussed. Cognitive-map-based causal
reasoning is briefly visited. YinYang bipolar logic and bipolar causality are previewed. Social construction and destruction in science are examined. It is asserted that, in order to continue its
role as the doctrine of science, the logical definability of Aristotle’s causality principle has become an ultimate dilemma of science. It is concluded that, in order to resolve the dilemma, a formal
system with logically definable causality has to be developed, which has to be logical, physical, relativistic, and quantum in nature. The formal system has to be applicable in the microscopic world
as well as in the macroscopic world, in the physical world as well as in the social world, in cognitive informatics as well as in life sciences, and, above all, it has to reveal the ubiquitous
effects of quantum entanglement in simple comprehensible terms.
Chapter 3. YinYang Bipolar Sets and Bipolar Dynamic Logic
In this chapter an equilibrium-based set-theoretic approach to mathematical abstraction and axiomatization is presented for resolving the LAFIP paradox (Ch. 1) and for enabling logically definable
causality (Ch. 2). Bipolar set theory is formally presented, which leads to YinYang bipolar dynamic logic (BDL). BDL in zeroth-order, 1st-order, and modal forms are presented with four pairs of
dynamic DeMorgan’s laws and a bipolar universal modus ponens (BUMP). BUMP as a key element of BDL enables logically definable causality and quantum computing. Soundness and completeness of a bipolar
axiomatization are asserted; computability is proved; computational complexity is analyzed. BDL can be considered a non-linear bipolar dynamic generalization of Boolean logic plus quantum
entanglement. Despite its non-linear bipolar dynamic quantum property, it does not compromise the basic law of excluded middle. The recovery of BDL to Boolean logic is axiomatically proved through
depolarization and the computability of BDL is proved. A redress on the ancient paradox of the liar is presented with a few observations on Gödel’s incompleteness theorem. Based on BDL, bipolar
relations, bipolar transitivity, and equilibrium relations are introduced. It is shown that a bipolar equilibrium relation can be a non-linear bipolar fusion of many equivalence relations. Thus, BDL
provides a logical basis for YinYang bipolar relativity – an equilibrium-based axiomatization of social and physical sciences.
This chapter is based on ideas presented in (Zhang & Zhang 2003, 2004) (Zhang 2003a,b; 2005a,b; 2007; 2009a,b,c,d). Early works of this line of research can be found in (Zhang, Chen & Bezdek 1989)
(Zhang et al. 1992) (Zhang, Wang & King 1994).
Chapter 4. Bipolar Quantum Lattices and Dynamic Triangular Norms
Bipolar quantum lattice (BQL) and dynamic triangular norms (t-norms) are presented in this chapter. BQLs are defined as special types of bipolar partially ordered sets or posets. It is shown that
bipolar quantum entanglement is definable on BQLs. With the addition of fuzziness, BDL is extended to a bipolar dynamic fuzzy logic (BDFL). The essential part of BDFL consists of bipolar dynamic
triangular norms (t-norms) and their co-norms which extend their truth-based counterparts from a static unipolar fuzzy lattice to a bipolar dynamic quantum lattice. BDFL has the advantage in dealing
with uncertainties in bipolar dynamic environments. With bipolar quantum lattices (crisp or fuzzy), the concepts of bipolar symmetry and quasi-symmetry are defined which form a basis toward a
logically complete quantum theory. The concepts of strict bipolarity, linearity, and integrity of BQLs are introduced. A recovery theorem is presented for the depolarization of any strict BQL to
Boolean logic. The recovery theorem reinforces the computability of BDL or BDFL.
This chapter is based on the ideas presented in (Zhang & Zhang 2004) (Zhang 1996, 1998, 2003, 2005a,b, 2006a,b, 2007, 2009b). Early works of this line of research can be found in (Zhang, Chen &
Bezdek 1989) (Zhang et al. 1992) (Zhang, Wang & King 1994).
Chapter 5. Bipolar Fuzzy Sets and Equilibrium Relations
Based on bipolar sets and quantum lattices, the concepts of bipolar fuzzy sets and equilibrium relations are presented in this chapter for bipolar fuzzy clustering, coordination, and global
regulation. Related theorems are proved. Simulated application examples in multiagent macroeconomics are illustrated. Bipolar fuzzy sets and equilibrium relations provide a theoretical basis for
cognitive-map-based bipolar decision, coordination, and global regulation.
This chapter is based on the ideas presented in (Zhang 2003a,b, 2005a,b, 2006a). Early works of this line of research can be found in (Zhang, Chen & Bezdek 1989) (Zhang et al. 1992) (Zhang, Wang &
King 1994).
Chapter 6. Agents, Causality, and YinYang Bipolar Relativity
This chapter presents the theory of bipolar relativity – a central theme of this book. The concepts of YinYang bipolar agents, bipolar adaptivity, bipolar causality, bipolar strings, bipolar
geometry, and bipolar relativity are logically defined. The unifying property of bipolar relativity is examined. Space and time emergence from YinYang bipolar geometry is proposed. Bipolar relativity
provides a number of predictions. Some of them are domain dependent and some are domain independent. In particular, it is conjectured that spacetime relativity, singularity, gravitation,
electromagnetism, quantum mechanics, bioinformatics, neurodynamics, and socioeconomics are different phenomena of YinYang bipolar relativity; microscopic and macroscopic agent interactions in
physics, socioeconomics, and life science are directly or indirectly caused by bipolar causality and regulated by bipolar relativity; all physical, social, mental, and biological action-reaction
forces are fundamentally different forms of bipolar quantum entanglement in large or small scales; gravity is not necessarily limited by the speed of light; graviton does not necessarily exist.
This chapter is based on the ideas presented in (Zhang 2009a,b,c,d; Zhang 2010).
Chapter 7. YinYang Bipolar Quantum Entanglement – Toward a Logically Complete Theory for Quantum Computing and Communication
YinYang bipolar relativity leads to an equilibrium-based logically complete quantum theory which is presented and discussed in this chapter. It is shown that bipolar quantum entanglement and bipolar
quantum computing bring bipolar relativity deeper into microscopic worlds. The concepts of bipolar qubit and YinYang bipolar complementarity are proposed and compared with Niels Bohr’s particle-wave
complementarity. Bipolar qubit box is compared with Schrödinger’s cat box. Since bipolar quantum entanglement is fundamentally different from classical quantum theory (which is referred to as
unipolar quantum theory in this book), the new approach provides bipolar quantum computing with the unique features: (1) it forms a key for equilibrium-based quantum controllability and
quantum-digital compatibility; (2) it makes bipolar quantum teleportation theoretically possible for the first time without conventional communication between Alice and Bob; (3) it enables bitwise
encryption without a large prime number that points to a different research direction of cryptography aimed at making prime-number-based cryptography and quantum factoring algorithm both obsolete;
(4) it shows potential to bring quantum computing and communication closer to deterministic reality; (5) it leads to a unifying Q5 paradigm aimed at revealing the ubiquitous effects of bipolar
quantum entanglement with the sub-theories of logical, physical, mental, social, and biological quantum gravities and quantum computing.
This chapter is based on ideas presented in (Zhang 2003a, 2005a, Zhang 2009a,b,c,d; 2010).
Chapter 8. YinYang Bipolar Quantum Linear Algebra (BQLA) and Bipolar Quantum Cellular Automata (BQCA)
This chapter brings bipolar relativity from the logical and relational levels to the algebraic level. Following a brief review on traditional cellular automata and linear algebra, bipolar quantum
linear algebra (BQLA) and bipolar quantum cellular automata (BQCA) are presented. Three families of YinYang-N-Element bipolar cellular networks (BCNs) are developed, compared, and analyzed; YinYang
bipolar dynamic equations are derived for YinYang-N-Element BQCA. Global (system level) and local (element level) energy equilibrium and non-equilibrium conditions are established and axiomatically
proved for all three families of cellular structures that lead to the concept of collective bipolar equilibrium-based adaptivity. The unifying nature of bipolar relativity in the context of BQCA is
illustrated. The background independence nature of YinYang bipolar geometry is demonstrated with BQLA and BQCA. Under the unifying theory, it is shown that the bipolar dimensional view, cellular
view, and bipolar interactive view are logically consistent. The algebraic trajectories of bipolar agents in YinYang bipolar geometry are illustrated with simulations. Bipolar cellular processes in
cosmology, brain and life sciences are hypothesized and discussed.
This chapter is based on earlier chapters and the ideas presented in (Zhang 1996, 2005a, 2006a, Zhang 2009a,b,c,d, 2010; Zhang & Chen 2009; Zhang et al. 2009).
Chapter 9. Bipolar Quantum Bioeconomics for Biosystem Simulation and Regulation
As a continuation of Chapter 8, this chapter presents a theory of bipolar quantum bioeconomics (BQBE) with a focus on computer simulation and visualization of equilibrium, non-equilibrium, and
oscillatory properties of YinYang-N-Element cellular network models for growing and degenerating biological processes. From a modern bioinformatics perspective, it provides a scientific basis for
simulation and regulation in genomics, bioeconomics, metabolism, computational biology, aging, artificial intelligence, and biomedical engineering. It is also expected to serve as a mathematical
basis for biosystem inspired socioeconomics, market analysis, business decision support, multiagent coordination and global regulation. From a holistic natural medicine perspective, diagnostic
decision support in TCM is illustrated with the YinYang-5-Element bipolar cellular network; the potential of YinYang-N-Element BQCA in qigong, Chinese meridian system, and innate immunology is
briefly discussed.
This chapter is based on earlier chapters and the ideas presented in (Zhang 1996, 2005a, 2006a, Zhang 2009a,b,c,d, 2010; Zhang & Chen 2009; Zhang et al. 2009).
Chapter 10. MentalSquares - An Equilibrium-Based Bipolar Support Vector Machine for Computational Psychiatry and Neurobiological Data Mining
While earlier chapters have focused on the logical, physical, and biological aspects of the Q5 paradigm, this chapter shifts focus to the mental aspect. MentalSquares (MSQs), an equilibrium-based
dimensional approach, is presented for pattern classification and diagnostic analysis of bipolar disorders. While a support vector machine is defined in Hilbert space, MSQs can be considered a generic
dimensional approach to support vector machinery for modeling mental balance and imbalance of two opposite but bipolar interactive poles. A MSQ is dimensional because its two opposite poles form a
2-dimensional background independent YinYang bipolar geometry from which a third dimension – equilibrium or non-equilibrium – is transcendental with mental fusion or mental separation measures. It is
generic because any multidimensional mental equilibrium or non-equilibrium can be deconstructed into one or more bipolar equilibria which can then be represented as a mental square. Different MSQs
are illustrated for bipolar disorder (BPD) classification and diagnostic analysis based on the concept of mental fusion and separation. It is shown that MSQs extend the traditional categorical
standard classification of BPDs to a non-linear dynamic logical model while preserving all the properties of the standard; it supports both classification and visualization with qualitative and
quantitative features; it serves as a scalable generic dimensional model in computational neuroscience for broader scientific discoveries; it has the cognitive simplicity for clinical and computer
operability. From a broader perspective, the agent-oriented nature of MSQs provides a basis for multiagent data mining (Zhang & Zhang 2004) and cognitive informatics of brain and behaviors (Wang 2004).
This chapter is based on earlier chapters and the ideas presented in (Zhang 2007; Zhang, Pandurangi & Peace 2007; Zhang & Peace 2007).
Chapter 11. Bipolar Cognitive Mapping and Decision Analysis – A Bridge from Bioeconomics to Socioeconomics
The focus of this chapter is on cognitive mapping and cognitive-map-based (CM-based) decision analysis. This chapter builds a bridge from mental quantum gravity to social quantum gravity. It is shown
that bipolar relativity, as an equilibrium-based unification of nature, agent and causality, is naturally the unification of quantum bioeconomics, brain dynamics, and socioeconomics as well.
Simulated examples are used to illustrate the unification with cognitive mapping and CM-based multiagent decision, coordination, and global regulation in international relations.
This chapter is based on earlier chapters and the ideas presented in (Zhang, Chen & Bezdek 1989) (Zhang et al. 1992) (Zhang, Wang & King 1994) (Zhang & Zhang 2004) (Zhang 1996, 1998, 2003, 2005a,b).
Chapter 12. Causality Is Logically Definable – An Eastern Road toward Quantum Gravity
This is the conclusion chapter. Bertrand Russell’s view on logic and mathematics is briefly reviewed. An enjoyable debate on bipolarity and isomorphism is presented. Some historical facts related to
YinYang are discussed. Distinctions are drawn between BDL and established logical paradigms including Boolean logic, fuzzy logic, multiple-valued logic, truth-based dynamic logic, intuitionist
logic, paraconsistent logic, and other systems. Some major comments from critics on related works are answered. A list of major research topics is enumerated. The ubiquitous effects of YinYang
bipolar quantum entanglement are summarized. Limitations of this work are discussed. Some conclusions are drawn.
Alford, D.M. (1993). A report on the Fetzer Institute-sponsored dialogues between Western and indigenous scientists. A presentation for the Annual Spring Meeting of the Society for the Anthropology of Consciousness, April 11, 1993.
Bourbaki, N. (1998). Elements of the history of Mathematics. Berlin, Heidelberg, New York: Springer-Verlag.
Einstein, A. (1916). The foundation of the general theory of relativity. Originally published in Annalen der Physik (1916), Collected Papers of Albert Einstein, English Translation of Selected Texts,
Translated by A. Engel, Vol. 6, (pp 146-200).
Einstein, A. (1934). On the method of theoretical Physics. The Herbert Spencer lecture, delivered at Oxford, June 10, 1933. Published in Mein Weltbild, Amsterdam: Querido Verlag.
Einstein, A. (1940). Considerations concerning the fundamentals of theoretical Physics. Science, 91(2369), 487-491.
Einstein, A., Podolsky, B. & Rosen, N. (1935). Can quantum-mechanical description of physical reality be considered complete? Physical Review, 47, 777.
Hilbert, D. (1901). Mathematical problems. Bulletin of the American Mathematical Society, 8, 437-479.
Jacobsen, B.M. & Skalnik, D.G. (1999). YY1 binds five cis-elements and trans-activates the myeloid cell-restricted gp91phox promoter. Journal of Biological Chemistry, 274, 29984-29993.
Kandel, A. & Zhang, Y. (1998). Intrinsic mechanisms and application principles of general fuzzy logic through Yin-Yang analysis. Information Science, 106(1), 87-104.
Karcher, S. (2002). I Ching: The classic Chinese oracle of change: The first complete translation with concordance. London: Vega Books.
Leibniz, G. (1703). Explication de l'Arithmétique Binaire (Explanation of Binary Arithmetic); Gerhardt, Mathematical Writings VII.223.
Martinez, A.A. (2006). Negative math: How mathematical rules can be positively bent. Princeton University Press.
Penrose, R. (2005). The road to reality: A complete guide to the laws of the universe. New York: Alfred A. Knopf.
Shi, Y., Seto, E., Chang, L.-S. & Shenk, T. (1991). Transcriptional repression by YY1, a human GLI-Kruppel-related protein, and relief of repression by adenovirus E1A protein. Cell, 67(2), 377-388.
Smolin, L. (2000). Three roads to quantum gravity. Basic Books.
Smolin, L. (2006). The trouble with physics: The rise of string theory, the fall of a science, and what comes next? New York: Houghton Mifflin Harcourt.
Temple, R. (1986). The genius of China: 3,000 years of science, discovery, and invention. New York: Simon and Schuster.
Wang, Y. (2004). On cognitive informatics. Brain and Mind, 4(2), 151-167.
Woit, P. (2006). Not even wrong: The failure of string theory and the search for unity in physical law. New York: Basic Books.
Zadeh, L.A. (2001). Causality is undefinable–toward a theory of hierarchical definability. Proceedings of FUZZ-IEEE, (pp. 67-68).
Zhang, W.-R., Chen, S. & Bezdek, J.C. (1989). POOL2: A generic system for cognitive map development and decision analysis. IEEE Transactions on SMC, 19(1), 31-39.
Zhang, W.-R., Chen, S., Wang, W. & King, R. (1992). A cognitive map based approach to the coordination of distributed cooperative agents. IEEE Transactions on SMC, 22(1), 103-114.
Zhang, W.-R., Wang, W. & King, R. (1994). An agent-oriented open system shell for distributed decision process modeling. Journal of Organizational Computing, 4(2), 127-154.
Zhang, W.-R. (1996). NPN fuzzy sets and NPN qualitative algebra: A computational framework for bipolar cognitive modeling and multiagent decision analysis. IEEE Transactions on SMC, 16, 561-574.
Zhang, W.-R. (1998). YinYang bipolar fuzzy sets. Proceedings of IEEE World Congress on Computational Intelligence, Fuzz-IEEE, (pp. 835-840). Anchorage, AK, May 1998.
Zhang W.-R. & Zhang, L. (2003). Soundness and completeness of a 4-valued bipolar logic. International Journal on Multiple-Valued Logic, 9, 241-256.
Zhang, W.-R. (2003a). Equilibrium relations and bipolar cognitive mapping for online analytical processing with applications in international relations and strategic decision support. IEEE
Transactions on SMC, Part B, 33(2), 295-307.
Zhang, W.-R. (2003b). Equilibrium energy and stability measures for bipolar decision and global regulation. International Journal of Fuzzy Systems, 5(2), 114-122.
Zhang, W.-R. & Zhang, L. (2004a). YinYang bipolar logic and bipolar fuzzy logic. Information Sciences, 165(3-4), 265-287.
Zhang, W.-R. & Zhang, L. (2004b). A Multiagent Data Warehousing (MADWH) and Multiagent Data Mining (MADM) approach to brain modeling and NeuroFuzzy control. Information Sciences, 167(1-4), 109-127.
Zhang, W.-R. (2005a). YinYang bipolar lattices and L-sets for bipolar knowledge fusion, visualization, and decision making. International Journal of Information Technology and Decision Making, 4(4).
Zhang, W.-R. (2005b). YinYang bipolar cognition and bipolar cognitive mapping. International Journal of Computational Cognition, 3(3), 53-65.
Zhang, W.-R. (2006a). YinYang bipolar fuzzy sets and fuzzy equilibrium relations for bipolar clustering, optimization, and global regulation. International Journal of Information Technology and
Decision Making, 5(1), 19-46.
Zhang, W.-R. (2006b). YinYang bipolar T-norms and T-conorms as granular neurological operators. Proceedings of IEEE International Conference on Granular Computing, (pp. 91-96). Atlanta, GA.
Zhang, W.-R. (2007). YinYang bipolar universal modus ponens (BUMP)–a fundamental law of non-linear brain dynamics for emotional intelligence and mental health. Walter J. Freeman Workshop on Nonlinear
Brain Dynamics, Proceedings of the 10th Joint Conference of Information Sciences, (pp. 89-95). Salt Lake City, Utah, July 2007.
Zhang, W.-R., Pandurangi, A. & Peace, K. (2007). YinYang dynamic neurobiological modeling and diagnostic analysis of major depressive and bipolar disorders. IEEE Transactions on Biomedical
Engineering, 54(10), 1729-39.
Zhang, W.-R. & Peace, K.E. (2007). YinYang MentalSquares–an equilibrium-based system for bipolar neurobiological pattern classification and analysis. Proceedings of IEEE BIBE, (pp. 1240-1244).
Boston, Oct. 2007.
Zhang, W.-R., Wang, P., Peace, K., Zhan, J. & Zhang, Y. (2008). On truth, uncertainty, equilibrium, and harmony–a taxonomy for YinYang scientific computing. International Journal of New Mathematics
and Natural Computing, 4(2), 207-229.
Zhang, W.-R., Zhang, H.J., Shi, Y. & Chen, S.S. (2009). Bipolar linear algebra and YinYang-N-Element cellular networks for equilibrium-based biosystem simulation and regulation. Journal of Biological
Systems, 17(4), 547-576.
Zhang, W.-R. & Chen, S.S. (2009). Equilibrium and non-equilibrium modeling of YinYang WuXing for diagnostic decision analysis in traditional Chinese medicine. International Journal of Information
Technology and Decision Making, 8(3), 529-548.
Zhang, W.-R. (2009a). Six conjectures in quantum physics and computational neuroscience. Proceedings of 3rd International Conference on Quantum, Nano and Micro Technologies (ICQNM 2009), (pp. 67-72).
Cancun, Mexico, February 2009.
Zhang, W.-R. (2009b). YinYang Bipolar Dynamic Logic (BDL) and equilibrium-based computational neuroscience. Proceedings of International Joint Conference on Neural Networks (IJCNN 2009), (pp.
3534-3541). Atlanta, GA, June 2009.
Zhang, W.-R. (2009c). YinYang bipolar relativity–a unifying theory of nature, agents, and life science. Proceedings of International Joint Conference on Bioinformatics, Systems Biology and
Intelligent Computing (IJCBS), (pp. 377-383). Shanghai, China, Aug. 2009.
Zhang, W.-R. (2009d). The logic of YinYang and the science of TCM–an Eastern road to the unification of nature, agents, and medicine. International Journal of Functional Informatics and Personal
Medicine (IJFIPM), 2(3), 261–291.
Zhang, W.-R. (2010). YinYang bipolar quantum entanglement–toward a logically complete quantum theory. Proceedings of the 4th International Conference on Quantum, Nano and Micro Technologies (ICQNM
2010), (pp. 77-82). February 2010, St. Maarten, Netherlands Antilles.
Zhang, W.-R., Pandurangi, K.A., Peace, K.E., Zhang, Y.-Q. & Zhao, Z. (2010). MentalSquares-a generic bipolar support vector machine for psychiatric disorder classification, diagnostic analysis and
neurobiological data mining. International Journal on Bioinformatics and Data Mining.
Zhang, Y.-Q. (1992). Universal fundamental field theory and Golden Taichi. Chinese Qigong, Special Issue 3, 242-248. (In Chinese.)
Normal Probability & t Distribution
See attached file for formulas and amounts.
Question #1 / 5
Let Z be a standard normal random variable. Use the calculator provided to determine the value of c such that P(Z <= c) equals the stated probability.
Carry your intermediate computations to at least four decimal places. Round your answer to at least two decimal places.
Question #2 / 5
Suppose that the heights of adult women in the United States are normally distributed with a stated mean (in inches) and a stated standard deviation (in inches). Jennifer is taller than a stated
percentage of the population of U.S. women. How tall (in inches) is Jennifer? Carry your intermediate computations to at least four decimal places. Round your answer to at least one decimal place.
Question #3 / 5
According to a recent survey, the salaries of assistant professors have a stated mean and a stated standard deviation (in dollars). Assuming that the salaries of assistant professors follow a normal
distribution, find the proportion of assistant professors who earn less than a stated amount. Round your answer to at least four decimal places.
Question #4/ 5
Use the calculator provided to solve the following problems.
• Consider a t distribution with a stated number of degrees of freedom. Compute the stated probability. Round your answer to at least three decimal places.
• Consider a t distribution with a stated number of degrees of freedom. Find the value of t such that the stated condition holds. Round your answer to at least three decimal places.
The solution provides a step-by-step method for the calculation of probability using the z-score and the t distribution. Formulas for the calculations and interpretations of the results are also included.
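For readers who want to check such answers without the provided calculator, here is a minimal stand-alone C++ sketch (an illustration only; the concrete numbers below are hypothetical placeholders, since the original values were in the attached file). The standard normal CDF Phi is available through std::erfc, and its inverse can be recovered by bisection because Phi is strictly increasing. The t-distribution items would additionally need the regularized incomplete beta function, which is omitted here for brevity.

#include <cmath>
#include <cstdio>

// Standard normal CDF via the C++11 error function.
double Phi(double z) { return 0.5 * std::erfc(-z / std::sqrt(2.0)); }

// Inverse CDF by bisection; Phi is strictly increasing.
double PhiInv(double p) {
    double lo = -12.0, hi = 12.0;
    for (int i = 0; i < 100; ++i) {
        double mid = 0.5 * (lo + hi);
        if (Phi(mid) < p) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main() {
    // Question 1 (hypothetical probability): c with P(Z <= c) = 0.9147.
    std::printf("c = %.2f\n", PhiInv(0.9147));
    // Question 2 (hypothetical mean 64.5 in, sd 2.4 in, taller than 85%).
    std::printf("height = %.1f in\n", 64.5 + 2.4 * PhiInv(0.85));
    // Question 3 (hypothetical mean $52000, sd $6000, earning below $46000).
    std::printf("proportion = %.4f\n", Phi((46000.0 - 52000.0) / 6000.0));
    return 0;
}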
Nonantum ACT Tutor
Find a Nonantum ACT Tutor
...I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point. My academic strengths are in mathematics and French. I can tutor
any subject area from elementary math to college level. I got an A+ in Discrete Mathematics in college and an A in the graduate course 6.431 Applied Probability at MIT last year.
16 Subjects: including ACT Math, French, elementary math, algebra 1
...I currently hold a master's degree in math and have used it to tutor a wide array of math courses. In addition to these subjects, for the last several years, I have been successfully tutoring
for standardized tests, including the SAT and ACT. I have taken and passed a number of Praxis exams. I even earned a perfect score on the Math Subject Test.
36 Subjects: including ACT Math, English, reading, chemistry
...Your child can have "math confidence", be more independent in doing their homework, and get better grades by getting him or her support for Algebra 2. NOW IS THE TIME - Algebra 1 introduced many
topics and skills which will be PRACTICED A LOT in Algebra 2. If your child has come this far but still ...
9 Subjects: including ACT Math, geometry, algebra 2, algebra 1
...Teaching has been a long-time passion of mine and I hope to continue teaching throughout my life time. I have tutored SAT Math, SAT II and various science subjects since 2006. During my time at
MIT I was a teaching assistant for four semesters in Biology, Genetics, Biochemistry and Cell Biology classes, and I continuously served as a tutor in the department.
28 Subjects: including ACT Math, chemistry, calculus, algebra 1
...I have excellent reading and communication skills, and a background in Latin to help with vocabulary. I am well aware of study methods to improve standardized test scores, and am able to
communicate these methods effectively. I have over ten years of formal musical training (instrumental), so I have a solid foundation in music theory.
38 Subjects: including ACT Math, chemistry, English, reading
MathGroup Archive: March 2005 [00259]
NDSolve, InterpolatingFunction objects and Splines
• To: mathgroup at smc.vnet.net
• Subject: [mg54983] NDSolve, InterpolatingFunction objects and Splines
• From: zosi <zosi at to.infn.it>
• Date: Wed, 9 Mar 2005 06:34:12 -0500 (EST)
• Organization: INFN sezione di Torino
• Sender: owner-wri-mathgroup at wolfram.com
How can we extract the **coefficients** of the polynomials
which constitute the InterpolatingFunction Object (IFO) representing
the solution of NDSolve?
We have been able to extract only the abscissae (x_i) and ordinates (y_i)
of the numerical output; we suspect the coefficients are hidden.
Our curiosity arises from the fact that we have tried
(rather naively) to improve the representation of the output
of NDSolve applied, to begin with, to a very simple ODE:
tfin= 3 Pi;
solnNDS = NDSolve[{x''[t]+x[t]==0, x[0] == 1,x'[0]==1}, x, {t, 0, tfin }];
solnum[t_] := x[t] /. First[solnNDS];
To begin, we have extracted the couples
nodes = (x /. First[solnNDS])[[3, 1]];
residualfunction[t_] := Evaluate[x''[t] + x[t] /. First[solnNDS]];
plotresidualfunction=Plot[residualfunction[t], {t, 0, tfin },ImageSize
-> 500];
the residual ranges approx. between +/- 1.5*10^(-6), if we exclude
the very first points.
To have a comparison with Splines, we have
a) applied the command SplineFit (cubic)
to the same set of data (nodes and solnum[nodes]),
b) extracted the polynomials (in each interval)
c) recalculated the residual.
To our surprise, the residual ranges between +/- 1*10^(-3).
A loss by a factor of 1000.
In this mail we do not show the details of the calculation of the 1st
and 2nd derivatives of the spline (see Note 1).
We said "surprise" because the polynomials contained in the IFO are
not as smooth as the splines.
About the behaviour of the IFO, a very simple example by Stan Wagon
(slightly modified) is illuminating:
data = {{-1, -1}, {-0.96, -0.4}, {-0.88, 0.3},
{-0.62, 0.78}, {0.13, 0.91}, {1, 1},
{1.2, 1.5}, {1.5, 1.6}, {2.2, 1.2}};
f = Interpolation[data];
Plot[f[t], {t, -1, 2.2},
Epilog -> {PointSize[0.025], Point /@ data}];
The two bumps clearly indicate that the first derivative
is not continuous at two or three points at least.
cub = SplineFit[data, Cubic]; (* see note 1 *)
ParametricPlot[cub[t], {t, 0, 8},
Compiled -> False, PlotRange -> All,
AspectRatio -> Automatic, (* assumed; option value lost in the archive *)
Epilog -> {PointSize[0.02], Map[Point, data]}];
Conclusion: if we are using the same WorkingPrecision (= the default),
shouldn't the residual obtained through the splines be **better**?
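One plausible explanation (an assumption on our part, not something we have
verified): the IFO also stores the derivative values returned by the
integrator at each node and interpolates with Hermite-type pieces, whereas
SplineFit sees only the function values. For a cubic spline s interpolating
a smooth f on a grid of spacing h, the standard error bounds are

||f - s|| <= C1 h^4 ||f''''||,   ||f'' - s''|| <= C2 h^2 ||f''''||

(maximum norms, C1 and C2 modest constants), so the residual s''[t] + s[t]
inherits the O(h^2) error of the second derivative. With f[t] = Sin[t] + Cos[t],
hence ||f''''|| = Sqrt[2], and node spacings of order 10^(-1) on [0, 3 Pi],
a residual of order 10^(-3) is what one should expect: the factor of 1000
is an order-of-accuracy effect, not a precision issue.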
Note 1. As we have not been able to calculate directly the 1st and 2nd
derivatives of cub, we have extracted the coefficients of each
polynomial contained in cub, then calculated the derivatives from them.
Is there any quicker method
to calculate directly the **second** derivative of the
SplineFit output?
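Once the per-interval cubic coefficients {c0, c1, c2, c3} are in hand
(from the IFO or from SplineFit), no symbolic machinery is needed at all;
a small stand-alone C++ helper (an illustration only, independent of
Mathematica; the coefficients in main are hypothetical) evaluates p, p',
and p'' by Horner's rule, which is enough to form the residual p'' + p
on each interval:

#include <cstdio>

struct Cubic { double c0, c1, c2, c3; };  // p(t) = c0 + c1 t + c2 t^2 + c3 t^3

// Evaluate p, p', p'' at t (Horner's rule for p).
void eval012(const Cubic& p, double t, double& v, double& d1, double& d2)
{
    v  = ((p.c3 * t + p.c2) * t + p.c1) * t + p.c0;
    d1 = (3.0 * p.c3 * t + 2.0 * p.c2) * t + p.c1;
    d2 = 6.0 * p.c3 * t + 2.0 * p.c2;
}

int main()
{
    Cubic p = { 1.0, 1.0, -0.5, -1.0/6.0 };  // hypothetical coefficients
    double v, d1, d2;
    eval012(p, 0.3, v, d1, d2);
    std::printf("residual p'' + p = %g\n", d2 + v);
    return 0;
}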
Thanks for your patience and help.
Gianfranco Zosi
Dip Fisica Generale "A. Avogadro"
Universita di Torino
v. P. Giuria 1 - 10125 Torino - Italy
The fxt demos: combinatorics
Directory comb: Combinatorial objects like combinations, permutations, Gray codes, partitions and necklaces.
You may want to look at the outputs first.
acgray-out.txt is the output of acgray-demo.cc.
Adjacent changes (AC) Gray codes for n<=6
The demo uses the functions from acgray.h (fxt/src/comb/acgray.h) delta2gray.h (fxt/src/comb/delta2gray.h) acgray.cc (fxt/src/comb/acgray.cc)
arrangement-rgs-out.txt is the output of arrangement-rgs-demo.cc.
RGS for arrangements (all permutations of all subsets): a digit is at most 1 + the number of nonzero digits in the prefix. The positions of nonzero digits determine the subset, and their values
(decreased by 1) are the (left) inversion table (a rising factorial number) for the permutation. Lexicographic order. Cf. OEIS sequence A000522.
The demo uses the functions from arrangement-rgs.h (fxt/src/comb/arrangement-rgs.h) print-arrangement-rgs-perm.h (fxt/src/comb/print-arrangement-rgs-perm.h)
ascent-alt-rgs-out.txt is the output of ascent-alt-rgs-demo.cc.
Restricted growth strings (RGS) a[1,..,n], where a[k] < k and there is no digit a[j] in the prefix such that a[j] == a[k] + 1. Lexicographic order. Cf. OEIS sequence A022493.
The demo uses the functions from ascent-alt-rgs.h (fxt/src/comb/ascent-alt-rgs.h)
ascent-nonflat-rgs-out.txt is the output of ascent-nonflat-rgs-demo.cc.
Ascent sequences (restricted growth strings, RGS) without flat steps (i.e., no two adjacent digits are equal), lexicographic order. An ascent sequence is a sequence [d(1), d(2), ..., d(n)] where d
(k)>=0 and d(k) <= asc([d(1), d(2), ..., d(k-1)]) and asc(.) counts the number of ascents of its argument. Cf. OEIS sequence A138265.
The demo uses the functions from ascent-nonflat-rgs.h (fxt/src/comb/ascent-nonflat-rgs.h)
ascent-rgs-out.txt is the output of ascent-rgs-demo.cc.
Ascent sequences (restricted growth strings, RGS) in lexicographic order. An ascent sequence is a sequence [d(1), d(2), ..., d(n)] where d(1)=0, d(k)>=0, and d(k) <= 1 + asc([d(1), d(2), ..., d
(k-1)]) and asc(.) counts the number of ascents of its argument. Cf. OEIS sequence A022493.
The demo uses the functions from ascent-rgs.h (fxt/src/comb/ascent-rgs.h)
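The generation rule is short enough to state as code. The following stand-alone C++ sketch (an illustration of the definition, not the fxt implementation) enumerates the ascent sequences of length n = 4 in lexicographic order; it prints 15 strings, matching A022493(4):

#include <cstdio>

const int n = 4;
int d[n];

// asc = number of ascents among d[0..k-1]; the rule is d[k] <= 1 + asc.
void rec(int k, int asc) {
    if (k == n) {
        for (int i = 0; i < n; ++i) std::printf("%d", d[i]);
        std::printf("\n");
        return;
    }
    for (int v = 0; v <= 1 + asc; ++v) {
        d[k] = v;
        rec(k + 1, asc + (v > d[k-1] ? 1 : 0));
    }
}

int main() { d[0] = 0; rec(1, 0); return 0; }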
ascent-rgs-stats-out.txt is the output of ascent-rgs-stats-demo.cc.
Statistics for ascent sequences. Cf. the following OEIS sequences: A218577: triangle, length-n ascent sequences with maximal element k-1. An ascent sequence is a sequence [d(1), d(2), ..., d(n)]
where d(1)=0, d(k)>=0, and d(k) <= 1 + asc([d(1), d(2), ..., d(k-1)]) and asc(.) counts the number of ascents of its argument. A137251: triangle, sequences with k ascents. A175579: triangle,
sequences with k zeros. A218579: triangle, sequences with position of last zero at index k-1. A218580: triangle, sequences with position of first occurrence of maximal value at index k-1. A218581:
triangle, sequences with position of last occurrence of maximal value at index k-1.
The demo uses the functions from ascent-rgs.h (fxt/src/comb/ascent-rgs.h) word-stats.h (fxt/src/comb/word-stats.h)
ascent-rgs-subset-lex-out.txt is the output of ascent-rgs-subset-lex-demo.cc.
Ascent sequences (restricted growth strings, RGS) in subset-lex order. An ascent sequence is a sequence [d(1), d(2), ..., d(n)] where d(1)=0, d(k)>=0, and d(k) <= 1 + asc([d(1), d(2), ..., d(k-1)])
and asc(.) counts the number of ascents of its argument. Cf. OEIS sequence A022493.
The demo uses the functions from ascent-rgs-subset-lex.h (fxt/src/comb/ascent-rgs-subset-lex.h)
ballot-seq-stats-out.txt is the output of ballot-seq-stats-demo.cc.
Statistics for ballot sequences (and standard Young tableaux): Cf. the following OEIS sequences: A161126, A238121, A238123, A238125, A238128, and A238129.
The demo uses the functions from young-tab-rgs.h (fxt/src/comb/young-tab-rgs.h) word-stats.h (fxt/src/comb/word-stats.h) young-tab-rgs-descents.h (fxt/src/comb/young-tab-rgs-descents.h)
bell-number-out.txt is the output of bell-number-demo.cc.
Aitken's array and Bell numbers. Cf. OEIS sequence A011971: Aitken's array, also called Bell triangle or Peirce triangle.
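Aitken's array is simple enough to compute directly: each row starts with the last entry of the previous row, and each further entry is the sum of its left neighbor and the entry above that neighbor. A self-contained sketch (the row count N is chosen arbitrarily here):

    #include <cstdio>

    int main()
    {
        const int N = 8;                           // number of rows to print
        unsigned long t[N][N] = {};
        t[0][0] = 1;
        for (int r = 1; r < N; ++r)
        {
            t[r][0] = t[r - 1][r - 1];             // last entry of previous row
            for (int j = 1; j <= r; ++j)
                t[r][j] = t[r][j - 1] + t[r - 1][j - 1];
        }
        for (int r = 0; r < N; ++r)                // leftmost column gives the
        {                                          // Bell numbers 1, 1, 2, 5, 15, 52, ...
            for (int j = 0; j <= r; ++j) std::printf("%lu ", t[r][j]);
            std::printf("\n");
        }
        return 0;
    }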
big-fact2perm-out.txt is the output of big-fact2perm-demo.cc.
Generate all permutations from mixed radix (factorial) numbers, using a left-right array, so the conversion is fast also for large lengths.
The demo uses the functions from big-fact2perm.h (fxt/src/comb/big-fact2perm.h) big-fact2perm.cc (fxt/src/comb/big-fact2perm.cc) left-right-array.h (fxt/src/ds/left-right-array.h)
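A minimal sketch of the conversion itself (not the library's left-right-array version): the factorial number is read as an inversion table, each digit selecting the d-th smallest unused element. The plain-vector deletion below costs O(n) per step, which is exactly what the left-right array is there to avoid; fact2perm_simple is a name invented for this example.

    #include <cstdio>
    #include <vector>

    // Decode a falling factorial number d[0..n-2] (0 <= d[k] <= n-1-k)
    // into a permutation: digit d[k] picks the d[k]-th smallest unused element.
    std::vector<int> fact2perm_simple(const std::vector<int>& d)
    {
        int n = (int)d.size() + 1;
        std::vector<int> avail, p;
        for (int i = 0; i < n; ++i) avail.push_back(i);
        for (int k = 0; k < n; ++k)
        {
            int j = (k < n - 1) ? d[k] : 0;      // the last digit is always 0
            p.push_back(avail[j]);
            avail.erase(avail.begin() + j);      // O(n) step; a left-right array
        }                                        // makes this fast for large n
        return p;
    }

    int main()
    {
        std::vector<int> d = { 2, 0, 1 };        // a falling factorial number, n = 4
        for (int v : fact2perm_simple(d)) std::printf("%d ", v);  // prints 2 0 3 1
        std::printf("\n");
        return 0;
    }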
binary-debruijn-out.txt is the output of binary-debruijn-demo.cc.
Generating binary De Bruijn sequences.
The demo uses the functions from binary-debruijn.h (fxt/src/comb/binary-debruijn.h) binary-necklace.h (fxt/src/comb/binary-necklace.h)
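One classic construction, sketched here as a plausible illustration (the demo's class interface differs): concatenating, in lexicographic order, all binary Lyndon words whose length divides n gives a de Bruijn sequence B(2,n). The recursion below is the well-known db(t,p) scheme over the necklace work array.

    #include <cstdio>
    #include <string>
    #include <vector>

    static int n_;                // order of the de Bruijn sequence
    static std::vector<int> a_;   // work array a[0..n]
    static std::string seq_;      // collected output

    void db(int t, int p)
    {
        if (t > n_)
        {
            if (n_ % p == 0)      // a[1..p] is a Lyndon word of dividing length
                for (int i = 1; i <= p; ++i) seq_ += char('0' + a_[i]);
        }
        else
        {
            a_[t] = a_[t - p];
            db(t + 1, p);
            for (int j = a_[t - p] + 1; j <= 1; ++j) { a_[t] = j; db(t + 1, t); }
        }
    }

    int main()
    {
        n_ = 4;
        a_.assign(n_ + 1, 0);
        db(1, 1);
        std::printf("%s\n", seq_.c_str());  // prints 0000100110101111 (length 16)
        return 0;
    }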
binary-huffman-out.txt is the output of binary-huffman-demo.cc.
Partitions of 1 into n powers of 1/2: 1 == a[0]/1 + a[1]/2 + a[2]/4 + a[3]/8 + ... + a[m]/(2^m) (for n>=1), n == a[0] + a[1] + a[2] + a[3] + ... + a[m]. Same as: binary Huffman codes (canonical trees)
with n terminal nodes, a[k] is the number of terminal nodes of depth k. Reversed lexicographic order. See: Christian Elsholtz, Clemens Heuberger, Helmut Prodinger: "The number of Huffman codes,
compact trees, and sums of unit fractions", arXiv:1108.5964 [math.CO], (30-August-2011). Cf. OEIS sequence A002572.
The demo uses the functions from binary-huffman.h (fxt/src/comb/binary-huffman.h)
binary-necklace-out.txt is the output of binary-necklace-demo.cc.
Binary pre-necklaces, necklaces, and Lyndon words: CAT generation. Cf. OEIS sequences A062692 (pre-necklaces), A000031 (necklaces), and A001037 (Lyndon words).
The demo uses the functions from binary-necklace.h (fxt/src/comb/binary-necklace.h)
binary-sl-gray-out.txt is the output of binary-sl-gray-demo.cc.
Binary numbers in a minimal-change order related to subset-lex order ("SL-Gray" order). Cf. OEIS sequence A217262.
The demo uses the functions from binary-sl-gray.h (fxt/src/comb/binary-sl-gray.h)
binomial-out.txt is the output of binomial-demo.cc.
Print the Pascal triangle (binomial coefficients).
The demo uses the functions from binomial.h (fxt/src/aux0/binomial.h)
catalan-out.txt is the output of catalan-demo.cc.
Catalan restricted growth strings (RGS) and parentheses strings in minimal-change order.
The demo uses the functions from catalan.h (fxt/src/comb/catalan.h) catalan.cc (fxt/src/comb/catalan.cc)
catalan-number-out.txt is the output of catalan-number-demo.cc.
Catalan numbers and ballot numbers. Cf. OEIS sequences A009766 (Catalan's triangle) and A033184.
catalan-path-lex-out.txt is the output of catalan-path-lex-demo.cc.
Catalan paths in lexicographic order, CAT algorithm.
The demo uses the functions from catalan-path-lex.h (fxt/src/comb/catalan-path-lex.h)
catalan-rgs-out.txt is the output of catalan-rgs-demo.cc.
Catalan restricted growth strings (RGS): strings a[0,1,...,n-1] where a[0]=0 and a[k] <= a[k-1] + 1. Lexicographic order.
The demo uses the functions from catalan-rgs.h (fxt/src/comb/catalan-rgs.h) paren-string-to-rgs.h (fxt/src/comb/paren-string-to-rgs.h) paren-string-to-rgs.cc (fxt/src/comb/paren-string-to-rgs.cc)
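A hedged sketch of generating these RGS by direct recursion on the defining rule (gen and the file-scope variables are names made up here; the library classes expose next() methods instead). The number of strings produced is the Catalan number C(n).

    #include <cstdio>
    #include <vector>

    static int n_;                 // length of the RGS
    static std::vector<int> a_;    // the current string
    static unsigned long count_;

    void gen(int k)                // fill positions k, k+1, ..., n-1
    {
        if (k == n_)
        {
            ++count_;
            for (int v : a_) std::printf("%d", v);
            std::printf("\n");
            return;
        }
        for (int d = 0; d <= a_[k - 1] + 1; ++d) { a_[k] = d; gen(k + 1); }
    }

    int main()
    {
        n_ = 4;
        a_.assign(n_, 0);                      // a[0] = 0 stays fixed
        gen(1);
        std::printf("count = %lu\n", count_);  // expect Catalan(4) = 14
        return 0;
    }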
catalan-rgs-gray-out.txt is the output of catalan-rgs-gray-demo.cc.
Catalan restricted growth strings (RGS): strings a[0,1,...,n-1] where a[0]=0 and a[k] <= a[k-1] + 1. Gray code for parenthesis strings (but not for the RGS).
The demo uses the functions from catalan-rgs-gray.h (fxt/src/comb/catalan-rgs-gray.h) paren-string-to-rgs.h (fxt/src/comb/paren-string-to-rgs.h)
catalan-rgs-gslex-out.txt is the output of catalan-rgs-gslex-demo.cc.
Catalan restricted growth strings (RGS): strings a[0,1,...,n-1] where a[0]=0 and a[k] <= a[k-1] + 1. Ordering similar to gslex (and subset-lex) order. Loopless algorithm.
The demo uses the functions from catalan-rgs-gslex.h (fxt/src/comb/catalan-rgs-gslex.h)
catalan-rgs-subset-lex-out.txt is the output of catalan-rgs-subset-lex-demo.cc.
Catalan restricted growth strings (RGS): strings a[0,1,...,n-1] where a[0]=0 and a[k] <= a[k-1] + 1. Subset-lex order.
The demo uses the functions from catalan-rgs-subset-lex.h (fxt/src/comb/catalan-rgs-subset-lex.h)
catalan-step-rgs-colex-out.txt is the output of catalan-step-rgs-colex-demo.cc.
Catalan (step-)RGS for lattice paths from (0,0) to (n,n) that do not go below the diagonal (k, k) for 0 <= k <= n. Co-lexicographic order. Cf. OEIS sequence A000108.
The demo uses the functions from catalan-step-rgs-colex.h (fxt/src/comb/catalan-step-rgs-colex.h) is-catalan-step-rgs.h (fxt/src/comb/is-catalan-step-rgs.h)
catalan-step-rgs-lex-out.txt is the output of catalan-step-rgs-lex-demo.cc.
Catalan (step-)RGS for lattice paths from (0,0) to (n,n) that do not go below the diagonal (k, k) for 0 <= k <= n. Lexicographic order. Cf. OEIS sequence A000108.
The demo uses the functions from catalan-step-rgs-lex.h (fxt/src/comb/catalan-step-rgs-lex.h) is-catalan-step-rgs.h (fxt/src/comb/is-catalan-step-rgs.h)
catalan-step-rgs-subset-lexrev-out.txt is the output of catalan-step-rgs-subset-lexrev-demo.cc.
Catalan (step-)RGS for lattice paths from (0,0) to (n,n) that do not go below the diagonal (k, k) for 0 <= k <= n. Subset-lexrev order.
The demo uses the functions from catalan-step-rgs-subset-lexrev.h (fxt/src/comb/catalan-step-rgs-subset-lexrev.h) is-catalan-step-rgs.h (fxt/src/comb/is-catalan-step-rgs.h)
cayley-perm-out.txt is the output of cayley-perm-demo.cc.
Cayley permutations: Length-n words such that all elements from 0 to the maximum value occur at least once. Same as: permutations of the (RGS for) set partitions of n. Same as: weak orders on n
elements (weak orders are relations that are transitive and complete). Same as: preferential arrangements of n labeled elements. Generation such that the major order is by content, and minor order
lexicographic. Cf. OEIS sequence A000670.
The demo uses the functions from cayley-perm.h (fxt/src/comb/cayley-perm.h)
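For small n the defining property can be checked by brute force. The sketch below (names invented here, and far slower than the demo's dedicated generator) enumerates all words over {0,...,n-1} with an odometer and keeps those in which every value from 0 to the maximum occurs; the counts should match A000670 (1, 3, 13, 75, 541, 4683, ...).

    #include <cstdio>
    #include <vector>

    bool is_cayley(const std::vector<int>& a)
    {
        int mx = 0;
        for (int v : a) if (v > mx) mx = v;
        std::vector<bool> seen(mx + 1, false);
        for (int v : a) seen[v] = true;
        for (int m = 0; m <= mx; ++m) if (!seen[m]) return false;
        return true;
    }

    int main()
    {
        for (int n = 1; n <= 6; ++n)   // expect 1, 3, 13, 75, 541, 4683
        {
            std::vector<int> a(n, 0);
            unsigned long ct = 0;
            while (true)
            {
                if (is_cayley(a)) ++ct;
                int k = n - 1;          // odometer increment, base n
                while (k >= 0 && a[k] == n - 1) a[k--] = 0;
                if (k < 0) break;
                ++a[k];
            }
            std::printf("n=%d: %lu\n", n, ct);
        }
        return 0;
    }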
comb2comp-out.txt is the output of comb2comp-demo.cc.
Relation between combinations and compositions.
The demo uses the functions from combination-revdoor.h (fxt/src/comb/combination-revdoor.h) comp2comb.h (fxt/src/comb/comp2comb.h)
combination-chase-out.txt is the output of combination-chase-demo.cc.
Combinations in near-perfect minimal-change order (Chase's sequence).
The demo uses the functions from combination-chase.h (fxt/src/comb/combination-chase.h) binomial.h (fxt/src/aux0/binomial.h)
combination-colex-out.txt is the output of combination-colex-demo.cc.
Generating all combinations in co-lexicographic order.
The demo uses the functions from combination-colex.h (fxt/src/comb/combination-colex.h)
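One common successor rule for co-lexicographic order, sketched here (the loop in combination-colex.h may be organized differently): find the smallest position whose element can be incremented without colliding with the next one, increment it, and reset all earlier positions to the smallest possible values.

    #include <cstdio>

    // Next k-combination of {0,...,n-1} in colex order;
    // returns false when the last combination was reached.
    bool next_colex(int* a, int k, int n)
    {
        for (int i = 0; i < k; ++i)
        {
            int lim = (i + 1 < k) ? a[i + 1] : n;   // next element (or sentinel n)
            if (a[i] + 1 < lim)
            {
                ++a[i];
                for (int j = 0; j < i; ++j) a[j] = j;  // minimal prefix
                return true;
            }
        }
        return false;
    }

    int main()
    {
        const int n = 5, k = 3;
        int a[k];
        for (int i = 0; i < k; ++i) a[i] = i;      // first combination {0,1,2}
        do {
            for (int i = 0; i < k; ++i) std::printf("%d ", a[i]);
            std::printf("\n");
        } while (next_colex(a, k, n));             // 012, 013, 023, 123, 014, ...
        return 0;
    }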
combination-emk-out.txt is the output of combination-emk-demo.cc.
Combinations in strong minimal-change order (Eades-McKay sequence). Iterative generation via modulo moves.
The demo uses the functions from combination-emk.h (fxt/src/comb/combination-emk.h)
combination-emk-rec-out.txt is the output of combination-emk-rec-demo.cc.
Eades-McKay (strong revolving door) order for combinations.
combination-endo-out.txt is the output of combination-endo-demo.cc.
Strong minimal-change order for combinations (Chase's sequence) via endo steps.
The demo uses the functions from combination-endo.h (fxt/src/comb/combination-endo.h)
combination-enup-out.txt is the output of combination-enup-demo.cc.
Strong minimal-change order for combinations via enup (endo) steps.
The demo uses the functions from combination-enup.h (fxt/src/comb/combination-enup.h)
combination-enup-rec-out.txt is the output of combination-enup-rec-demo.cc.
Recursive generation of enup order for combinations (strong minimal-change order).
combination-gray-rec-out.txt is the output of combination-gray-rec-demo.cc.
Generating all combinations in minimal-change order, recursive implementation.
combination-lam-out.txt is the output of combination-lam-demo.cc.
Minimal-change order for combinations with k>=2 elements. Good performance for small k.
combination-lex-out.txt is the output of combination-lex-demo.cc.
Generating all combinations in lexicographic order.
The demo uses the functions from combination-lex.h (fxt/src/comb/combination-lex.h)
combination-mod-out.txt is the output of combination-mod-demo.cc.
Combinations in strong minimal-change order. Iterative generation via modulo moves.
The demo uses the functions from combination-mod.h (fxt/src/comb/combination-mod.h)
combination-pref-out.txt is the output of combination-pref-demo.cc.
Combinations via prefix shifts ("cool-lex" order).
The demo uses the functions from combination-pref.h (fxt/src/comb/combination-pref.h)
combination-rank-out.txt is the output of combination-rank-demo.cc.
Ranking and unranking combinations in near-perfect order.
The demo uses the functions from composition-rank.h (fxt/src/comb/composition-rank.h) composition-rank.cc (fxt/src/comb/composition-rank.cc) comp2comb.h (fxt/src/comb/comp2comb.h)
combination-rec-out.txt is the output of combination-rec-demo.cc.
Combinations in lexicographic, Gray code, complemented enup, and complemented Eades-McKay order.
The demo uses the functions from combination-rec.h (fxt/src/comb/combination-rec.h) combination-rec.cc (fxt/src/comb/combination-rec.cc)
combination-revdoor-out.txt is the output of combination-revdoor-demo.cc.
Generating all combinations in minimal-change order: revolving door algorithm.
The demo uses the functions from combination-revdoor.h (fxt/src/comb/combination-revdoor.h)
composition-colex-out.txt is the output of composition-colex-demo.cc.
Generating all compositions of n into k parts in co-lexicographic (colex) order.
The demo uses the functions from composition-colex.h (fxt/src/comb/composition-colex.h) comp2comb.h (fxt/src/comb/comp2comb.h)
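A compact successor rule for this order, sketched under the assumption that parts may be zero (the header may organize the loop differently): zero the first nonzero part, put its value minus one into position 0, and add one to the following position.

    #include <cstdio>

    // Colex successor for compositions of n into k parts >= 0 (assumes n >= 1);
    // returns false after the last composition (0,...,0,n).
    bool next_composition_colex(int* x, int k)
    {
        int j = 0;
        while (x[j] == 0) ++j;        // first nonzero part
        if (j == k - 1) return false;
        int v = x[j];
        x[j] = 0;
        x[0] = v - 1;
        ++x[j + 1];
        return true;
    }

    int main()
    {
        const int n = 4, k = 3;
        int x[k] = { n, 0, 0 };       // first composition (n,0,...,0)
        do {
            for (int i = 0; i < k; ++i) std::printf("%d ", x[i]);
            std::printf("\n");        // 400, 310, 220, 130, 040, 301, ...
        } while (next_composition_colex(x, k));
        return 0;
    }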
composition-colex2-out.txt is the output of composition-colex2-demo.cc.
Generating all compositions of n into k parts in co-lexicographic (colex) order. The algorithm is efficient also in the sparse case, i.e., k much greater than n.
The demo uses the functions from composition-colex2.h (fxt/src/comb/composition-colex2.h) comp2comb.h (fxt/src/comb/comp2comb.h)
composition-dist-unimodal-out.txt is the output of composition-dist-unimodal-demo.cc.
Strongly unimodal compositions into distinct parts. Internal representation as list of parts in increasing order, with each part except for the last of 2 sorts. Cf. OEIS sequence A072706.
The demo uses the functions from composition-dist-unimodal.h (fxt/src/comb/composition-dist-unimodal.h)
composition-ex-colex-out.txt is the output of composition-ex-colex-demo.cc.
Compositions of n into exactly k parts in co-lexicographic (colex) order.
The demo uses the functions from composition-ex-colex.h (fxt/src/comb/composition-ex-colex.h)
composition-gray-rec-out.txt is the output of composition-gray-rec-demo.cc.
Generating all compositions of n into k parts in minimal-change order.
The demo uses the functions from comp2comb.h (fxt/src/comb/comp2comb.h) comb-print.h (fxt/src/comb/comb-print.h)
composition-nz-binary-out.txt is the output of composition-nz-binary-demo.cc.
Compositions of n into powers of 2, lexicographic order. Cf. OEIS sequence A023359.
The demo uses the functions from composition-nz-binary.h (fxt/src/comb/composition-nz-binary.h)
composition-nz-carlitz-out.txt is the output of composition-nz-carlitz-demo.cc.
Compositions of n into positive parts such that adjacent parts are different (Carlitz compositions). Cf. OEIS sequence A003242.
The demo uses the functions from composition-nz-carlitz.h (fxt/src/comb/composition-nz-carlitz.h)
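The defining restriction gives an immediate recursive counter: choose the first part p, then count Carlitz compositions of the remainder whose first part differs from p. A small sketch (carlitz is a name invented here); the counts should match A003242 (1, 1, 1, 3, 4, 7, 14, 23, ...).

    #include <cstdio>

    unsigned long carlitz(int n, int prev)  // prev = part just written (0 = none)
    {
        if (n == 0) return 1;
        unsigned long s = 0;
        for (int p = 1; p <= n; ++p)
            if (p != prev) s += carlitz(n - p, p);
        return s;
    }

    int main()
    {
        for (int n = 0; n <= 10; ++n)  // expect 1, 1, 1, 3, 4, 7, 14, 23, 39, 71, 124
            std::printf("n=%d: %lu\n", n, carlitz(n, 0));
        return 0;
    }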
composition-nz-out.txt is the output of composition-nz-demo.cc.
Compositions of n into positive parts.
The demo uses the functions from composition-nz.h (fxt/src/comb/composition-nz.h) composition-nz-rank.h (fxt/src/comb/composition-nz-rank.h)
composition-nz-i-smooth-out.txt is the output of composition-nz-i-smooth-demo.cc.
Internally smooth compositions: consecutive parts differ by at most 1. Lexicographic order. See OEIS sequence A034297.
The demo uses the functions from composition-nz-i-smooth.h (fxt/src/comb/composition-nz-i-smooth.h)
composition-nz-left-2smooth-out.txt is the output of composition-nz-left-2smooth-demo.cc.
Left-2smooth compositions: compositions of n with maximal up-step 1, no consecutive up-steps, and first part 1. Lexicographic order. Cf. OEIS sequence A186085.
The demo uses the functions from composition-nz-left-2smooth.h (fxt/src/comb/composition-nz-left-2smooth.h)
composition-nz-left-smooth-out.txt is the output of composition-nz-left-smooth-demo.cc.
Left-smooth compositions: compositions of n with maximal up-step <= 1 and first part 1. Lexicographic order. Same as "fountains of coins", see OEIS sequence A005169.
The demo uses the functions from composition-nz-left-smooth.h (fxt/src/comb/composition-nz-left-smooth.h)
composition-nz-max-out.txt is the output of composition-nz-max-demo.cc.
Compositions of n into positive parts <= mx. Lexicographic order.
The demo uses the functions from composition-nz-max.h (fxt/src/comb/composition-nz-max.h) composition-nz-rank.h (fxt/src/comb/composition-nz-rank.h)
composition-nz-min-out.txt is the output of composition-nz-min-demo.cc.
Compositions of n into positive parts >= mi. Lexicographic order.
The demo uses the functions from composition-nz-min.h (fxt/src/comb/composition-nz-min.h) composition-nz-rank.h (fxt/src/comb/composition-nz-rank.h)
composition-nz-minc-out.txt is the output of composition-nz-minc-demo.cc.
Compositions of n into positive parts with first part c and each part <= f times its predecessor. For c=1 the same as: f-ary Huffman codes (canonical trees) with (f-1)*n+1 terminal nodes, a[k] is the
number of internal nodes of depth k. Such compositions (for f=2) are treated in Henryk Minc: "A Problem in Partitions: Enumeration of Elements of a given Degree in the free commutative entropic
cyclic Groupoid", Proceedings of the Edinburgh Mathematical Society (2), vol.11, no.4, pp.223-224, (November-1959). The compositions for f=2 are also called "Cayley compositions", see George E.
Andrews, Peter Paule, Axel Riese, Volker Strehl: "MacMahon's Partition Analysis V: Bijections, Recursions, and Magic Squares", in: Algebraic Combinatorics and Applications, proceedings of
Euroconference Alcoma 99, September 12-19, 1999, Goessweinstein, Germany, A. Betten, A. Kohnert, R. Laue, A. Wassermann eds., Springer-Verlag, pp.1-39, (2001). See also: Christian Elsholtz, Clemens
Heuberger, Helmut Prodinger: "The number of Huffman codes, compact trees, and sums of unit fractions", arXiv:1108.5964 [math.CO], (30-August-2011). Cf. OEIS sequences: f=2: A002572 (c=1), A002573 (c=
2), A002574 (c=3), A049284 (c=4), A049285 (c=5). c=1: A002572 (f=2), A176485 (f=3), A176503 (f=4), A194628 (f=5), A194629 (f=6), A194630 (f=7), A194631 (f=8), A194632 (f=9), A194633 (f=10).
The demo uses the functions from composition-nz-minc.h (fxt/src/comb/composition-nz-minc.h)
composition-nz-numparts-out.txt is the output of composition-nz-numparts-demo.cc.
Compositions of n into non-zero parts. Ordering is firstly by number of parts (1, 2, ..., n) and secondly co-lexicographic (colex).
The demo uses the functions from composition-nz-numparts.h (fxt/src/comb/composition-nz-numparts.h)
composition-nz-odd-out.txt is the output of composition-nz-odd-demo.cc.
Compositions of n into positive odd parts, lexicographic order. Loopless algorithm. Cf. OEIS sequence A000045.
The demo uses the functions from composition-nz-odd.h (fxt/src/comb/composition-nz-odd.h)
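Why Fibonacci numbers: classifying compositions by their last (odd) part gives c(n) = c(n-1) + c(n-3) + c(n-5) + ..., and subtracting the same identity for c(n-2) telescopes this to c(n) = c(n-1) + c(n-2). A short DP sketch verifying the counts:

    #include <cstdio>
    #include <vector>

    int main()
    {
        const int N = 15;
        std::vector<unsigned long> c(N + 1, 0);
        c[0] = 1;                            // the empty composition
        for (int n = 1; n <= N; ++n)
            for (int p = 1; p <= n; p += 2)  // sum over the last (odd) part p
                c[n] += c[n - p];
        for (int n = 0; n <= N; ++n)         // 1, 1, 1, 2, 3, 5, 8, 13, 21, ...
            std::printf("c(%d) = %lu\n", n, c[n]);
        return 0;
    }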
composition-nz-odd-subset-lex-out.txt is the output of composition-nz-odd-subset-lex-demo.cc.
Compositions of n into positive odd parts, subset-lex order. Loopless algorithm. Cf. OEIS sequence A000045.
The demo uses the functions from composition-nz-odd-subset-lex.h (fxt/src/comb/composition-nz-odd-subset-lex.h)
composition-nz-restrpref-out.txt is the output of composition-nz-restrpref-demo.cc.
Compositions of n into positive parts with restricted prefixes. Lexicographic order.
The demo uses the functions from composition-nz-restrpref.h (fxt/src/comb/composition-nz-restrpref.h)
composition-nz-rl-out.txt is the output of composition-nz-rl-demo.cc.
Compositions of n into positive parts, run-length order. Loopless algorithm.
The demo uses the functions from composition-nz-rl.h (fxt/src/comb/composition-nz-rl.h) composition-nz-rank.h (fxt/src/comb/composition-nz-rank.h)
composition-nz-smooth-out.txt is the output of composition-nz-smooth-demo.cc.
Smooth compositions: compositions of n with first and last part 1 and maximal absolute difference 1 between consecutive parts. Lexicographic order. Same as "1-dimensional sand piles", see OEIS
sequence A186085.
The demo uses the functions from composition-nz-smooth.h (fxt/src/comb/composition-nz-smooth.h)
composition-nz-sorts-out.txt is the output of composition-nz-sorts-demo.cc.
Compositions of n into positive parts of s sorts. Lexicographic order: major order by sorts, minor by parts, where comparison proceeds as sort1, part1; sort2, part2; sort3, part3, etc. Loopless
algorithm. Cf. OEIS sequences (compositions of n into parts of s kinds): A011782 (s=1), A025192 (s=2), A002001 (s=3), A005054 (s=4), A052934 (s=5), A055272 (s=6), A055274 (s=7), and A055275 (s=8).
The demo uses the functions from composition-nz-sorts.h (fxt/src/comb/composition-nz-sorts.h)
composition-nz-sorts2-out.txt is the output of composition-nz-sorts2-demo.cc.
Compositions of n into positive parts of s sorts. Lexicographic order: major order by parts, minor by sorts, where comparison proceeds as part1, sort1; part2, sort2; part3, sort3, etc. Loopless
algorithm. Cf. OEIS sequences (compositions of n into parts of s kinds): A011782 (s=1), A025192 (s=2), A002001 (s=3), A005054 (s=4), A052934 (s=5), A055272 (s=6), A055274 (s=7), and A055275 (s=8).
The demo uses the functions from composition-nz-sorts2.h (fxt/src/comb/composition-nz-sorts2.h)
composition-nz-sorts2-pp-out.txt is the output of composition-nz-sorts2-pp-demo.cc.
Compositions of n into positive parts of s[k] sorts for part (size) k. Lexicographic order: major order by parts, minor by sorts, where comparison proceeds as part1, sort1; part2, sort2; part3,
sort3, etc. Loopless algorithm. Cf. OEIS sequence A088305 compositions of n into one sort of 1's, two sorts of 2's, ..., k sorts of k's. Cf. OEIS sequences (compositions of n into (all) parts of s
kinds): A011782 (s=1), A025192 (s=2), A002001 (s=3), A005054 (s=4), A052934 (s=5), A055272 (s=6), A055274 (s=7), and A055275 (s=8).
The demo uses the functions from composition-nz-sorts2-pp.h (fxt/src/comb/composition-nz-sorts2-pp.h)
composition-nz-stats-out.txt is the output of composition-nz-stats-demo.cc.
Statistics for compositions into non-zero parts. Cf. the following OEIS sequences: A048004: compositions by value of largest part. A105147: compositions by value of smallest part. A119473: compositions by number of even values. A100749: compositions by number of odd values. A105422: compositions by number of ones. A106356: compositions by number of flat steps. A238130: compositions by number of non-flat steps. A225084: compositions by maximal ascent.
The demo uses the functions from composition-nz.h (fxt/src/comb/composition-nz.h) word-stats.h (fxt/src/comb/word-stats.h)
composition-nz-subset-lex-out.txt is the output of composition-nz-subset-lex-demo.cc.
Compositions of n into positive parts, subset-lex order. Loopless generation.
The demo uses the functions from composition-nz-subset-lex.h (fxt/src/comb/composition-nz-subset-lex.h)
composition-nz-superdiagonal-out.txt is the output of composition-nz-superdiagonal-demo.cc.
Superdiagonal compositions: compositions a[1] + a[2] + ... + a[m] = n such that a[k] >= k. Lexicographic order. Same as: superdiagonal bargraphs, see Emeric Deutsch, Emanuele Munarini, Simone
Rinaldi: "Skew Dyck paths, area, and superdiagonal bargraphs", Journal of Statistical Planning and Inference, vol.140, no.6, pp.1550-1562, (June-2010). Cf. OEIS sequence A219282.
The demo uses the functions from composition-nz-superdiagonal.h (fxt/src/comb/composition-nz-superdiagonal.h)
composition-nz-upstep-out.txt is the output of composition-nz-upstep-demo.cc.
Compositions of n into positive parts, with limit on up-step. Lexicographic order. Cf. OEIS sequences A003116 (max up-step 1) and A224959 (max up-step 2). Max up-step 0 gives partitions as weakly
descending lists.
The demo uses the functions from composition-nz-upstep.h (fxt/src/comb/composition-nz-upstep.h)
composition-nz-weakly-unimodal-out.txt is the output of composition-nz-weakly-unimodal-demo.cc.
Weakly unimodal compositions, lexicographic order. Cf. OEIS sequence A001523.
The demo uses the functions from composition-nz-weakly-unimodal.h (fxt/src/comb/composition-nz-weakly-unimodal.h)
composition-rank-out.txt is the output of composition-rank-demo.cc.
Ranking and unranking compositions in lexicographic, Gray, and enup (two-close) order.
The demo uses the functions from composition-rank.h (fxt/src/comb/composition-rank.h) num-compositions.h (fxt/src/comb/num-compositions.h)
composition-unimodal-out.txt is the output of composition-unimodal-demo.cc.
Strongly unimodal compositions. Internal representation as list of parts in weakly ascending order, with each part except for the last of 2 sorts and no repeated parts of the same sort. Cf. OEIS
sequence A059618.
The demo uses the functions from composition-unimodal.h (fxt/src/comb/composition-unimodal.h)
conference-quadres-out.txt is the output of conference-quadres-demo.cc.
Conference and Hadamard matrices by quadratic residues.
The demo uses the functions from matrix.h (fxt/src/matrix/matrix.h) copy.h (fxt/src/aux1/copy.h)
cyclic-perm-out.txt is the output of cyclic-perm-demo.cc.
Generate all cyclic permutations in minimal-change order, CAT algorithm.
The demo uses the functions from cyclic-perm.h (fxt/src/comb/cyclic-perm.h) fact2cyclic.cc (fxt/src/comb/fact2cyclic.cc) mixedradix-gray.h (fxt/src/comb/mixedradix-gray.h)
debruijn-out.txt is the output of debruijn-demo.cc.
Generating De Bruijn sequences.
The demo uses the functions from debruijn.h (fxt/src/comb/debruijn.h) necklace.h (fxt/src/comb/necklace.h)
descent-rgs-out.txt is the output of descent-rgs-demo.cc.
Descent sequences (restricted growth strings, RGS), lexicographic order. A descent sequence is a sequence [d(1), d(2), ..., d(n)] where d(1)=0, d(k)>=0, and d(k) <= 1 + dsc([d(1), d(2), ..., d(k-1)]), where dsc(.) counts the number of descents of its argument. The number of length-n RGS is (OEIS sequence A225588) 1, 1, 2, 4, 9, 23, 67, 222, 832, 3501, 16412, 85062, 484013, ...
The demo uses the functions from descent-rgs.h (fxt/src/comb/descent-rgs.h)
descent-rgs-stats-out.txt is the output of descent-rgs-stats-demo.cc.
Statistics for descent sequences. Cf. the following OEIS sequences: A225624: triangle, length-n descent sequences with k-1 descents. A descent sequence is a sequence [d(1), d(2), ..., d(n)] where d(1)=0, d(k)>=0, and d(k) <= 1 + dsc([d(1), d(2), ..., d(k-1)]), where dsc(.) counts the number of descents of its argument.
The demo uses the functions from descent-rgs.h (fxt/src/comb/descent-rgs.h) word-stats.h (fxt/src/comb/word-stats.h)
dyck-gray-out.txt is the output of dyck-gray-demo.cc.
Gray code for k-ary Dyck words. Loopless algorithm, all changes are homogeneous.
The demo uses the functions from dyck-gray.h (fxt/src/comb/dyck-gray.h)
dyck-gray2-out.txt is the output of dyck-gray2-demo.cc.
Gray code for k-ary Dyck words. Loopless algorithm, homogeneous two-close changes.
The demo uses the functions from dyck-gray2.h (fxt/src/comb/dyck-gray2.h)
dyck-pref-out.txt is the output of dyck-pref-demo.cc.
k-ary Dyck words via prefix shifts.
The demo uses the functions from dyck-pref.h (fxt/src/comb/dyck-pref.h)
dyck-pref2-out.txt is the output of dyck-pref2-demo.cc.
k-ary Dyck words via prefix shifts (loopless generation).
The demo uses the functions from dyck-pref2.h (fxt/src/comb/dyck-pref2.h)
dyck-rgs-out.txt is the output of dyck-rgs-demo.cc.
All restricted growth strings (RGS) for k-ary Dyck words in lexicographic order: strings s[0,...,n-1] such that s[j] <= s[j-1] + (k-1), where k is the arity.
The demo uses the functions from dyck-rgs.h (fxt/src/comb/dyck-rgs.h)
dyck-rgs-subset-lex-out.txt is the output of dyck-rgs-subset-lex-demo.cc.
Restricted growth strings (RGS) for k-ary Dyck words in subset-lex order.
The demo uses the functions from dyck-rgs-subset-lex.h (fxt/src/comb/dyck-rgs-subset-lex.h)
fact2cyclic-out.txt is the output of fact2cyclic-demo.cc.
Generate all cyclic permutations from mixed radix numbers.
The demo uses the functions from fact2perm.h (fxt/src/comb/fact2perm.h) fact2cyclic.cc (fxt/src/comb/fact2cyclic.cc) mixedradix-modular-gray.h (fxt/src/comb/mixedradix-modular-gray.h) perminvert.h
(fxt/src/perm/perminvert.h) permq.h (fxt/src/perm/permq.h)
fact2perm-out.txt is the output of fact2perm-demo.cc.
Generate all permutations from mixed radix (factorial) numbers.
The demo uses the functions from fact2perm.h (fxt/src/comb/fact2perm.h) fact2perm.cc (fxt/src/comb/fact2perm.cc) mixedradix-lex.h (fxt/src/comb/mixedradix-lex.h) reverse.h (fxt/src/perm/reverse.h)
permcomplement.h (fxt/src/perm/permcomplement.h) perminvert.h (fxt/src/perm/perminvert.h)
fact2perm-rev-out.txt is the output of fact2perm-rev-demo.cc.
Generate all permutations from mixed radix (factorial) numbers.
The demo uses the functions from fact2perm.h (fxt/src/comb/fact2perm.h) fact2perm-rev.cc (fxt/src/comb/fact2perm-rev.cc) perminvert.h (fxt/src/perm/perminvert.h) perminvert.cc (fxt/src/perm/
perminvert.cc) mixedradix-lex.h (fxt/src/comb/mixedradix-lex.h)
fact2perm-rot-out.txt is the output of fact2perm-rot-demo.cc.
Generate all permutations from mixed radix (factorial) numbers.
The demo uses the functions from fact2perm.h (fxt/src/comb/fact2perm.h) fact2perm-rot.cc (fxt/src/comb/fact2perm-rot.cc) perminvert.h (fxt/src/perm/perminvert.h) perminvert.cc (fxt/src/perm/
perminvert.cc) mixedradix-lex.h (fxt/src/comb/mixedradix-lex.h)
fact2perm-swp-out.txt is the output of fact2perm-swp-demo.cc.
Generate all permutations from mixed radix (factorial) numbers.
The demo uses the functions from fact2perm.h (fxt/src/comb/fact2perm.h) fact2perm-swp.cc (fxt/src/comb/fact2perm-swp.cc) perminvert.h (fxt/src/perm/perminvert.h) perminvert.cc (fxt/src/perm/
perminvert.cc) mixedradix-lex.h (fxt/src/comb/mixedradix-lex.h)
ffact2kperm-out.txt is the output of ffact2kperm-demo.cc.
Generate all k-permutations of n elements from falling factorial numbers.
The demo uses the functions from fact2perm.h (fxt/src/comb/fact2perm.h) fact2perm.cc (fxt/src/comb/fact2perm.cc) mixedradix-lex.h (fxt/src/comb/mixedradix-lex.h)
fib-alt-gray-out.txt is the output of fib-alt-gray-demo.cc.
Gray codes for certain ternary words counted by Fibonacci numbers.
fibgray-rec-out.txt is the output of fibgray-rec-demo.cc.
Gray code for Fibonacci words, recursive CAT algorithm.
gexz-gray-out.txt is the output of gexz-gray-demo.cc.
Gray code for r-ary words where digit x is followed by x or more zeros.
hadamard-srs-out.txt is the output of hadamard-srs-demo.cc.
Hadamard matrices by maximum length shift register sequences (SRS).
The demo uses the functions from lfsr.h (fxt/src/bpol/lfsr.h) matrix.h (fxt/src/matrix/matrix.h) copy.h (fxt/src/aux1/copy.h)
hanoi-rec-out.txt is the output of hanoi-rec-demo.cc.
Towers of Hanoi, recursive algorithm.
hilbert-ndim-out.txt is the output of hilbert-ndim-demo.cc.
Fred Lunnon's iterative algorithm for the n-dimensional Hilbert curve.
The demo uses the functions from hilbert-ndim.h (fxt/src/comb/hilbert-ndim.h)
hilbert-ndim-rec-out.txt is the output of hilbert-ndim-rec-demo.cc.
Fred Lunnon's recursive algorithm for the n-dimensional Hilbert curve.
The demo uses the functions from hilbert-ndim-rec.h (fxt/src/comb/hilbert-ndim-rec.h)
involution-stats-out.txt is the output of involution-stats-demo.cc.
Statistics for involutions (self-inverse permutations): Cf. the following OEIS sequences: A099174, A161126, A238889, and A239145.
The demo uses the functions from perm-involution.h (fxt/src/comb/perm-involution.h) word-stats.h (fxt/src/comb/word-stats.h)
involution-zero-map-rgs-out.txt is the output of involution-zero-map-rgs-demo.cc.
Restricted growth strings (RGS): each digit a[k] is either zero or a[k] < k with a[a[k]] == 0, and no nonzero value occurs more than once in the RGS. Same as: maps from {1, 2, 3, ..., n} to {0, 1, 2, 3, ..., n} such that f(x) < x, f(f(x)) == 0, and there is no t!=x with f(t) == f(x) != 0. Lexicographic order. Cf. OEIS sequence A000085.
The demo uses the functions from involution-zero-map-rgs.h (fxt/src/comb/involution-zero-map-rgs.h) is-zero-map-rgs.h (fxt/src/comb/is-zero-map-rgs.h)
kperm-gray-out.txt is the output of kperm-gray-demo.cc.
Generate all k-permutations of n elements in minimal-change order via Gray code for falling factorial numbers, CAT algorithm. Same as: k-prefixes of permutations of n elements. Same as: arrangements
of k out of n elements.
The demo uses the functions from kperm-gray.h (fxt/src/comb/kperm-gray.h)
kperm-lex-out.txt is the output of kperm-lex-demo.cc.
Generate all k-permutations of n elements in lexicographic order. Same as: k-prefixes of permutations of n elements. Same as: arrangements of k out of n elements.
The demo uses the functions from kperm-lex.h (fxt/src/comb/kperm-lex.h)
kproducts-colex-out.txt is the output of kproducts-colex-demo.cc.
Generating all k-products of the n smallest primes via combinations in co-lexicographic order.
The demo uses the functions from combination-colex.h (fxt/src/comb/combination-colex.h) primes.h (fxt/src/mod/primes.h)
ksubset-gray-out.txt is the output of ksubset-gray-demo.cc.
k-subsets (kmin<=k<=kmax) in minimal-change order.
The demo uses the functions from ksubset-gray.h (fxt/src/comb/ksubset-gray.h)
ksubset-rec-out.txt is the output of ksubset-rec-demo.cc.
k-subsets where kmin<=k<=kmax in various orders. Recursive CAT algorithm.
The demo uses the functions from ksubset-rec.h (fxt/src/comb/ksubset-rec.h) ksubset-rec.cc (fxt/src/comb/ksubset-rec.cc)
ksubset-twoclose-out.txt is the output of ksubset-twoclose-demo.cc.
k-subsets (kmin<=k<=kmax) in two-close order. Recursive algorithm.
The demo uses the functions from ksubset-twoclose.h (fxt/src/comb/ksubset-twoclose.h)
map23-rgs-out.txt is the output of map23-rgs-demo.cc.
Restricted growth strings (RGS) for maps f: [n] -> [n] with f(x)<=x and f(f(x)) == f(f(f(x))). Lexicographic order. Cf. OEIS sequence A187761.
The demo uses the functions from map23-rgs.h (fxt/src/comb/map23-rgs.h)
maxrep-gray-out.txt is the output of maxrep-gray-demo.cc.
Gray code for generalized Fibonacci words, recursive CAT algorithm.
mixedradix-colex-out.txt is the output of mixedradix-colex-demo.cc.
Mixed radix counting, most significant digit changes most often.
The demo uses the functions from mixedradix-colex.h (fxt/src/comb/mixedradix-colex.h)
mixedradix-endo-out.txt is the output of mixedradix-endo-demo.cc.
Mixed radix counting: endo sequence (endo := "Even Numbers Down, Odd (numbers up)").
The demo uses the functions from mixedradix-endo.h (fxt/src/comb/mixedradix-endo.h) endo-enup.h (fxt/src/comb/endo-enup.h)
mixedradix-endo-gray-out.txt is the output of mixedradix-endo-gray-demo.cc.
Mixed radix counting: endo Gray sequence (endo := "Even Numbers Down, Odd (numbers up)").
The demo uses the functions from mixedradix-endo-gray.h (fxt/src/comb/mixedradix-endo-gray.h) endo-enup.h (fxt/src/comb/endo-enup.h)
mixedradix-gray-out.txt is the output of mixedradix-gray-demo.cc.
Mixed radix Gray code, CAT algorithm.
The demo uses the functions from mixedradix-gray.h (fxt/src/comb/mixedradix-gray.h)
mixedradix-gray2-out.txt is the output of mixedradix-gray2-demo.cc.
Mixed radix Gray code, loopless algorithm. Implementation following Knuth.
The demo uses the functions from mixedradix-gray2.h (fxt/src/comb/mixedradix-gray2.h)
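A hedged sketch of the loopless focus-pointer scheme as given by Knuth (variable names chosen here, not those of mixedradix-gray2.h): each step changes exactly one digit by +1 or -1, and the focus pointers f[] deliver the position of that digit in O(1).

    #include <cstdio>
    #include <vector>

    int main()
    {
        const int n = 3;
        std::vector<int> m = { 2, 3, 2 };   // the radices m[0], m[1], m[2]
        std::vector<int> a(n, 0);           // digits
        std::vector<int> d(n, 1);           // directions, +1 or -1
        std::vector<int> f(n + 1);          // focus pointers
        for (int j = 0; j <= n; ++j) f[j] = j;

        while (true)
        {
            for (int j = n - 1; j >= 0; --j) std::printf("%d", a[j]);  // visit
            std::printf("\n");
            int j = f[0];                   // digit to change, found in O(1)
            f[0] = 0;
            if (j == n) break;              // every digit is at its limit
            a[j] += d[j];
            if (a[j] == 0 || a[j] == m[j] - 1)  // digit reached a boundary:
            {
                d[j] = -d[j];               // reverse its direction and
                f[j] = f[j + 1];            // pass the focus onward
                f[j + 1] = j + 1;
            }
        }
        return 0;
    }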
mixedradix-gslex-alt-out.txt is the output of mixedradix-gslex-alt-demo.cc.
Mixed radix numbers in alternative gslex (generalized subset lex) order.
The demo uses the functions from mixedradix-gslex-alt.h (fxt/src/comb/mixedradix-gslex-alt.h)
mixedradix-gslex-alt2-out.txt is the output of mixedradix-gslex-alt2-demo.cc.
Mixed radix numbers in alternative gslex (generalized subset lex) order.
The demo uses the functions from mixedradix-gslex-alt2.h (fxt/src/comb/mixedradix-gslex-alt2.h)
mixedradix-gslex-out.txt is the output of mixedradix-gslex-demo.cc.
Mixed radix numbers in gslex (generalized subset lex) order.
The demo uses the functions from mixedradix-gslex.h (fxt/src/comb/mixedradix-gslex.h)
mixedradix-gslex2-out.txt is the output of mixedradix-gslex2-demo.cc.
Mixed radix numbers in gslex (generalized subset lex) order. Loopless algorithm.
The demo uses the functions from mixedradix-gslex2.h (fxt/src/comb/mixedradix-gslex2.h)
mixedradix-lex-out.txt is the output of mixedradix-lex-demo.cc.
Mixed radix counting.
The demo uses the functions from mixedradix-lex.h (fxt/src/comb/mixedradix-lex.h)
mixedradix-modular-gray-out.txt is the output of mixedradix-modular-gray-demo.cc.
Modular mixed radix Gray code. Implementation following Knuth (loopless algorithm).
The demo uses the functions from mixedradix-modular-gray.h (fxt/src/comb/mixedradix-modular-gray.h)
mixedradix-modular-gray2-out.txt is the output of mixedradix-modular-gray2-demo.cc.
Modular mixed radix Gray code, CAT algorithm.
The demo uses the functions from mixedradix-modular-gray2.h (fxt/src/comb/mixedradix-modular-gray2.h)
mixedradix-naf-out.txt is the output of mixedradix-naf-demo.cc.
Mixed radix non-adjacent forms (NAF).
The demo uses the functions from mixedradix-naf.h (fxt/src/comb/mixedradix-naf.h)
mixedradix-naf-gray-out.txt is the output of mixedradix-naf-gray-demo.cc.
Gray code for mixed radix non-adjacent forms (NAF).
The demo uses the functions from mixedradix-naf-gray.h (fxt/src/comb/mixedradix-naf-gray.h)
mixedradix-naf-subset-lex-out.txt is the output of mixedradix-naf-subset-lex-demo.cc.
Mixed radix non-adjacent forms (NAF) in subset-lex order. Loopless generation.
The demo uses the functions from mixedradix-naf-subset-lex.h (fxt/src/comb/mixedradix-naf-subset-lex.h)
mixedradix-restrpref-out.txt is the output of mixedradix-restrpref-demo.cc.
Mixed radix counting with restricted prefixes.
The demo uses the functions from mixedradix-restrpref.h (fxt/src/comb/mixedradix-restrpref.h)
mixedradix-rfact-out.txt is the output of mixedradix-rfact-demo.cc.
Counting in rising factorial base, special case of mixed radix counting.
The demo uses the functions from mixedradix-rfact.h (fxt/src/comb/mixedradix-rfact.h)
mixedradix-sl-gray-out.txt is the output of mixedradix-sl-gray-demo.cc.
Mixed radix numbers in a minimal-change order related to subset-lex order ("SL-Gray" order).
The demo uses the functions from mixedradix-sl-gray.h (fxt/src/comb/mixedradix-sl-gray.h)
mixedradix-sl-gray-rec-out.txt is the output of mixedradix-sl-gray-rec-demo.cc.
Recursive generation of mixed radix numbers in a minimal-change order related to subset-lex order ("SL-Gray" order).
mixedradix-sod-lex-out.txt is the output of mixedradix-sod-lex-demo.cc.
Mixed radix numbers with fixed sum of digits. Also: k-subsets (combinations) of a multiset.
The demo uses the functions from mixedradix-sod-lex.h (fxt/src/comb/mixedradix-sod-lex.h)
mixedradix-subset-lex-out.txt is the output of mixedradix-subset-lex-demo.cc.
Mixed radix numbers in subset-lex order.
The demo uses the functions from mixedradix-subset-lex.h (fxt/src/comb/mixedradix-subset-lex.h)
mixedradix-subset-lexrev-out.txt is the output of mixedradix-subset-lexrev-demo.cc.
Mixed radix numbers in reversed subset-lexicographic order.
The demo uses the functions from mixedradix-subset-lexrev.h (fxt/src/comb/mixedradix-subset-lexrev.h)
monotonicgray-out.txt is the output of monotonicgray-demo.cc.
Monotonic Gray path (Savage/Winkler), as given by Knuth.
The demo uses the functions from monotonic-gray.h (fxt/src/comb/monotonic-gray.h) monotonic-gray.cc (fxt/src/comb/monotonic-gray.cc)
motzkin-nonflat-rgs-lex-out.txt is the output of motzkin-nonflat-rgs-lex-demo.cc.
Motzkin (nonflat) restricted growth strings (RGS). Same as: Catalan RGS with no flat steps. Cf. OEIS sequences A086246 and A001006.
The demo uses the functions from motzkin-nonflat-rgs-lex.h (fxt/src/comb/motzkin-nonflat-rgs-lex.h)
motzkin-path-lex-out.txt is the output of motzkin-path-lex-demo.cc.
Motzkin paths in lexicographic order, CAT algorithm. Cf. OEIS sequence A001006.
The demo uses the functions from motzkin-path-lex.h (fxt/src/comb/motzkin-path-lex.h)
motzkin-rgs-lex-out.txt is the output of motzkin-rgs-lex-demo.cc.
Motzkin restricted growth strings (RGS): words a[0,1,...,n-1] where a[0] = 0, a[k] <= a[k-1] + 1, and there are no two consecutive up-steps. Lexicographic order. Cf. OEIS sequence A001006.
The demo uses the functions from motzkin-rgs-lex.h (fxt/src/comb/motzkin-rgs-lex.h) paren-string-to-rgs.h (fxt/src/comb/paren-string-to-rgs.h)
motzkin-step-rgs-lex-out.txt is the output of motzkin-step-rgs-lex-demo.cc.
Motzkin step RGS (restricted growth strings), lexicographic order. RGS are a[] such that a[k] >= a[k-1] (weakly ascending), a[k] <= k, and a[k] - a[k-1] != 1 (no increments by 1). Same as: rising factorial numbers with sorted digits in which increments by 1 are disallowed. Cf. OEIS sequence A001006.
The demo uses the functions from motzkin-step-rgs-lex.h (fxt/src/comb/motzkin-step-rgs-lex.h) is-motzkin-step-rgs.h (fxt/src/comb/is-motzkin-step-rgs.h)
mpartition-out.txt is the output of mpartition-demo.cc.
Integer partitions of n into m parts. Same as: compositions into m weakly ascending parts. Cf. OEIS sequence A008284.
The demo uses the functions from mpartition.h (fxt/src/comb/mpartition.h)
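The underlying count is the classic two-way recursion: a partition of n into m parts either contains a part 1 (remove it: p(n-1, m-1)) or has all parts >= 2 (subtract 1 from every part: p(n-m, m)). A tiny sketch printing the triangle A008284:

    #include <cstdio>

    // p(n,m): partitions of n into exactly m positive parts.
    unsigned long p(int n, int m)
    {
        if (m == 0) return (n == 0);
        if (n < m)  return 0;
        return p(n - 1, m - 1)   // partitions containing a part 1
             + p(n - m, m);      // all parts >= 2: subtract 1 from each
    }

    int main()
    {
        for (int n = 1; n <= 10; ++n)  // rows of the triangle A008284
        {
            for (int m = 1; m <= n; ++m) std::printf("%lu ", p(n, m));
            std::printf("\n");
        }
        return 0;
    }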
mset-ksubset-out.txt is the output of mset-ksubset-demo.cc.
k-subsets (combinations) of a multiset. Essentially the same as mixed radix numbers with fixed sum of digits.
The demo uses the functions from mixedradix-sod-lex.h (fxt/src/comb/mixedradix-sod-lex.h)
mset-perm-gray-out.txt is the output of mset-perm-gray-demo.cc.
All multiset permutations in minimal-change order (Fred Lunnon's Gray code). Same as: all strings with fixed content.
The demo uses the functions from mset-perm-gray.h (fxt/src/comb/mset-perm-gray.h)
mset-perm-lex-out.txt is the output of mset-perm-lex-demo.cc.
All multiset permutations in lexicographic order, iterative generation. Same as: all strings with fixed content.
The demo uses the functions from mset-perm-lex.h (fxt/src/comb/mset-perm-lex.h)
mset-perm-lex-rec-out.txt is the output of mset-perm-lex-rec-demo.cc.
All multiset permutations in lexicographic order, recursive generation. Same as: all strings with fixed content.
mset-perm-lex-rec2-out.txt is the output of mset-perm-lex-rec2-demo.cc.
All multiset permutations in lexicographic order. Recursive generation using a linked list. Same as: all strings with fixed content.
The demo uses the functions from mset-perm-lex-rec.h (fxt/src/comb/mset-perm-lex-rec.h) mset-perm-lex-rec.cc (fxt/src/comb/mset-perm-lex-rec.cc)
mset-perm-pref-out.txt is the output of mset-perm-pref-demo.cc.
All multiset permutations via prefix shifts ("cool-lex" order). Same as: all strings with fixed content.
The demo uses the functions from mset-perm-pref.h (fxt/src/comb/mset-perm-pref.h)
mset-subset-lex-out.txt is the output of mset-subset-lex-demo.cc.
Subsets of a multiset in lexicographic order with respect to subsets, generated as (multi-)delta sets (that is, mixed radix numbers).
The demo uses the functions from mixedradix-subset-lex.h (fxt/src/comb/mixedradix-subset-lex.h)
naf-gray-rec-out.txt is the output of naf-gray-rec-demo.cc.
Gray code for sparse signed binary representation (nonadjacent form, NAF). Recursive CAT algorithm.
naf-pos-rec-out.txt is the output of naf-pos-rec-demo.cc.
Near Gray code for sparse signed binary representations (nonadjacent form, NAF) of the positive numbers. Recursive CAT algorithm.
necklace-cat-out.txt is the output of necklace-cat-demo.cc.
Generate pre-necklaces as described by Cattell, Ruskey, Sawada, Miers, Serra. Recursive CAT algorithm.
necklace-out.txt is the output of necklace-demo.cc.
Generate all pre-necklaces, necklaces, and Lyndon words with a given number of colors.
The demo uses the functions from necklace.h (fxt/src/comb/necklace.h)
necklace-fkm-out.txt is the output of necklace-fkm-demo.cc.
Fredericksen, Kessler, Maiorana (FKM) algorithm for generating necklaces.
necklace-gray-out.txt is the output of necklace-gray-demo.cc.
Generate binary Lyndon words ordered so that only few changes between successive elements occur (note: in general not a Gray code). Recursive CAT algorithm.
necklace-gray3-out.txt is the output of necklace-gray3-demo.cc.
Generate a Gray code (max 3 changes per update) for necklaces.
necklace-sigma-tau-out.txt is the output of necklace-sigma-tau-demo.cc.
Necklaces via cyclic shifts and complements (sigma-tau search).
The demo uses the functions from bitrotate.h (fxt/src/bits/bitrotate.h) bitcyclic-minmax.h (fxt/src/bits/bitcyclic-minmax.h) print-bin.h (fxt/src/bits/print-bin.h)
necklaces-via-gray-leaders-out.txt is the output of necklaces-via-gray-leaders-demo.cc.
Cycle leaders for the Gray permutation, converted to necklaces.
The demo uses the functions from gray-cycle-leaders.h (fxt/src/comb/gray-cycle-leaders.h) bittransforms.h (fxt/src/bits/bittransforms.h) bitcyclic-period.h (fxt/src/bits/bitcyclic-period.h)
no111-gray-out.txt is the output of no111-gray-demo.cc.
Gray code for binary words with no substring 111, recursive CAT algorithm.
no1111-gray-out.txt is the output of no1111-gray-demo.cc.
Gray code for binary words with no substring 1111, recursive CAT algorithm.
no1x1-gray-out.txt is the output of no1x1-gray-demo.cc.
Gray code for binary words with no substring 1x1, recursive CAT algorithm.
no1xy1-gray-out.txt is the output of no1xy1-gray-demo.cc.
Gray code for binary words with no substring 1xy1, recursive CAT algorithm.
ntnz-gray-out.txt is the output of ntnz-gray-demo.cc.
Gray code for strings with no two consecutive nonzero digits. These are non-adjacent forms (NAF).
ntz-gray-out.txt is the output of ntz-gray-demo.cc.
Gray code for words without two consecutive zeros, recursive CAT algorithm.
num-partitions-out.txt is the output of num-partitions-demo.cc.
Number of integer partitions.
paren-out.txt is the output of paren-demo.cc.
Parentheses strings, co-lexicographic order. Representation as list of positions of opening parenthesis.
The demo uses the functions from paren.h (fxt/src/comb/paren.h)
paren-gray-out.txt is the output of paren-gray-demo.cc.
Parentheses strings in a homogeneous minimal-change order.
The demo uses the functions from paren-gray.h (fxt/src/comb/paren-gray.h)
paren-gray-rec-out.txt is the output of paren-gray-rec-demo.cc.
Gray code for paren strings via restricted growth strings (RGS), recursive algorithm.
paren-lex-out.txt is the output of paren-lex-demo.cc.
Parentheses strings, lexicographic order. Representation as list of positions of opening parenthesis.
The demo uses the functions from paren-lex.h (fxt/src/comb/paren-lex.h) is-paren-string.h (fxt/src/comb/is-paren-string.h)
paren-pref-out.txt is the output of paren-pref-demo.cc.
Generate all well-formed pairs of parentheses by prefix shifts.
The demo uses the functions from paren-pref.h (fxt/src/comb/paren-pref.h)
partition-2fall-desc-out.txt is the output of partition-2fall-desc-demo.cc.
Partitions a[1] + a[2] + ... + a[m] = n such that 2*a[k] <= a[k-1]. Representation as weakly descending list of parts. Lexicographic order. Cf. OEIS sequence A000929.
The demo uses the functions from partition-2fall-desc.h (fxt/src/comb/partition-2fall-desc.h)
partition-asc-2rep-out.txt is the output of partition-asc-2rep-demo.cc.
Integer partitions where parts have multiplicity at most 2. Representation as weakly ascending list of parts. Lexicographic order. Cf. OEIS sequence A000726.
The demo uses the functions from partition-asc-2rep.h (fxt/src/comb/partition-asc-2rep.h)
partition-asc-2rep-subset-lex-out.txt is the output of partition-asc-2rep-subset-lex-demo.cc.
Integer partitions where parts have multiplicity at most 2. Representation as weakly ascending list of parts. Subset-lex order. Loopless algorithm. Cf. OEIS sequence A000726.
The demo uses the functions from partition-asc-2rep-subset-lex.h (fxt/src/comb/partition-asc-2rep-subset-lex.h)
partition-asc-out.txt is the output of partition-asc-demo.cc.
Integer partitions as weakly ascending list of parts. Same as: compositions into weakly ascending parts. Lexicographic order. Cf. OEIS sequence A000041.
The demo uses the functions from partition-asc.h (fxt/src/comb/partition-asc.h)
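A well-known way to produce exactly this listing is the accelAsc scheme of Kelleher and O'Sullivan, sketched here (the class in partition-asc.h has its own loop): keep the ascending prefix, split the remainder into copies of the smallest allowed part, and let the last part absorb what is left.

    #include <cstdio>
    #include <vector>

    int main()
    {
        const int n = 7;
        std::vector<int> a(n + 1, 0);  // a[0] = 0 is a sentinel
        int k = 1;
        a[1] = n;                      // seed; the loop emits 1+1+...+1 first
        while (k != 0)
        {
            int x = a[k - 1] + 1;      // smallest value allowed at position k-1
            int y = a[k] - 1;          // remainder to distribute
            --k;
            while (x <= y) { a[k] = x; y -= x; ++k; }
            a[k] = x + y;              // the last part absorbs the rest
            for (int i = 0; i <= k; ++i) std::printf("%d ", a[i]);  // visit
            std::printf("\n");
        }
        return 0;
    }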
partition-asc-perim-out.txt is the output of partition-asc-perim-demo.cc.
Partitions into parts of 2 sorts where sorts are oscillating. These are conjectured to be equinumerous with non-empty sets of non-negative integers with perimeter n, as defined in OEIS sequence A182372. Representation as weakly ascending lists. Lexicographic order: major order by sorts, minor by parts.
The demo uses the functions from partition-asc-perim.h (fxt/src/comb/partition-asc-perim.h)
partition-asc-sorts-out.txt is the output of partition-asc-sorts-demo.cc.
Partitions into parts of s sorts, as weakly ascending lists. Lexicographic order: major order by sorts, minor by parts, where comparison proceeds as sort1, part1; sort2, part2; sort3, part3, etc. Cf.
OEIS sequences (partitions of n into parts of s kinds): A000041 (s=1), A000712 (s=2), A000716 (s=3), A023003 (s=4), A023004 (s=5), A023005 (s=6), A023006 (s=7), and A023007 (s=8).
The demo uses the functions from partition-asc-sorts.h (fxt/src/comb/partition-asc-sorts.h)
partition-asc-sorts2-out.txt is the output of partition-asc-sorts2-demo.cc.
Partitions into parts of s sorts, as weakly ascending lists. Lexicographic order: major order by parts, minor by sorts, where comparison proceeds as part1, sort1; part2, sort2; part3, sort3, etc. Cf.
OEIS sequences (partitions of n into parts of s kinds): A000041 (s=1), A000712 (s=2), A000716 (s=3), A023003 (s=4), A023004 (s=5), A023005 (s=6), A023006 (s=7), and A023007 (s=8).
The demo uses the functions from partition-asc-sorts2.h (fxt/src/comb/partition-asc-sorts2.h)
partition-asc-sorts2-pp-out.txt is the output of partition-asc-sorts2-pp-demo.cc.
Partitions into parts of s[k] sorts for part (size) k. Representation as weakly ascending lists. Lexicographic order: major order by parts, minor by sorts, where comparison proceeds as part1, sort1;
part2, sort2; part3, sort3, etc. Cf. OEIS sequence A000219 (planar partitions). Cf. OEIS sequences (partitions of n into parts of s kinds): A000041 (s=1), A000712 (s=2), A000716 (s=3), A023003 (s=4),
A023004 (s=5), A023005 (s=6), A023006 (s=7), and A023007 (s=8).
The demo uses the functions from partition-asc-sorts2-pp.h (fxt/src/comb/partition-asc-sorts2-pp.h)
partition-asc-stats-out.txt is the output of partition-asc-stats-demo.cc.
Statistics for partitions (as weakly ascending lists of parts). Cf. the following OEIS sequences: A008284: partitions by value of largest part. A026794: partitions by value of smallest part. A116482: partitions by number of even values. A103919: partitions by number of odd values. A116598: partitions by number of ones. A133121: partitions by number of flat steps. A116608: partitions by number of non-flat steps (ascents).
The demo uses the functions from partition-asc.h (fxt/src/comb/partition-asc.h) word-stats.h (fxt/src/comb/word-stats.h)
partition-asc-subset-lex-out.txt is the output of partition-asc-subset-lex-demo.cc.
Partitions of n into positive parts, subset-lex order. Loopless algorithm.
The demo uses the functions from partition-asc-subset-lex.h (fxt/src/comb/partition-asc-subset-lex.h)
partition-binary-asc-out.txt is the output of partition-binary-asc-demo.cc.
Binary partitions as weakly ascending list of parts. Same as: compositions into weakly ascending parts that are powers of 2. Lexicographic order. Cf. OEIS sequences A018819 and A000123.
The demo uses the functions from partition-binary-asc.h (fxt/src/comb/partition-binary-asc.h)
partition-binary-desc-out.txt is the output of partition-binary-desc-demo.cc.
Binary partitions as weakly descending list of parts. Same as: compositions into weakly descending parts that are powers of 2. Lexicographic order. Cf. OEIS sequences A018819 and A000123.
The demo uses the functions from partition-binary-desc.h (fxt/src/comb/partition-binary-desc.h)
partition-out.txt is the output of partition-demo.cc.
Generate all integer partitions, iterative algorithm.
The demo uses the functions from partition.h (fxt/src/comb/partition.h)
partition-desc-bb-out.txt is the output of partition-desc-bb-demo.cc.
Integer partitions as weakly descending list of parts, with bounds for size of parts and number of parts. Lexicographic order.
The demo uses the functions from partition-desc-bb.h (fxt/src/comb/partition-desc-bb.h)
partition-desc-out.txt is the output of partition-desc-demo.cc.
Integer partitions as weakly descending list of parts. Same as: compositions into weakly descending parts. Cf. OEIS sequence A000041.
The demo uses the functions from partition-desc.h (fxt/src/comb/partition-desc.h)
partition-dist-asc-out.txt is the output of partition-dist-asc-demo.cc.
Integer partitions into distinct parts as weakly ascending list of parts, lexicographic order. Same as: compositions into distinct weakly ascending parts. Cf. OEIS sequence A000009.
The demo uses the functions from partition-dist-asc.h (fxt/src/comb/partition-dist-asc.h)
partition-dist-asc-stats-out.txt is the output of partition-dist-asc-stats-demo.cc.
Statistics for partitions into distinct parts (as weakly ascending lists of parts). Cf. the following OEIS sequences: A026836, A026821, A060016, A116679, and A116675.
The demo uses the functions from partition-dist-asc.h (fxt/src/comb/partition-dist-asc.h) word-stats.h (fxt/src/comb/word-stats.h)
partition-dist-asc-subset-lex-out.txt is the output of partition-dist-asc-subset-lex-demo.cc.
Integer partitions into distinct parts. Representation as list of parts in increasing order. Subset-lexicographic order: the length of consecutive partitions changes by at most one, and only the last two positions at the end of a partition change. Same as: compositions into distinct increasing parts. Loopless algorithm. Cf. OEIS sequence A000009.
The demo uses the functions from partition-dist-asc-subset-lex.h (fxt/src/comb/partition-dist-asc-subset-lex.h)
partition-dist-d-asc-out.txt is the output of partition-dist-d-asc-demo.cc.
Integer partitions such that parts differ by at least d. Representation as list of parts in increasing order. Lexicographic order. Cf. OEIS sequences A000041 (all partitions; d=0), A000009 (distinct
parts; d=1), A003114 (d=2), A025157 (d=3), A025158 (d=4), A025159 (d=5), A025160 (d=6), A025161 (d=7), and A025162 (d=8).
The demo uses the functions from partition-dist-d-asc.h (fxt/src/comb/partition-dist-d-asc.h)
partition-dist-desc-out.txt is the output of partition-dist-desc-demo.cc.
Partitions into distinct parts as decreasing list of parts. Same as: compositions into distinct decreasing parts. Lexicographic order. Cf. OEIS sequence A000009.
The demo uses the functions from partition-dist-desc.h (fxt/src/comb/partition-dist-desc.h)
partition-gen-out.txt is the output of partition-gen-demo.cc.
Generate all integer partitions, with parts repeated at most r times.
The demo uses the functions from partition-gen.h (fxt/src/comb/partition-gen.h)
partition-nonsquashing-desc-out.txt is the output of partition-nonsquashing-desc-demo.cc.
Non-squashing partitions as weakly descending list of parts. A non-squashing partition of n is a partition a[1] + a[2] + ... + a[m] = n such that a[k] >= sum(j=k+1..m, a[j] ). Lexicographic order.
See: N. J. A. Sloane, James A. Sellers: "On Non-Squashing Partitions", arXiv:math/0312418 [math.CO], (22-December-2003). Cf. OEIS sequences A018819 and A000123.
The demo uses the functions from partition-nonsquashing-desc.h (fxt/src/comb/partition-nonsquashing-desc.h)
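The definition yields a one-line recursion: the first (largest) part p must satisfy p >= n - p, and the remainder must again be non-squashing (the weakly descending condition then holds automatically, since every later part is at most the remaining sum). A sketch (nsq is a name invented here); the counts agree with the binary partition numbers A018819.

    #include <cstdio>

    // Count non-squashing partitions of n: first part p >= (sum of the rest).
    unsigned long nsq(int n)
    {
        if (n == 0) return 1;          // the empty partition
        unsigned long s = 0;
        for (int p = (n + 1) / 2; p <= n; ++p)  // p >= n - p
            s += nsq(n - p);
        return s;
    }

    int main()
    {
        for (int n = 0; n <= 12; ++n)  // expect 1, 1, 2, 2, 4, 4, 6, 6, 10, 10, 14, 14, 20
            std::printf("n=%d: %lu\n", n, nsq(n));
        return 0;
    }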
partition-odd-asc-out.txt is the output of partition-odd-asc-demo.cc.
Integer partitions into odd parts as weakly ascending list of parts. Same as: compositions into weakly ascending odd parts. Cf. OEIS sequence A000009.
The demo uses the functions from partition-odd-asc.h (fxt/src/comb/partition-odd-asc.h)
partition-odd-asc-stats-out.txt is the output of partition-odd-asc-stats-demo.cc.
Statistics for partitions into odd parts (as weakly ascending lists of parts). Cf. the following OEIS sequences: A152140, A097304, A116799, A116856, A117408, A115604, A116663, and A115604.
The demo uses the functions from partition-odd-asc.h (fxt/src/comb/partition-odd-asc.h) word-stats.h (fxt/src/comb/word-stats.h)
partition-odd-asc-subset-lex-out.txt is the output of partition-odd-asc-subset-lex-demo.cc.
Partitions of n into odd parts, subset-lex order. Loopless algorithm. Cf. OEIS sequence A000009.
The demo uses the functions from partition-odd-asc-subset-lex.h (fxt/src/comb/partition-odd-asc-subset-lex.h)
partition-odd-desc-out.txt is the output of partition-odd-desc-demo.cc.
Integer partitions into odd parts as weakly descending list of parts. Same as: compositions into weakly descending odd parts. Cf. OEIS sequence A000009.
The demo uses the functions from partition-odd-desc.h (fxt/src/comb/partition-odd-desc.h)
partition-odd-nonsquashing-desc-out.txt is the output of partition-odd-nonsquashing-desc-demo.cc.
Non-squashing partitions into odd parts as weakly descending list of parts. A non-squashing partition of n is a partition a[1] + a[2] + ... + a[m] = n such that a[k] >= sum(j=k+1..m, a[j] ).
Lexicographic order. Cf. OEIS sequence A187821.
The demo uses the functions from partition-odd-nonsquashing-desc.h (fxt/src/comb/partition-odd-nonsquashing-desc.h)
partition-rgs-lex-out.txt is the output of partition-rgs-lex-demo.cc.
Restricted growth strings (RGS) for partitions as descending lists, lexicographic order. Same as: least Young tableaux (as RGS) with fixed shape (partition). Cf. OEIS sequence A000041.
The demo uses the functions from partition-rgs-lex.h (fxt/src/comb/partition-rgs-lex.h) is-partition-rgs.h (fxt/src/comb/is-partition-rgs.h)
partition-s-desc-out.txt is the output of partition-s-desc-demo.cc.
S-partitions: integer partitions into parts 2^n-1. Representation as weakly descending list of parts. Cf. OEIS sequence A000929.
The demo uses the functions from partition-s-desc.h (fxt/src/comb/partition-s-desc.h)
partition-strongly-decr-desc-out.txt is the output of partition-strongly-decr-desc-demo.cc.
Strongly decreasing partitions as list of parts. A strongly decreasing partition of n is a partition a[1] + a[2] + ... + a[m] = n such that a[k] > sum(j=k+1..m, a[j] ). Lexicographic order. Cf. OEIS
sequences A040039 and A033485.
The demo uses the functions from partition-strongly-decr-desc.h (fxt/src/comb/partition-strongly-decr-desc.h)
pascal-out.txt is the output of pascal-demo.cc.
Pascal triangle. Cf. OEIS sequence A007318.
pellgen-gray-out.txt is the output of pellgen-gray-demo.cc.
Gray code for generalized Pell words, recursive CAT algorithm.
pellgray-rec-out.txt is the output of pellgray-rec-demo.cc.
Gray code for Pell words, recursive CAT algorithm.
perm-colex-out.txt is the output of perm-colex-demo.cc.
Generate all permutations in co-lexicographic (colex) order.
The demo uses the functions from perm-colex.h (fxt/src/comb/perm-colex.h)
perm-derange-out.txt is the output of perm-derange-demo.cc.
Generate all permutations in derangement order.
The demo uses the functions from perm-derange.h (fxt/src/comb/perm-derange.h) perm-trotter.h (fxt/src/comb/perm-trotter.h)
perm-dist1-gray-out.txt is the output of perm-dist1-gray-demo.cc.
Gray code for permutations where i-1<=p(i)<=i+1 for all i, generated via Gray code for falling factorial numbers that are Fibonacci numbers.
The demo uses the functions from fact2perm.h (fxt/src/comb/fact2perm.h) perminvert.h (fxt/src/perm/perminvert.h) reverse.h (fxt/src/perm/reverse.h)
perm-genus-out.txt is the output of perm-genus-demo.cc.
Genus of all permutations of n elements. Print parenthesis strings for permutations of genus zero. Cf. OEIS sequence A177267.
The demo uses the functions from perm-genus.h (fxt/src/perm/perm-genus.h) perm-lex-inv.h (fxt/src/comb/perm-lex-inv.h)
perm-gray-ffact-out.txt is the output of perm-gray-ffact-demo.cc.
Generate all permutations in minimal-change order via Gray code for falling factorial numbers, CAT algorithm.
The demo uses the functions from perm-gray-ffact.h (fxt/src/comb/perm-gray-ffact.h)
perm-gray-ffact2-out.txt is the output of perm-gray-ffact2-demo.cc.
Generate all permutations in minimal-change order via Gray code for falling factorial numbers, loopless algorithm.
The demo uses the functions from perm-gray-ffact2.h (fxt/src/comb/perm-gray-ffact2.h) mixedradix-gray2.h (fxt/src/comb/mixedradix-gray2.h)
perm-gray-lipski-out.txt is the output of perm-gray-lipski-demo.cc.
Four Gray codes for permutations, CAT algorithm.
The demo uses the functions from perm-gray-lipski.h (fxt/src/comb/perm-gray-lipski.h)
perm-gray-rfact-out.txt is the output of perm-gray-rfact-demo.cc.
Generate all permutations in minimal-change order via Gray code for rising factorial numbers, CAT algorithm.
The demo uses the functions from perm-gray-rfact.h (fxt/src/comb/perm-gray-rfact.h) mixedradix-gray.h (fxt/src/comb/mixedradix-gray.h)
perm-gray-rot1-out.txt is the output of perm-gray-rot1-demo.cc.
Generate all permutations in minimal-change order such that in the last permutation the first e elements are cyclically rotated by one, where e is the greatest even number <=n.
The demo uses the functions from perm-gray-rot1.h (fxt/src/comb/perm-gray-rot1.h) mixedradix-gray.h (fxt/src/comb/mixedradix-gray.h)
perm-gray-wells-out.txt is the output of perm-gray-wells-demo.cc.
Two Gray codes for permutations (Wells' order and a variant of it), CAT algorithm.
The demo uses the functions from perm-gray-wells.h (fxt/src/comb/perm-gray-wells.h)
perm-heap-out.txt is the output of perm-heap-demo.cc.
Gray code for permutations, CAT algorithm. Algorithm following B.R.Heap (1963)
The demo uses the functions from perm-heap.h (fxt/src/comb/perm-heap.h) fact2perm.h (fxt/src/comb/fact2perm.h)
perm-heap2-out.txt is the output of perm-heap2-demo.cc.
Gray code for permutations, CAT algorithm, optimized version. Algorithm following B.R.Heap (1963)
The demo uses the functions from perm-heap2.h (fxt/src/comb/perm-heap2.h)
perm-heap2-swaps-out.txt is the output of perm-heap2-swaps-demo.cc.
Swaps for Gray code for permutations, CAT algorithm, optimized version. Algorithm following B.R.Heap (1963)
The demo uses the functions from perm-heap2-swaps.h (fxt/src/comb/perm-heap2-swaps.h)
perm-involution-out.txt is the output of perm-involution-demo.cc.
Involutions (self-inverse permutations). Cf. OEIS sequence A000085.
The demo uses the functions from perm-involution.h (fxt/src/comb/perm-involution.h)
perm-involution-naf-out.txt is the output of perm-involution-naf-demo.cc.
Self-inverse permutations (involutions) from falling factorial numbers that are non-adjacent forms (NAF). Cf. OEIS sequence A000085.
The demo uses the functions from mixedradix-naf-gray.h (fxt/src/comb/mixedradix-naf-gray.h)
perm-ives-out.txt is the output of perm-ives-demo.cc.
Permutations in the order c by Ives (inverse permutations are Ives' order a).
The demo uses the functions from perm-ives.h (fxt/src/comb/perm-ives.h)
perm-l1r2-gray-out.txt is the output of perm-l1r2-gray-demo.cc.
Gray code for permutations where i-1<=p(i)<=i+2 for all i, generated via Gray code for falling factorial numbers that are tribonacci numbers
The demo uses the functions from fact2perm.h (fxt/src/comb/fact2perm.h) perminvert.h (fxt/src/perm/perminvert.h) reverse.h (fxt/src/perm/reverse.h)
perm-lex-cycles-out.txt is the output of perm-lex-cycles-demo.cc.
Generate all permutations in lexicographic order, show cycles and inversion tables.
The demo uses the functions from perm-lex.h (fxt/src/comb/perm-lex.h) printcycles.h (fxt/src/perm/printcycles.h) fact2perm.h (fxt/src/comb/fact2perm.h)
perm-lex-out.txt is the output of perm-lex-demo.cc.
Generate all permutations in lexicographic order.
The demo uses the functions from perm-lex.h (fxt/src/comb/perm-lex.h)
perm-lex-inv-out.txt is the output of perm-lex-inv-demo.cc.
Generate all permutations in lexicographic order.
The demo uses the functions from perm-lex-inv.h (fxt/src/comb/perm-lex-inv.h)
perm-lex2-out.txt is the output of perm-lex2-demo.cc.
Generate all permutations in lexicographic order
The demo uses the functions from perm-lex2.h (fxt/src/comb/perm-lex2.h)
perm-mv0-out.txt is the output of perm-mv0-demo.cc.
Generate all inverse permutations with falling factorial numbers, CAT algorithm.
The demo uses the functions from perm-mv0.h (fxt/src/comb/perm-mv0.h)
perm-pref-out.txt is the output of perm-pref-demo.cc.
All permutations via prefix shifts ("cool-lex" order)
The demo uses the functions from perm-pref.h (fxt/src/comb/perm-pref.h)
perm-rec-out.txt is the output of perm-rec-demo.cc.
All cyclic permutations by a recursive algorithm.
The demo uses the functions from perm-rec.h (fxt/src/comb/perm-rec.h) fact2perm.h (fxt/src/comb/fact2perm.h)
perm-restrpref-out.txt is the output of perm-restrpref-demo.cc.
Permutations with restricted prefixes
The demo uses the functions from perm-restrpref.h (fxt/src/comb/perm-restrpref.h)
perm-rev-out.txt is the output of perm-rev-demo.cc.
Permutations by prefix reversals, CAT algorithm.
The demo uses the functions from perm-rev.h (fxt/src/comb/perm-rev.h)
perm-rev2-out.txt is the output of perm-rev2-demo.cc.
Permutations by prefix reversals, CAT algorithm.
The demo uses the functions from perm-rev2.h (fxt/src/comb/perm-rev2.h)
perm-right1-gray-out.txt is the output of perm-right1-gray-demo.cc.
Gray code for permutations where p(i)<=i+1 for all i, generated via Gray code for falling factorial numbers where digit x is followed by x or more zeros.
The demo uses the functions from fact2perm.h (fxt/src/comb/fact2perm.h) perminvert.h (fxt/src/perm/perminvert.h) reverse.h (fxt/src/perm/reverse.h)
perm-rot-out.txt is the output of perm-rot-demo.cc.
All permutations, by rotations (cyclic shifts).
The demo uses the functions from perm-rot.h (fxt/src/comb/perm-rot.h) perminvert.h (fxt/src/perm/perminvert.h) perminvert.cc (fxt/src/perm/perminvert.cc)
perm-rot-unrank-out.txt is the output of perm-rot-unrank-demo.cc.
Unranking for permutations by rotations (cyclic shifts).
The demo uses the functions from mixedradix-lex.h (fxt/src/comb/mixedradix-lex.h) rotate.h (fxt/src/perm/rotate.h) perminvert.h (fxt/src/perm/perminvert.h) perminvert.cc (fxt/src/perm/perminvert.cc)
perm-st-out.txt is the output of perm-st-demo.cc.
Single track ordering for permutations, CAT algorithm.
The demo uses the functions from perm-st.h (fxt/src/comb/perm-st.h) endo-enup.h (fxt/src/comb/endo-enup.h)
perm-st-gray-out.txt is the output of perm-st-gray-demo.cc.
Gray code for single track permutations: one transposition per update with odd n, one extra transposition once in (n-1)! updates with even n (optimal).
The demo uses the functions from perm-st-gray.h (fxt/src/comb/perm-st-gray.h) perm-gray-rot1.h (fxt/src/comb/perm-gray-rot1.h) mixedradix-gray.h (fxt/src/comb/mixedradix-gray.h)
perm-st-pref-out.txt is the output of perm-st-pref-demo.cc.
Single track ordering for permutations, swaps in prefix, CAT algorithm.
The demo uses the functions from perm-st-pref.h (fxt/src/comb/perm-st-pref.h) endo-enup.h (fxt/src/comb/endo-enup.h)
perm-star-out.txt is the output of perm-star-demo.cc.
Generate all permutations in star-transposition order.
The demo uses the functions from perm-star.h (fxt/src/comb/perm-star.h)
perm-star-inv-out.txt is the output of perm-star-inv-demo.cc.
Inverse star transposition permutations via permutations by prefix reversals.
The demo uses the functions from perm-rev2.h (fxt/src/comb/perm-rev2.h)
perm-star-swaps-out.txt is the output of perm-star-swaps-demo.cc.
Generate swaps for permutations in star-transposition order. Cf. OEIS sequences A123400 and A159880.
The demo uses the functions from perm-star-swaps.h (fxt/src/comb/perm-star-swaps.h)
perm-trotter-out.txt is the output of perm-trotter-demo.cc.
Generate all permutations in strong minimal-change order using Trotter's algorithm. Smallest element moves most often.
The demo uses the functions from perm-trotter.h (fxt/src/comb/perm-trotter.h)
perm-trotter-lg-out.txt is the output of perm-trotter-lg-demo.cc.
Generate all permutations in strong minimal-change order using Trotter's algorithm. Largest element moves most often.
The demo uses the functions from perm-trotter-lg.h (fxt/src/comb/perm-trotter-lg.h)
perm2fact-out.txt is the output of perm2fact-demo.cc.
Show factorial representations (Lehmer, rev, rot, and swap) of permutations.
The demo uses the functions from fact2perm.h (fxt/src/comb/fact2perm.h) fact2perm.cc (fxt/src/comb/fact2perm.cc) fact2perm-rev.cc (fxt/src/comb/fact2perm-rev.cc) fact2perm-rot.cc (fxt/src/comb/
fact2perm-rot.cc) fact2perm-swp.cc (fxt/src/comb/fact2perm-swp.cc)
rgs-fincr-out.txt is the output of rgs-fincr-demo.cc.
All restricted growth strings (RGS) s[0,...,n-1] so that s[k] <= F[j]+i where F[0]=i, F[j+1]=( a[j+1]-a[j]==i ? F[j]+i : F[j] ). Lexicographic order. Cf. OEIS sequences A000110 (i=1), A004211 (i=2),
A004212 (i=3), A004213 (i=4), A005011 (i=5), A005012 (i=6).
The demo uses the functions from rgs-fincr.h (fxt/src/comb/rgs-fincr.h)
rgs-kincr-out.txt is the output of rgs-kincr-demo.cc.
All restricted growth strings (RGS) s[0,...,n-1] so that s[k] <= s[k-1] + k. Lexicographic order. Cf. OEIS sequence A107877.
The demo uses the functions from rgs-kincr.h (fxt/src/comb/rgs-kincr.h)
rgs-maxincr-out.txt is the output of rgs-maxincr-demo.cc.
All restricted growth strings (RGS) s[0,...,n-1] so that s[k] <= max( j < k, s[j] + i ). Lexicographic order.
The demo uses the functions from rgs-maxincr.h (fxt/src/comb/rgs-maxincr.h)
rll-rec-out.txt is the output of rll-rec-demo.cc.
Run length limited (RLL) words (at most 2 identical consecutive bits), recursive CAT algorithm.
root-sums-out.txt is the output of root-sums-demo.cc.
Zero-sums of roots of unity.
The demo uses the functions from binary-necklace.h (fxt/src/comb/binary-necklace.h)
ruler-func-out.txt is the output of ruler-func-demo.cc.
Ruler function (zero-based), 2-valuations of n: 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0 4 0 1 0 2 0 1 ... Loopless algorithm (specialization of Knuth's method for mixed radix Gray code). Cf. OEIS sequence A007814.
The demo uses the functions from ruler-func.h (fxt/src/comb/ruler-func.h)
ruler-func-s-out.txt is the output of ruler-func-s-demo.cc.
Ruler function (one-based), s-valuations of s*n: s=2: 1 2 1 3 1 2 1 4 1 2 1 3 1 2 1 5 1 2 1 3 1 2 1 ... cf. OEIS sequence A001511 and A007814 (zero based) s=3: 1 1 2 1 1 2 1 1 3 1 1 2 1 1 2 1 1 3 1 1
2 1 1 ... cf. OEIS sequences A051064 and A007949 (zero based) Loopless algorithm.
The demo uses the functions from ruler-func-s.h (fxt/src/comb/ruler-func-s.h) composition-nz-sorts.h (fxt/src/comb/composition-nz-sorts.h)
ruler-func1-out.txt is the output of ruler-func1-demo.cc.
Ruler function (one-based), 2-valuations of 2*n: 1 2 1 3 1 2 1 4 1 2 1 3 1 2 1 5 1 2 1 3 1 2 1 ... Loopless algorithm. Cf. OEIS sequence A001511.
The demo uses the functions from ruler-func1.h (fxt/src/comb/ruler-func1.h) composition-nz.h (fxt/src/comb/composition-nz.h)
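For illustration, the one-based ruler function is the 2-valuation of 2*n (OEIS A001511) and can be computed directly; a short Python sketch (hypothetical, not the fxt implementation, which is loopless):

def ruler1(n):
    # one-based ruler function: the 2-valuation of 2*n
    v = 1
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

print([ruler1(n) for n in range(1, 16)])
# [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]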
schroeder-path-lex-out.txt is the output of schroeder-path-lex-demo.cc.
Schroeder paths in lexicographic order, CAT algorithm. Cf. OEIS sequence A006318: large Schroeder numbers.
The demo uses the functions from schroeder-path-lex.h (fxt/src/comb/schroeder-path-lex.h)
schroeder-rgs-lex-out.txt is the output of schroeder-rgs-lex-demo.cc.
Schroeder restricted growth strings (RGS): a Schroeder RGS is a word a[0,1,2,...,n-1] where a[k] <= k + 1 (rising factorial numbers), a[0] <= m0, and a[k] + 1 >= max(j=1..k-1, a[j]). m0 == 0 ==> little Schroeder numbers, A001003; m0 == 1 ==> large Schroeder numbers, A006318. Lexicographic order.
The demo uses the functions from schroeder-rgs-lex.h (fxt/src/comb/schroeder-rgs-lex.h) is-schroeder-rgs.h (fxt/src/comb/is-schroeder-rgs.h)
schroeder-tree-out.txt is the output of schroeder-tree-demo.cc.
Generate all Schroeder trees with m-leaf nodes, loopless algorithm.
score-sequence-out.txt is the output of score-sequence-demo.cc.
Score sequences: weakly increasing sequences a[0,1,...,n-1] where sum(j=0..k, a[j]) >= k*(k+1)/2 and sum(j=0..n-1, a[j]) = (n-1)*n/2. Lexicographic order. See OEIS sequence A000571.
The demo uses the functions from score-sequence.h (fxt/src/comb/score-sequence.h)
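The defining inequalities can be checked directly; a Python sketch (hypothetical helper, not part of fxt):

def is_score_sequence(a):
    # weakly increasing, partial sums >= k*(k+1)/2, total = n*(n-1)/2
    if any(a[k] > a[k+1] for k in range(len(a) - 1)):
        return False
    total = 0
    for k, x in enumerate(a):
        total += x
        if total < k * (k + 1) // 2:
            return False
    return total == len(a) * (len(a) - 1) // 2

print(is_score_sequence([1, 1, 1, 3]))  # True: a valid 4-player score sequence
print(is_score_sequence([0, 0, 2, 4]))  # False: 0 + 0 < 1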
setpart-ccf-rgs-lex-out.txt is the output of setpart-ccf-rgs-lex-demo.cc.
Restricted growth strings (RGS) for set partitions: each digit a[k] < k and a[k-1] != 0 implies a[k] <= a[k-1]. The RGS correspond to permutations in canonical cycle form (CCF) that are valid set
partitions. Same as: maps from {1, 2, 3, ..., n} to {0, 1, 2, 3, ..., n} such that f(x) < x and f(x-1) != 0 implies f(x) <= f(x-1).
The demo uses the functions from setpart-ccf-rgs-lex.h (fxt/src/comb/setpart-ccf-rgs-lex.h) is-setpart-ccf-perm.h (fxt/src/comb/is-setpart-ccf-perm.h)
setpart-ck-rgs-out.txt is the output of setpart-ck-rgs-demo.cc.
Restricted growth strings (RGS) for set partitions: each digit is either a fixed point or a digit from the prefix. Equivalently: rooted trees of height <= 2.
The demo uses the functions from setpart-ck-rgs.h (fxt/src/comb/setpart-ck-rgs.h)
setpart-out.txt is the output of setpart-demo.cc.
Set partitions.
The demo uses the functions from setpart.h (fxt/src/comb/setpart.h) setpart.cc (fxt/src/comb/setpart.cc)
setpart-p-rgs-lex-out.txt is the output of setpart-p-rgs-lex-demo.cc.
Set partitions of the n-set into p parts as restricted growth strings (RGS). Counted by the Stirling numbers of second kind S(n,p). Cf. OEIS sequence A008277.
The demo uses the functions from setpart-p-rgs-lex.h (fxt/src/comb/setpart-p-rgs-lex.h)
setpart-rgs-gray-out.txt is the output of setpart-rgs-gray-demo.cc.
Set partitions as restricted growth strings (RGS).
The demo uses the functions from setpart-rgs-gray.h (fxt/src/comb/setpart-rgs-gray.h)
setpart-rgs-lex-out.txt is the output of setpart-rgs-lex-demo.cc.
Set partitions as restricted growth strings (RGS).
The demo uses the functions from setpart-rgs-lex.h (fxt/src/comb/setpart-rgs-lex.h)
setpart-rgs-subset-lex-out.txt is the output of setpart-rgs-subset-lex-demo.cc.
Restricted growth strings (RGS) for set partitions in subset-lex order.
The demo uses the functions from setpart-rgs-subset-lex.h (fxt/src/comb/setpart-rgs-subset-lex.h)
setpart-s-zero-map-rgs-out.txt is the output of setpart-s-zero-map-rgs-demo.cc.
Set partitions into sets of size <= s+1 represented as restricted growth strings (RGS): each digit a[k] is either zero or the (one-based) index of a zero in the prefix and there are at most s digits
pointing to the same zero in the prefix. Same as: maps from {1, 2, 3, ..., n} to {0, 1, 2, 3, ..., n} such that f(x) < x and f(f(x)) == 0 and there are at most s values t such that f(t) = f(x).
Lexicographic order. Cf. OEIS sequences A000085 (for s=1), A001680 (s=2), A001681 (s=3), A110038 (s=4), and A000110 (for s>=n-1).
The demo uses the functions from setpart-s-zero-map-rgs.h (fxt/src/comb/setpart-s-zero-map-rgs.h) is-zero-map-rgs.h (fxt/src/comb/is-zero-map-rgs.h) print-zero-map-rgs.h (fxt/src/comb/print-zero-map-rgs.h)
setpart-zero-map-rgs-out.txt is the output of setpart-zero-map-rgs-demo.cc.
Restricted growth strings (RGS) for set partitions: each digit a[k] is either zero or a[k] < k and a[a[k]] == 0. Same as: maps from {1, 2, 3, ..., n} to {0, 1, 2, 3, ..., n} such that f(x) < x and f
(f(x)) == 0. Cf. OEIS sequence A000110.
The demo uses the functions from setpart-zero-map-rgs.h (fxt/src/comb/setpart-zero-map-rgs.h) is-zero-map-rgs.h (fxt/src/comb/is-zero-map-rgs.h) print-zero-map-rgs.h (fxt/src/comb/print-zero-map-rgs.h)
shift-subsets-out.txt is the output of shift-subsets-demo.cc.
Shifts-order for subsets of a binary word.
smooth-rfact-rgs-out.txt is the output of smooth-rfact-rgs-demo.cc.
Restricted growth strings (RGS) [d(0), d(1), d(2), ..., d(n-1)] where 0 <= d(k) <= k and abs(d(k)-d(k-1)) <= 1 (smooth factorial numbers). Cf. OEIS sequence A005773.
The demo uses the functions from smooth-rfact-rgs.h (fxt/src/comb/smooth-rfact-rgs.h)
stirling1-out.txt is the output of stirling1-demo.cc.
Stirling numbers of the first kind (Stirling cycle numbers).
stirling2-out.txt is the output of stirling2-demo.cc.
Stirling numbers of the second kind (set numbers) and Bell numbers.
stringsubst-out.txt is the output of stringsubst-demo.cc.
String substitution engine (Lindenmayer system, or L-system).
The demo uses the functions from stringsubst.h (fxt/src/comb/stringsubst.h)
stringsubst-hilbert3d-out.txt is the output of stringsubst-hilbert3d-demo.cc.
Hilbert's (3-dimensional) space filling curve generated by string substitution.
The demo uses the functions from stringsubst.h (fxt/src/comb/stringsubst.h)
subset-debruijn-out.txt is the output of subset-debruijn-demo.cc.
Generate all subsets in De Bruijn order.
The demo uses the functions from subset-debruijn.h (fxt/src/comb/subset-debruijn.h) debruijn.h (fxt/src/comb/debruijn.h) necklace.h (fxt/src/comb/necklace.h)
subset-deltalex-out.txt is the output of subset-deltalex-demo.cc.
All subsets in lexicographic order with respect to delta sets.
The demo uses the functions from subset-deltalex.h (fxt/src/comb/subset-deltalex.h) comb-print.h (fxt/src/comb/comb-print.h)
subset-gray-delta-out.txt is the output of subset-gray-delta-demo.cc.
Generate all subsets (as delta sets) in minimal-change order.
The demo uses the functions from subset-gray-delta.h (fxt/src/comb/subset-gray-delta.h)
subset-gray-out.txt is the output of subset-gray-demo.cc.
Generate all subsets in minimal-change order.
The demo uses the functions from subset-gray.h (fxt/src/comb/subset-gray.h)
subset-lex-out.txt is the output of subset-lex-demo.cc.
Nonempty subsets of the set {0,1,2,...,n-1} in (subset-)lexicographic order. Representation as list of parts. Loopless generation. Cf. OEIS sequence A108918
The demo uses the functions from subset-lex.h (fxt/src/comb/subset-lex.h)
weakly-unimodal-rgs-lex-out.txt is the output of weakly-unimodal-rgs-lex-demo.cc.
Weakly unimodal RGS (restricted growth strings), lexicographic order. Cf. OEIS sequences: A088536: unimodal maps [1..n] -> [1..n]. A225006: unimodal maps [1..n] -> [1..n+1]. A000000: unimodal maps
[1..n] -> [1..n+2] (not in the OEIS). A000000: unimodal maps [1..n] -> [1..2*n] (not in the OEIS). A000000: unimodal maps [1..2*n] -> [1..n] (not in the OEIS). A000000: unimodal maps [1..n] ->
[1..n-1] (not in the OEIS). A000000: unimodal maps [1..n] -> [1..floor(n/2)] (not in the OEIS). A000124: unimodal maps [1..n] -> [1,2]. A000000: unimodal maps [1..n] -> [1,2,3] (not in the OEIS).
A002412: unimodal maps [1,2,3] -> [1..n]. A006324: unimodal maps [1,2,3,4] -> [1..n].
The demo uses the functions from weakly-unimodal-rgs-lex.h (fxt/src/comb/weakly-unimodal-rgs-lex.h) is-unimodal.h (fxt/src/comb/is-unimodal.h)
wfl-hilbert-out.txt is the output of wfl-hilbert-demo.cc.
Fred Lunnon's (second) iterative algorithm to convert linear coordinate into coordinates of d-dimensional Hilbert curve (and back).
The demo uses the functions from wfl-hilbert.h (fxt/src/comb/wfl-hilbert.h)
young-tab-rgs-out.txt is the output of young-tab-rgs-demo.cc.
Restricted growth strings (RGS) for standard Young tableaux: the k-th occurrence of a digit d in the RGS must precede the k-th occurrence of the digit d+1. Lexicographic order. The strings are also
called ballot sequences. Cf. OEIS sequences A000085 (all tableaux), A001405 (tableaux with height <= 2, central binomial coefficients) A001006 (tableaux with height <= 3, Motzkin numbers) A005817
(height <= 4), A049401 (height <= 5), A007579 (height <= 6) A007578 (height <= 7), A007580 (height <= 8), A212915 (height <= 9), A212916 (height <= 10). A001189 (height <= n-1), A014495 (height = 2),
A217323 (height = 3), A217324 (height = 4), A217325 (height = 5), A217326 (height = 6), A217327 (height = 7), A217328 (height = 8). Cf. A182172 (tableaux of n cells and height <= k). Cf. OEIS
sequences A061343 (all shifted tableaux; using condition is_shifted(1)), A210736 (shifted, height <= 2), A082395 (shifted, height <= 3). Cf. OEIS sequences A161125 (descent numbers), A225617 (strict
inversions), and A225618 (weak inversions).
The demo uses the functions from young-tab-rgs.h (fxt/src/comb/young-tab-rgs.h) print-young-tab-rgs-aa.cc (fxt/src/comb/print-young-tab-rgs-aa.cc) is-shifted-young-tab-rgs.h (fxt/src/comb/is-shifted-young-tab-rgs.h)
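The ballot condition is equivalent to every prefix containing at least as many occurrences of d as of d+1; a Python sketch of a checker (hypothetical, not part of fxt):

from collections import Counter

def is_ballot(seq):
    c = Counter()
    for d in seq:
        c[d] += 1
        # violated when the c[d]-th occurrence of d precedes the c[d]-th of d-1
        if d > 0 and c[d] > c[d - 1]:
            return False
    return True

print(is_ballot([0, 1, 0, 1]))  # True: RGS of a 2x2 standard Young tableau
print(is_ballot([0, 1, 1, 0]))  # False: second 1 precedes second 0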
young-tab-rgs-subset-lex-out.txt is the output of young-tab-rgs-subset-lex-demo.cc.
Restricted growth strings (RGS) for standard Young tableaux: the k-th occurrence of a digit d in the RGS must precede the k-th occurrence of the digit d+1. Subset-lex order. Cf. OEIS sequences
A000085 (all tableaux), A061343 (all shifted tableaux; using condition is_shifted(1)), A210736 (shifted, height <= 2), A082395 (shifted, height <= 3).
The demo uses the functions from young-tab-rgs-subset-lex.h (fxt/src/comb/young-tab-rgs-subset-lex.h) print-young-tab-rgs-aa.cc (fxt/src/comb/print-young-tab-rgs-aa.cc)
|
{"url":"http://www.jjj.de/fxt/demo/comb/","timestamp":"2014-04-17T03:48:00Z","content_type":null,"content_length":"154814","record_id":"<urn:uuid:af313fa2-1718-4ff7-b6f4-b9f45458f2fe>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Isallobaric wind
From AMS Glossary
isallobaric wind
Mathematically, the isallobaric wind V_is is defined in terms of the local accelerations but is approximated by the allobaric (pressure-tendency) gradient as follows:
   V_is = (1/f) k × ∂V_g/∂t = −(α/f²) ∇_H(∂p/∂t),
where k is the vertical unit vector, f the Coriolis parameter, V_g the geostrophic wind, α the specific volume, ∇_H the horizontal del operator, and p the pressure. The isallobaric wind is thus directed normal to the isallobars toward falling pressure, with magnitude proportional to the allobaric gradient. Because the isallobaric wind is associated with such local-tendency effects and because of the many assumptions used in deriving its equation, observational evidence of this wind is unsatisfactory.
Haurwitz, B., 1941: Dynamic Meteorology, pp. 155–159.
|
{"url":"http://glossary.ametsoc.org/wiki/Isallobaric_wind","timestamp":"2014-04-18T23:15:37Z","content_type":null,"content_length":"16635","record_id":"<urn:uuid:d871794b-d94f-48c7-96c5-03708159811f>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00173-ip-10-147-4-33.ec2.internal.warc.gz"}
|
HARD Hyperbola intersecting ellipse question. find acute angle between tangents!!!
January 10th 2010, 04:15 PM #1
The hyperbola $\frac{x^2}{a^2}-\frac{y^2}{b^2} = 1$ has an eccentricity of e. The ellipse $\frac{x^2}{a^2+b^2}+\frac{y^2}{b^2} = 1$ has an eccentricity of 1/e.
If the two graphs intersect at P in the first quadrant, show that the acute angle $\Theta$ between the tangents to the curves at P satisfies:
$\tan\Theta = \sqrt{2}(e + \frac{1}{e})$
Use implicit differentiation to find the slope of the tangent line of each graph at point (x,y). The slope is, of course, the tangent of the angle each makes with the x-axis. And the angle between the tangent lines (which is, by definition, the angle between the curves) is the difference between those two angles. Use the trig identity $\tan(x-y)= \frac{\tan(x)-\tan(y)}{1+\tan(x)\tan(y)}$.
You will need to know, if you do not already, that the eccentricity of the ellipse, $\frac{x^2}{a^2}+ \frac{y^2}{b^2}= 1$, is given by $e= \frac{\sqrt{a^2- b^2}}{a}$ (for a > b) and that the eccentricity of the hyperbola, $\frac{x^2}{a^2}- \frac{y^2}{b^2}= 1$, is given by $e= \frac{\sqrt{a^2+ b^2}}{a}$ (also for a > b).
Last edited by HallsofIvy; January 12th 2010 at 05:21 AM.
for your eccentricities: shouldn't those be divided by a?
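For anyone who wants to verify the claimed identity symbolically, here is a SymPy sketch (not from the thread; the first-quadrant intersection point is worked out by hand from the two equations, and e below is the hyperbola's eccentricity):

import sympy as sp

a, b = sp.symbols('a b', positive=True)
x, y = sp.symbols('x y', positive=True)

hyp = x**2/a**2 - y**2/b**2 - 1            # hyperbola, = 0
ell = x**2/(a**2 + b**2) + y**2/b**2 - 1   # ellipse, = 0

# first-quadrant intersection P, from adding/subtracting the two equations
x0 = sp.sqrt(2*a**2*(a**2 + b**2)/(2*a**2 + b**2))
y0 = b**2/sp.sqrt(2*a**2 + b**2)
assert sp.simplify(hyp.subs({x: x0, y: y0})) == 0
assert sp.simplify(ell.subs({x: x0, y: y0})) == 0

# slopes of the two tangents at P, by implicit differentiation
m1 = sp.idiff(hyp, y, x).subs({x: x0, y: y0})
m2 = sp.idiff(ell, y, x).subs({x: x0, y: y0})

t = sp.simplify((m1 - m2)/(1 + m1*m2))     # signed tangent of the angle between them
e = sp.sqrt(a**2 + b**2)/a                 # eccentricity of the hyperbola
print(sp.simplify(sp.Abs(t) - sp.sqrt(2)*(e + 1/e)))   # prints 0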
|
{"url":"http://mathhelpforum.com/calculus/123192-hard-hyperbola-intersecting-ellipse-question-find-acute-angle-between-tangents.html","timestamp":"2014-04-16T13:31:09Z","content_type":null,"content_length":"46437","record_id":"<urn:uuid:57bfd622-88fa-4862-8040-a24210554a0d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Professor Alex Novikov (University of Technology, Sydney)
MIMS Distinguished Visitor May 2010 & January 2011
Professor of Mathematics, Department of Mathematical Sciences, University of Technology, Sydney.
1999-present Professor of Mathematics University of Technology, Sydney
1996-1999 Senior Lecturer Department of Statistics, University of Newcastle
1970-1996 Research Fellow Steklov Mathematical Institute, Moscow
Selected Publications:
1. J. Hinz, A. Novikov, `On fair pricing of emission-related derivatives', Bernoulli (forthcoming), available online November 2009 (http://isi.cbs.nl/bernoulli/future.htm).
2. Borovkov, K. & Novikov, A. 2008, `On exit times of Lévy-driven Ornstein-Uhlenbeck processes', Statistics and Probability Letters, vol. 78, no. 12, pp. 1517-1525.
3. Novikov, A. & Kordzakhia, N. 2008, `Martingales and first passage times of AR(1) sequences', Stochastics, vol. 80, no. 2-3, pp.197-210.
4. Schmidt, T. & Novikov, A. 2008, `A Structural Model with Unobserved Default Boundary', Applied Mathematical Finance, vol. 15, no. 2, pp. 183-203.
5. Kordzakhia, N. & Novikov, A. 2008, `Pricing of Defaultable Securities under Stochastic Interest', in Sarychev, A; Shiryaev, A; Guerra, M; Grossinho, M (eds), Mathematical Control Theory and
Finance, Springer, Berlin, pp. 251-263.
6. Novikov, A. & Shiryaev, A.N. 2007, `On solution of the optimal stopping problem for processes with independent increments', Stochastics, vol. 79, no. 3-4, pp. 393-406.
7. Liptser, R. & Novikov, A. 2006, `Tail distributions of supremum and quadratic variation of local Martingales', in Kabanov Y; Liptser R; Stoyanov J (eds), From Stochastic Calculus to Mathematical
Finance, Springer, Heidelberg, pp. 421-432.
8. Sukparungsee, S. & Novikov, A. 2006, `On EWMA procedure for detection of a change in observation via Martingale approach', KMITL Science Journal, vol. 6, no. 2a, pp. 373-380.
9. Borovkov, K. & Novikov, A. 2005, `Explicit bounds for approximation rates of boundary crossing probabilities for the Wiener process', Journal of Applied Probability, vol. 42, no. 1, pp. 82-92.
10. Novikov, A., Melchers, R., Shinjikashvili, E. & Kordzakhia, N. 2005, `First passage time of filtered Poisson process with exponential shape function', Probabilistic Engineering Mechanics, vol.
20, no. 1, pp. 57-65.
Research while at MIMS
• Prof Novikov will be studying Optimal Stopping Problems during his time at the University.
|
{"url":"http://www.mims.manchester.ac.uk/visitors/a-novikov/index.html","timestamp":"2014-04-17T13:37:51Z","content_type":null,"content_length":"8819","record_id":"<urn:uuid:f3f5ef56-3b8f-402e-af96-eb09d9656133>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
|
PLaneT Package Repository : Home > cce > dracula.plt > package version 8.3
#lang scheme
(require "../acl2/parse.ss")
(provide (all-defined-out))
;; Helper Functions
;; split-all : String -> [Listof String]
;; Produces every character in a string.
(define (split-all str)
(regexp-split "" str))
;; Sample Output
(define prompt "ACL2 !>")
(define prompt-block (make-prompt-block prompt))
(define prompt-no-! "ACL2 >")
(define prompt-no-!-block (make-prompt-block prompt-no-!))
(define tree-prefix "#<\\<0")
(define tree-suffix "#>\\>")
;; One possible ACL2 preamble.
(define preamble
  (string-append
"Welcome to Clozure Common Lisp Version 1.2-r9226-RC1 (DarwinX8664)!\n\n"
" ACL2 Version 3.3 built July 5, 2008 17:20:55.\n Copyright (C) 2007"
" University of Texas at Austin\n ACL2 comes with ABSOLUTELY NO WARRANTY."
" This is free software and you\n are welcome to redistribute it under"
" certain conditions. For details,\n see the GNU General Public License."
"\n\n Initialized with (INITIALIZE-ACL2 'INCLUDE-BOOK *ACL2-PASS-2-FILES*)."
"\n See the documentation topic note-3-3 for recent changes.\n"
" Note: We have modified the prompt in some underlying Lisps to further\n"
" distinguish it from the ACL2 prompt.\n\n NOTE!! Proof trees are disabled"
" in ACL2. To enable them in emacs,\n look under the ACL2 source directory"
" in interface/emacs/README.doc; \n and, to turn on proof trees, execute :"
"START-PROOF-TREE in the ACL2 \n command loop. Look in the ACL2"
" documentation under PROOF-TREE.\n\nACL2 Version 3.3. Level 1."
" Cbd \"/Users/sky/current/acl2/wrapper/\".\nDistributed books directory"
" \"/Users/sky/acl2-sources/books/\".\nType :help for help.\n"
"Type (good-bye) to quit completely out of ACL2.\n\n"))
(define preamble+prompt (string-append preamble prompt))
(define preamble/parsed
(make-parse-state prompt-block (list (make-text-block preamble))))
;; The results of "(defthm x=x (equal x x) :rule-classes nil)\n",
;; without proof trees.
(define x=x
  (string-append
"\nBut we reduce the conjecture to T, by primitive type reasoning.\n\n"
"Q.E.D.\n\nSummary\nForm: ( DEFTHM X=X ...)\nRules: "
"((:FAKE-RUNE-FOR-TYPE-SET NIL))\nWarnings: None\nTime: 0.00 seconds"
" (prove: 0.00, print: 0.00, other: 0.00)\n X=X\n"))
(define x=x+prompt (string-append x=x prompt))
(define x=x/parsed
(make-parse-state prompt-block (list (make-text-block x=x))))
;; The results of ":u\n", after the x=x theorem above.
(define undo " 0:x(EXIT-BOOT-STRAP-MODE)\n")
(define undo+prompt (string-append undo prompt))
(define undo/parsed
(make-parse-state prompt-block (list (make-text-block undo))))
;; The results of ":start-proof-tree\n".
(define start-proof-tree
  (string-append
   "\nProof tree output is now enabled. Note that :START-PROOF-TREE works\n"
   "by removing 'proof-tree from the inhibit-output-lst; see :DOC "))
(define start-proof-tree+prompt (string-append start-proof-tree prompt))
(define start-proof-tree/parsed
  (make-parse-state
   prompt-block (list (make-text-block start-proof-tree))))
;; The results of "(defthm x=x (equal x x) :rule-classes nil)\n",
;; with proof trees.
(define x=x/before
"\n<< Starting proof tree logging >>\n")
(define x=x/tree
"( DEFTHM X=X ...)\nQ.E.D.")
(define x=x/after
  (string-append
"\nBut we reduce the conjecture to T, by primitive type reasoning.\n\n"
"Q.E.D.\n\nSummary\nForm: ( DEFTHM X=X ...)\nRules: "
"((:FAKE-RUNE-FOR-TYPE-SET NIL))\nWarnings: None\nTime: 0.00 seconds"
" (prove: 0.00, print: 0.00, proof tree: 0.00, other: 0.00)\n X=X\n"))
(define x=x+trees
(string-append x=x/before tree-prefix x=x/tree tree-suffix x=x/after))
(define x=x+trees+prompt (string-append x=x+trees prompt))
(define x=x+trees/parsed
  (make-parse-state
   prompt-block
(list (make-text-block x=x/after)
(make-tree-block x=x/tree)
(make-text-block x=x/before))))
;; The results of "(set-guard-checking nil)".
(define guards-off
  (string-append
"\nMasking guard violations but still checking guards except for"
" self-\nrecursive calls. To avoid guard checking entirely, "
":SET-GUARD-CHECKING\n:NONE. See :DOC set-guard-checking.\n\n"))
(define guards-off+prompt-no-! (string-append guards-off prompt-no-!))
(define guards-off/parsed
(make-parse-state prompt-no-!-block (list (make-text-block guards-off))))
;; The results of "(defthm bad (< x 0))\n"
(define incorrect/before "\n<< Starting proof tree logging >>")
(define incorrect/tree1
"( DEFTHM BAD ...)\nc 0 Goal PUSH *1\n")
(define incorrect/middle
  (string-append
"\nName the formula above *1.\n\nNo induction schemes are suggested by *1."
" Consequently, the proof\nattempt has failed.\n\nSummary\nForm:"
" ( DEFTHM BAD ...)\nRules: NIL\nWarnings: None\nTime: 0.00 seconds"
" (prove: 0.00, print: 0.00, proof tree: 0.00, other: 0.00)\n\n---\n"
"The key checkpoint goal, below, may help you to debug this failure.\n"
"See :DOC failure and see :DOC set-checkpoint-summary-limit.\n---\n\n*** "
"Key checkpoint at the top level: ***\n\nGoal\n(< X 0)\n\n"
"******** FAILED ******** See :DOC failure ******** FAILED ********\n"))
(define incorrect/tree2
  (string-append
"( DEFTHM BAD ...)\n******** FAILED ******** See :DOC failure "
"******** FAILED ********\nc 0 Goal PUSH *1\n"))
(define incorrect+trees
  (string-append incorrect/before
                 tree-prefix incorrect/tree1 tree-suffix
                 incorrect/middle
                 tree-prefix incorrect/tree2 tree-suffix))
(define incorrect+trees+prompt (string-append incorrect+trees prompt))
(define incorrect+trees/parsed
  (make-parse-state
   prompt-block
   (list (make-text-block "")
(make-tree-block incorrect/tree2)
(make-text-block incorrect/middle)
(make-tree-block incorrect/tree1)
(make-text-block incorrect/before))))
(define incorrect/incomplete/parsed
  (make-parse-state
   (make-partial-block "" (make-text-block ""))
(list (make-tree-block incorrect/tree2)
(make-text-block incorrect/middle)
(make-tree-block incorrect/tree1)
(make-text-block incorrect/before))))
;; result of sending "(+ 1 x)\n"
(define unbound-variable
  (string-append
"\n\nACL2 Error in TOP-LEVEL: Global variables, such as X, are not"
" allowed.\nSee :DOC ASSIGN and :DOC @.\n\n"))
(define unbound-variable+prompt (string-append unbound-variable prompt))
(define unbound-variable/parsed
  (make-parse-state
   prompt-block (list (make-text-block unbound-variable))))
;; Result of sending the following three theorems:
;; (defthm app/associative
;;   (equal (append (append x y) z)
;;          (append x (append y z))))
;; (defthm app/associative2
;;   (equal (append x (append y z))
;;          (append (append x y) z)))
;; (defthm app/associative-len
;;   (= (len (append (append x y) z))
;;      (len (append x (append y z)))))
(define app/associative
  (string-append
"\nName the formula above *1.\n"
"\nPerhaps we can prove *1 by induction. Three induction schemes are\n"
"suggested by this conjecture. Subsumption reduces that number to two.\n"
"However, one of these is flawed and so we are left with one viable\n"
"candidate. \n\nWe will induct according to a scheme suggested by"
" (BINARY-APPEND X Y).\nThis suggestion was produced using the :induction"
" rule BINARY-APPEND.\nIf we let (:P X Y Z) denote *1 above then the"
" induction scheme we'll\nuse is\n(AND (IMPLIES (AND (NOT (ENDP X)) "
"(:P (CDR X) Y Z))\n (:P X Y Z))\n (IMPLIES (ENDP X) "
"(:P X Y Z))).\nThis induction is justified by the same argument used to"
" admit BINARY-\nAPPEND. When applied to the goal at hand the above"
" induction scheme\nproduces two nontautological subgoals.\n\n"
"Subgoal *1/2\n(IMPLIES (AND (NOT (ENDP X))\n "
"(EQUAL (APPEND (APPEND (CDR X) Y) Z)\n "
"(APPEND (CDR X) Y Z)))\n (EQUAL (APPEND (APPEND X Y) Z)\n"
" (APPEND X Y Z))).\n\nBy the simple :definition ENDP we"
" reduce the conjecture to\n\nSubgoal *1/2'\n(IMPLIES (AND (CONSP X)\n"
" (EQUAL (APPEND (APPEND (CDR X) Y) Z)\n "
"(APPEND (CDR X) Y Z)))\n (EQUAL (APPEND (APPEND X Y) Z)\n"
" (APPEND X Y Z))).\n\nBut simplification reduces this to T,"
" using the :definition BINARY-\nAPPEND, primitive type reasoning and the"
" :rewrite rules CAR-CONS and\nCDR-CONS.\n\nSubgoal *1/1\n"
"(IMPLIES (ENDP X)\n (EQUAL (APPEND (APPEND X Y) Z)\n"
" (APPEND X Y Z))).\n\nBy the simple :definition ENDP we"
" reduce the conjecture to\n\nSubgoal *1/1'\n(IMPLIES (NOT (CONSP X))\n"
" (EQUAL (APPEND (APPEND X Y) Z)\n (APPEND X Y Z)))."
"\n\nBut simplification reduces this to T, using the :definition BINARY-\n"
"APPEND and primitive type reasoning.\n\nThat completes the proof of *1.\n\n"
"Q.E.D.\n\nSummary\nForm: ( DEFTHM APP/ASSOCIATIVE ...)\nRules:"
" ((:DEFINITION BINARY-APPEND)\n (:DEFINITION ENDP)\n"
" (:DEFINITION NOT)\n (:FAKE-RUNE-FOR-TYPE-SET NIL)\n"
" (:INDUCTION BINARY-APPEND)\n (:REWRITE CAR-CONS)\n"
" (:REWRITE CDR-CONS))\nWarnings: None\nTime: 0.01 seconds"
" (prove: 0.00, print: 0.01, other: 0.00)\n APP/ASSOCIATIVE\n"))
(define app/associative+prompt (string-append app/associative prompt))
(define app/associative2
  (string-append
"\nBut we reduce the conjecture to T, by the simple :rewrite rule"
" APP/ASSOCIATIVE\\\n.\n\nQ.E.D.\n\nSummary\nForm: "
"( DEFTHM APP/ASSOCIATIVE2 ...)\nRules: ((:REWRITE APP/ASSOCIATIVE))\n"
"Warnings: None\nTime: 0.00 seconds "
"(prove: 0.00, print: 0.00, other: 0.00)\n"
" APP/ASSOCIATIVE2\n"))
(define app/associative2+prompt (string-append app/associative2 prompt))
(define app/associative-len
  (string-append
"HARD ACL2 ERROR in PREPROCESS: The call depth limit of 1000 has been\n"
"exceeded in the ACL2 preprocessor (a sort of rewriter)."
" There is probably\na loop caused by some set of enabled simple rules."
" To see why the\nlimit was exceeded, execute\n :brr t\nand next retry"
" the proof with :hints\n :do-not '(preprocess)\nand then follow the"
" directions in the resulting error message. See\n:DOC rewrite-stack-limit."
"\n\n\n\nACL2 Error in TOP-LEVEL: Evaluation aborted. See :DOC wet"
" for how\nyou might be able to get an error backtrace.\n\n"))
(define app/associative-len+prompt (string-append app/associative-len prompt))
(define app/associative-len/parsed
  (make-parse-state
   prompt-block
   (list (make-text-block app/associative-len))))
;; The result of sending app/associative from above, with proof trees:
(define app/associative/before
  (string-append
"\n<< Starting proof tree logging >>\n"))
(define app/associative/tree1
  (string-append
"( DEFTHM APP/ASSOCIATIVE ...)\nc 0 Goal PUSH *1\n\n"))
(define app/associative/body1
  (string-append
"Name the formula above *1.\n\nPerhaps we can prove *1 by induction."
" Three induction schemes are\nsuggested by this conjecture. Subsumption"
" reduces that number to two.\nHowever, one of these is flawed and so we"
" are left with one viable\ncandidate. \n\nWe will induct according to a"
" scheme suggested by (BINARY-APPEND X Y).\nThis suggestion was produced"
" using the :induction rule BINARY-APPEND.\nIf we let (:P X Y Z) denote *1"
" above then the induction scheme we'll\nuse is\n"
"(AND (IMPLIES (AND (NOT (ENDP X)) (:P (CDR X) Y Z))\n"
" (:P X Y Z))\n (IMPLIES (ENDP X) (:P X Y Z))).\n"
"This induction is justified by the same argument used to admit BINARY-\n"
"APPEND. When applied to the goal at hand the above induction scheme\n"
"produces two nontautological subgoals.\n\nSubgoal *1/2\n"
"(IMPLIES (AND (NOT (ENDP X))\n "
"(EQUAL (APPEND (APPEND (CDR X) Y) Z)\n "
"(APPEND (CDR X) Y Z)))\n (EQUAL (APPEND (APPEND X Y) Z)\n"
" (APPEND X Y Z))).\n"))
(define app/associative/tree2
  (string-append
"( DEFTHM APP/ASSOCIATIVE ...)\nc 0 Goal PUSH *1\n"
"++++++++++++++++++++++++++++++\nc 2 *1 INDUCT\n"
" 1 | Subgoal *1/2 preprocess\n | | <1 subgoal>\n"
" | <1 more subgoal>\n\n"))
(define app/associative/body2
  (string-append
"By the simple :definition ENDP we reduce the conjecture to\n\n"
"Subgoal *1/2'\n(IMPLIES (AND (CONSP X)\n "
"(EQUAL (APPEND (APPEND (CDR X) Y) Z)\n "
"(APPEND (CDR X) Y Z)))\n (EQUAL (APPEND (APPEND X Y) Z)\n"
" (APPEND X Y Z))).\n"))
(define app/associative/tree3
  (string-append
"( DEFTHM APP/ASSOCIATIVE ...)\nc 0 Goal PUSH *1\n"
"++++++++++++++++++++++++++++++\nc 2 *1 INDUCT\n"
" | <1 more subgoal>\n\n"))
(define app/associative/body3
  (string-append
"But simplification reduces this to T, using the :definition BINARY-\n"
"APPEND, primitive type reasoning and the :rewrite rules CAR-CONS and\n"
"CDR-CONS.\n\nSubgoal *1/1\n(IMPLIES (ENDP X)\n "
"(EQUAL (APPEND (APPEND X Y) Z)\n (APPEND X Y Z))).\n"))
(define app/associative/tree4
  (string-append
"( DEFTHM APP/ASSOCIATIVE ...)\nc 0 Goal PUSH *1\n"
"++++++++++++++++++++++++++++++\nc 2 *1 INDUCT\n"
" 1 | Subgoal *1/1 preprocess\n | | <1 subgoal>\n\n"))
(define app/associative/body4
  (string-append
"By the simple :definition ENDP we reduce the conjecture to\n\n"
"Subgoal *1/1'\n(IMPLIES (NOT (CONSP X))\n "
"(EQUAL (APPEND (APPEND X Y) Z)\n (APPEND X Y Z))).\n"))
(define app/associative/tree5
  (string-append
"( DEFTHM APP/ASSOCIATIVE ...)\nQ.E.D.\n"))
(define app/associative/conclusion
  (string-append
"But simplification reduces this to T, using the :definition BINARY-\n"
"APPEND and primitive type reasoning.\n\nThat completes the proof of *1.\n\n"
"Q.E.D.\n\nSummary\nForm: ( DEFTHM APP/ASSOCIATIVE ...)\n"
"Rules: ((:DEFINITION BINARY-APPEND)\n (:DEFINITION ENDP)\n"
" (:DEFINITION NOT)\n (:FAKE-RUNE-FOR-TYPE-SET NIL)\n"
" (:INDUCTION BINARY-APPEND)\n (:REWRITE CAR-CONS)\n"
" (:REWRITE CDR-CONS))\nWarnings: None\nTime: 0.01 seconds"
" (prove: 0.00, print: 0.00, proof tree: 0.00, other: 0.00)\n"
" APP/ASSOCIATIVE\n"))
(define app/associative+trees
(string-append app/associative/before
tree-prefix app/associative/tree1 tree-suffix
tree-prefix app/associative/tree2 tree-suffix
tree-prefix app/associative/tree3 tree-suffix
tree-prefix app/associative/tree4 tree-suffix
tree-prefix app/associative/tree5 tree-suffix
                 app/associative/conclusion))
(define app/associative+trees+prompt
(string-append app/associative+trees prompt))
(define app/associative+trees/parsed
  (make-parse-state
   prompt-block
(list (make-text-block app/associative/conclusion)
(make-tree-block app/associative/tree5)
(make-text-block app/associative/body4)
(make-tree-block app/associative/tree4)
(make-text-block app/associative/body3)
(make-tree-block app/associative/tree3)
(make-text-block app/associative/body2)
(make-tree-block app/associative/tree2)
(make-text-block app/associative/body1)
(make-tree-block app/associative/tree1)
(make-text-block app/associative/before))))
;; Proof Trees
(define one-subgoal-tree
"( DEFTHM APP/ASSOCIATIVE ...)\nc 0 Goal PUSH *1\n\n")
(define one-subgoal-goals
(list (list "Goal" 30 47)))
(define three-subgoal-tree
  (string-append
"( DEFTHM APP/ASSOCIATIVE ...)\n"
"c 0 Goal PUSH *1\n"
"c 2 *1 INDUCT\n"
" 1 | Subgoal *1/2 preprocess\n"
" | | <1 subgoal>\n"
" | <1 more subgoal>\n\n"))
(define three-subgoal-goals
(list (list "Goal" 30 47)
(list "*1" 79 93)
(list "Subgoal *1/2" 94 125)))
(define qed-tree
"( DEFTHM APP/ASSOCIATIVE ...)\nQ.E.D.\n")
(define failed-tree1
  (string-append
"( DEFTHM BAD ...)\n"
"******** FAILED ******** See :DOC failure ******** FAILED ********\n"
"c 0 Goal PUSH *1\n"))
(define failed1-goals
(list (list "Goal" 87 104)))
(define failed-tree2
  (string-append
"( DEFTHM STEP-PLAYERS-PRESERVES-NON-DIAGONAL ...)\n"
"******** FAILED ******** See :DOC failure ******** FAILED ********\n"
" 1 Goal preprocess\n"
"c 2 | Goal' ELIM\n"
" 1 | | Subgoal 1 preprocess\n"
"c 0 | | | Subgoal 1' PUSH (reverting)\n"))
(define failed2-goals
(list (list "Goal" 119 139)
(list "Goal'" 140 158)
(list "Subgoal 1" 159 190)
(list "Subgoal 1'" 191 232)))
|
{"url":"http://planet.racket-lang.org/package-source/cce/dracula.plt/8/3/test/data-parse.ss","timestamp":"2014-04-17T21:34:02Z","content_type":null,"content_length":"38763","record_id":"<urn:uuid:ef7730b1-0b2a-46cd-8f67-0c25601a5ac5>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
|
HELP pleasse please please!!!: Where's the third line??
I'm supposed to write the piecewise function represented by the graph; I already know how to do that. I just can't find the third line, though. I know it must start in the center since there's a point there, but I can't tell which direction it's in.
on the Y-axis
how'd u know
from the picture
...any other way?
so you guessed?
|
{"url":"http://openstudy.com/updates/5085f691e4b0b7c30c8e487e","timestamp":"2014-04-17T21:42:09Z","content_type":null,"content_length":"45272","record_id":"<urn:uuid:58b62f05-200a-461f-b81b-39136c1c033c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cheyney Statistics Tutor
Find a Cheyney Statistics Tutor
I have taught middle school and high school mathematics in northern Virginia for 8 years. I have tutored privately most of that time as well. I know that everyone learns in a different way and I
try to use real world objects, models and examples to help students understand abstract concepts with which they may be struggling.
28 Subjects: including statistics, calculus, ASVAB, algebra 1
...While I have mostly taught all levels of calculus and statistics, I can also teach college algebra and pre-calculus as well as contemporary math. My background is in engineering and business,
so I use an applied math approach to teaching. I find knowing why the math is important goes a long way towards helping students retain information.
13 Subjects: including statistics, calculus, geometry, algebra 1
...I have used these principles to solve business problems and would be happy to help anyone seeking to use these decision-making tools to improve their business acumen. I am an MBA from a top-10 university with diverse experience of marketing strategies in different industries, esp. healthcare.
18 Subjects: including statistics, physics, calculus, finance
...All were pretty good, but my instruction incorporates the best techniques from each place. There are a lot of SAT coaches who CLAIM to be experts. I PROVE my expertise by showing you my perfect
800 score on the SAT Subject Test, mathematics level 2.
23 Subjects: including statistics, English, calculus, algebra 1
...I have teaching experience at the High School level. I have taught Algebra I, Algebra II, Geometry, and Precalculus at 2 very diverse public high schools in Chester County. Throughout this
time, I grew as an educator, and I learned new and exciting ways to teach diverse audiences.
16 Subjects: including statistics, French, calculus, algebra 2
|
{"url":"http://www.purplemath.com/Cheyney_Statistics_tutors.php","timestamp":"2014-04-21T05:00:28Z","content_type":null,"content_length":"23961","record_id":"<urn:uuid:62fa3ce7-56f2-44e9-a40e-b48ecf1427e6>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elmhurst, IL Geometry Tutor
Find an Elmhurst, IL Geometry Tutor
...I was valedictorian in elementary school, achieved AP scores of 5/5 in French Language, French Literature, AB Calculus, and Biology in high school, and graduated from college as J.N. Honors
Scholar Cum Laude with a BS in Elementary Education. I am certified to teach grades K-8 in Michigan and grades 1-6 in NY.
19 Subjects: including geometry, English, writing, reading
...This type of work helped me to communicate better and to make people understand at all levels. My goal in teaching is to provide the nurturing environment that allows children to learn and grow intellectually. Regards, Kiran A. I did a lot of Matlab programming during my Master's program, primarily using digital signal processing tools to analyze the vibration profile of sound
16 Subjects: including geometry, chemistry, physics, calculus
...Why such a small price? I will be starting dental school next year, and am looking for some extra spending cash before I leave home. I also enjoy helping people out whenever I
11 Subjects: including geometry, biology, precalculus, algebra 2
I have ten years of experience in tutoring Math in Harper College and lecturing in Northwestern University. My former students were of different ages, nationalities, and levels of mathematical
skills. All of them significantly improved their mathematical knowledge and grades.
8 Subjects: including geometry, calculus, algebra 1, algebra 2
...I have also written a chapter on probability in several books. I can definitely help you learn this subject. The ACT seeks to test thinking skills, and therefore it's impossible to teach to
the test.
25 Subjects: including geometry, writing, GRE, calculus
{"url":"http://www.purplemath.com/elmhurst_il_geometry_tutors.php","timestamp":"2014-04-18T19:04:06Z","content_type":null,"content_length":"24001","record_id":"<urn:uuid:56ef1999-8dcb-4110-949d-c19ca7cfd5d8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An (almost) terminological question: could one shorten the phrase 'the spectrum of the residue field of a point'?
For a scheme S I want to consider the spectra of the residue fields of points of S. Is there any way to make this phrase shorter? Is there a term for the morphism that connects such a spectrum with the scheme?
schemes ag.algebraic-geometry
9 How about using simply "the point"; in the preliminaries of the paper you could specify that you consider points as schemes by identifying them with spectra of their residue fields. The morphism
could be called "the tautological morphism". – Angelo Dec 16 '10 at 10:32
I think it's short enough if you don't have to repeat it all the time. – Martin Brandenburg Dec 16 '10 at 10:37
2 The residual spectrum ? – Chandan Singh Dalawat Dec 16 '10 at 11:18
Dear Angelo, I thought about "the point"; yet is it natural (and usual) to identify the point with the spectrum of the residue field and not with the corresponding closed subscheme? – Mikhail
Bondarko Dec 16 '10 at 16:28
2 Dear Mikhail, I don't know if it usual, but I do it all the time, and people seem to get it. I certainly would not identify a point with its closure, which is not a very pointlike object. – Angelo
Dec 16 '10 at 16:32
|
{"url":"http://mathoverflow.net/questions/49626/an-almost-terminological-question-could-one-shorten-the-phrase-the-spectrum","timestamp":"2014-04-16T16:09:05Z","content_type":null,"content_length":"51885","record_id":"<urn:uuid:5f72e38e-e958-4e00-9da5-801b0d6725f2>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
7th Grade Interactive Math Skill Builders
Links verified on 3/25/2013
1. Adding with Decimals - Addition of numbers with Two Decimals and Three Decimal digits.
2. Comparing Fractions and Decimals - (Grades 5-7) Write the fraction as a decimal or vice versa to answer the question correctly.
3. Conversions: Practice converting Decimals to Percents, Fractions to Decimals and Percents to Decimals.
4. Decimal Addition - Push the green blocks into the holes to make the target number.
5. Decimals of the Caribbean - Read the decimal at the top of the screen and shoot the boat that matches the decimal message (that has the numerical version of the message) with your decimal
cannonball. Try to destroy all of the ships.
6. Dividing Decimals - Practice with 2 Decimals and 3 Decimals.
7. Fractions and Decimals - 14 question self-checking chapter test.
8. Fractions, Decimals or Percent - For each of the 9 problems fill in the missing value. The answer will be a fraction, a decimal or a percent, depending on the problem.
9. Multiplying with Decimals - Multiplying numbers with two decimals.
10. Multiplying and Dividing Decimals - 12 question self-checking chapter test.
11. Spy Guys Interactive - Decimals - (Grade 6) Watch the video and respond at various places Lesson 1.
12. Spy Guys Interactive - Solving Problems with Decimals - (Grade 6) Watch the video and respond at various places Lesson 5.
13. Subtracting with Decimals - Practice with one decimal digit or three decimal digits.
14. Tests on Decimals: Self-Checking Chapter Tests
15. Testing Your Decimal Knowledge with these 5-question self-checking quizzes:
|
{"url":"http://www.internet4classrooms.com/skill_builders/decimals_math_seventh_7th_grade.htm","timestamp":"2014-04-16T18:57:46Z","content_type":null,"content_length":"26487","record_id":"<urn:uuid:de657699-9c8d-4622-ace4-518047f34d93>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Formal Predicate Logic
October 4th 2011, 08:35 PM
Formal Predicate Logic
I need help with part (b) of the following question:
To solve this I'm trying to follow a similar worked example here. And the definition of the formal system $K_L$ is here.
So, we know that $((p \to q) \to (\neg q \to \neg p))$ is a tautology since:
$(p \to q) \iff (\neg p \vee q)$
$\iff (q \vee \neg p)$
$\iff (\neg\neg q \vee \neg p)$
$\iff (\neg q \to \neg p)$
Now, I guess the first step is to show that $\{ \forall x (A \to B), \neg(\forall x \neg B) \} \vdash_{K_L} \neg \forall x \neg A$.
1. $\forall x (A \to B)$ .....(Hyp)
2. $(\forall x (A \to B)) \to (\forall x (\neg B \to \neg A))$ .....(instance of tautology $((p \to q) \to (\neg q \to \neg p))$)
3. $(\forall x (\neg B \to \neg A))$ .....(1,2, Modus Ponens)
4. ...
I'm stuck at this point... How should I continue? Any help is appreciated.
October 6th 2011, 07:59 AM
Re: Formal Predicate Logic
A quick question: are you allowed to use the deduction theorem?
October 6th 2011, 01:37 PM
Re: Formal Predicate Logic
I should have looked at the example. If you can use the deduction theorem and generalization, it's pretty easy. Let me post a derivation in natural deduction. It is straightforward to convert it
to a derivation in $K_L$.
October 6th 2011, 04:36 PM
Re: Formal Predicate Logic
Hello, demode!
$\text{Verify that }\,(p \to q) \to (\sim\!q\to\:\sim\!p)\,\text{ is a tautology.}$
$\begin{array}{cccccccccc}1. & (p \to q) \to (\sim\!q \to\: \sim\!p) && 1. & \text{Given} \\ 2. & (\sim\!p \vee q) \to (q\: \vee \sim\! p) && 2. & \text{Def. impl'n} \\ 3. & \sim(\sim\!p \vee q)
\vee (q\: \vee \sim\!p) && 3. & \text{Def. Impl'n} \\ 4. & (p \:\wedge \sim\!q) \vee (q\: \vee \sim\!p) && 4. & \text{DeMorgan} \\ 5. & [(p\: \wedge \sim\!q) \vee q]\: \vee \sim\!p && 5. & \text
{Assoc.} \\ 6. & [(p \vee q) \wedge (\sim\!q \vee q)]\: \vee \sim\!p && 6. & \text{Distr.} \\ 7. & [(p\vee q) \wedge T]\: \vee \sim\!p && 7. & s\: \vee\! \sim\!s \:=\:T \\ 8. & (p \vee q)\: \vee
\sim\!p && 8. & s \wedge T \:=\:s \\ 9. & (p\: \vee \sim\!p) \vee q && 9. & \text{Comm, Assoc.} \\ 10. & T \vee q && 10. & s\: \vee\!\sim\!s \:=\:T \\ 11. & T && 11. & T \vee s \:=\:T \end{array}
October 8th 2011, 12:43 PM
Re: Formal Predicate Logic
I think we are required to follow the given example very closely. But yes, we can use the Deduction Theorem. The deduction theorem for $K_L$ goes like this: if $\Sigma \cup \{A\} \vdash_{K_L} B$ and the derivation of $B$ from $\Sigma \cup \{A\}$ does not use an application of generalization to a free variable of $A$, then $\Sigma \vdash_{K_L} (A \to B)$.
October 8th 2011, 12:54 PM
Re: Formal Predicate Logic
Well, as my derivation shows, $\forall x\,(A\to B),\forall x\,\neg B\vdash\neg A(x)$. Then you use generalization on x. Note that x does not occur free in the
deduction theorem.
October 8th 2011, 09:56 PM
Re: Formal Predicate Logic
I really appreciate your response. But I'm having some trouble following your derivation in post #3, could you please show it in the simple format (like the one in my post)?
October 9th 2011, 06:58 AM
Re: Formal Predicate Logic
1. $\forall x\,(A\to B)$ Hyp
2. $(\forall x\,(A\to B))\to(A\to B)$ K4
3. $A\to B$ 1, 2, MP
4. $(A\to B)\to(\neg B\to\neg A)$ instance of tautology
5. $\neg B\to\neg A$ 3, 4, MP
6. $\forall x\,\neg B$ Hyp
7. $(\forall x\,\neg B)\to\neg B$ K4
8. $\neg B$ 6, 7, MP
9. $\neg A$ 5, 8, MP
10. $\forall x\,\neg A$ 9, generalization applied to x
11. $(\forall x\,\neg B)\to\forall x\,\neg A$ 6, 10, deduction theorem
12. $(\forall x\,(A\to B))\to(\forall x\,\neg B)\to\forall x\,\neg A$ 1, 11, deduction theorem
In step 10, similar to part (c) of the example, generalization is a rule that must have been derived using K5. It applies to x, which is not free in either of the assumptions, so one can use the
deduction theorem in steps 11 and 12.
October 13th 2011, 03:51 AM
Re: Formal Predicate Logic
I'm also stuck on a very similar question where I'm required to deduce that $\{\forall x \neg (A \to B) \} \vdash_{K_L} \neg (\forall x A \to B)$. And I have already shown that $\{\forall x \neg (A \to B) \} \vdash_{K_L} \neg B$ and $\{\forall x \neg (A \to B) \} \vdash_{K_L} \forall x A$. [And I want to use an instance of the tautology $(p \to (\neg q \to \neg (p \to q)))$]. So we have
1. $\neg(A \to B)$ Hyp
2. $\neg B$ Hyp
3. $\forall x A$ Hyp
4. $A$ K4
5. $A \to (\neg B \to \neg (A \to B))$ Taut
Any ideas how to proceed? I even tried K5, but to no avail... I don't see a way to get the $\forall x$ part inside the bracket next to A in $\neg(A \to B)$...
October 13th 2011, 05:02 AM
Re: Formal Predicate Logic
Since $(\forall x\,A)\to\neg B\to\neg((\forall x\,A)\to B)$ is an instance of the tautology and you have shown $\{\forall x \neg (A \to B) \} \vdash_{K_L} \forall x A$ and $\{\forall x \neg (A \to B) \} \vdash_{K_L} \neg B$, then what's the problem? Just use MP twice. Or do you need to derive $\forall x\,(A\to B)$?
October 13th 2011, 10:36 AM
Re: Formal Predicate Logic
Edit: I get it. Thank You!
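(For completeness, one way the derivation can be finished, assembled from the hint above rather than copied from the thread:)
1. $\forall x\,A$  shown earlier
2. $\neg B$  shown earlier
3. $(\forall x\,A)\to(\neg B\to\neg((\forall x\,A)\to B))$  instance of the tautology with $p := \forall x\,A$, $q := B$
4. $\neg B\to\neg((\forall x\,A)\to B)$  1, 3, MP
5. $\neg((\forall x\,A)\to B)$  2, 4, MP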
|
RE: st: Detection of disease
From "Seed, Paul" <paul.seed@kcl.ac.uk>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject RE: st: Detection of disease
Date Fri, 15 Aug 2008 14:57:55 +0100
Carlo Georges poses an interesting problem.
To deal with a few epidemiological issues first -
"...to be 95% certain that the population is free from disease,"
he must assume
- that the disease level is either 20% [Null hypothesis: H0] or 0% [Alternative hypothesis Ha], (a lower rate might well be missed.)
- that the sample is representative of the population (a local outbreak outside the sampling area would certainly be missed)
- that the test used is 100% sensitive
A more realistic goal might be "...to be 95% certain that the test-positive rate is less than 20% in the population represented by the sample."
He is interested in a one-sided test at the 95% level, as probabilities < 0 have no meaning; so the standard Stata command is
sampsi .2 0 , onesample onesided
This gives n=11 (not 16), which is still different from the n=14 from the freeware package "Winepiscope" that Carlo uses. The reason is that -sampsi- uses Normal approximations for percentages, which tend to give smaller values than exact tests. To replicate Carlo's result, another approach is needed. This is made much easier by the fact that the disease level is 0% under Ha, so no events are expected.
We can perform both tests in Stata; using -bitesti- for the exact test & -prtesti- for the Normal approximation (or Chi-sq test).
foreach n of numlist 10/15 {
    bitesti `n' 0 0.2
    prtesti `n' 0 0.2
}
Concentrating on the one-sided p-values (Ha: p < 0.2), it is clear that 14 subjects is the smallest number to give a significant test by the exact test; and 11 by the Normal approximation. The first figure confirms the Winepiscope result.
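(A quick cross-check in Python rather than Stata, not part of the original message: with zero events, the exact one-sided test reduces to finding the smallest n with (1 - p)^n <= 0.05.)

p, alpha = 0.2, 0.05
n = 1
while (1 - p) ** n > alpha:   # P(0 events in n trials | true rate p)
    n += 1
print(n)  # 14, matching Winepiscope and -bitesti-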
An added level of sophistication is to look at the confidence intervals. Stata offers several:
Wald (a version of the Normal approximation), "exact" (Clopper-Pearson), Wilson, Agresti-Coull, Jeffreys. 90% CIs are needed to give a one-sided 95% interval. Both the Wald & Jeffreys intervals perform poorly in this case; but Wilson, "exact" and Agresti-Coull are worth considering. In particular, the Wilson interval seems to fit with the results of -prtesti-, which may be of interest, as there are arguments that the "exact" test is in fact over-conservative (hence the quotation marks). (I could dig out the references if anyone's interested.)
cii 14 0 , exact level(90)
-- Binomial Exact --
Variable | Obs Mean Std. Err. [90% Conf. Interval]
| 14 0 0 0 .1926362*
(*) one-sided, 95% confidence interval
cii 14 0 , wald level(90)
-- Binomial Wald ---
Variable | Obs Mean Std. Err. [90% Conf. Interval]
| 14 0 0 0 0
cii 14 0 , wilson level(90)
------ Wilson ------
Variable | Obs Mean Std. Err. [90% Conf. Interval]
| 14 0 0 0 .1619548
cii 14 0 , agresti level(90)
-- Agresti-Coull ---
Variable | Obs Mean Std. Err. [90% Conf. Interval]
| 14 0 0 0 .1907622
The Agresti-Coull interval was clipped at the lower endpoint.
cii 14 0 , jeffreys level(90)
----- Jeffreys -----
Variable | Obs Mean Std. Err. [90% Conf. Interval]
| 14 0 0 0 .1260576
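(Another illustrative Python check, not from the thread: with zero events in n trials, the one-sided upper Clopper-Pearson limit solves (1 - p)^n = alpha.)

n, alpha = 14, 0.05
print(1 - alpha ** (1 / n))  # 0.19263..., matching the "exact" 90% CI above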
Date: Thu, 14 Aug 2008 11:53:33 +0200
From: "Carlo Georges" <georgesc@pt.lu>
Subject: st: Detection of disease
I tried to reproduce in Stata the calculation needed for the following case:
I need to determine the sample size required to detect the presence of
disease in a population.
The formula is rather complex, so it is difficult to paste in here.
For example, I need to detect with 95% confidence the absence of disease in
a population where the presumed prevalence would be 20%. How large a sample
size do I need to be 95% certain that the population is free from disease?
I used the freeware program "Winepiscope", which calculated a sample size of 14.
In Stata I tried: sampsi 0.2 0, power(0.9) onesample
and I get a result of 16.
Can Stata handle this type of calculation?
|
Essay on The First Book of Euclid's Elements
It is a tragedy of mathematics that what we know of Euclid's life is so meager. It is because of Euclid's work that mathematics was able to progress so rapidly in classical times. Euclid's masterpiece The Elements was so popular that it became the most widely read book until the twentieth century (with the exception of The Bible). However, our tragedy lies in how little we actually know about Euclid. The sources on Euclid's life are but a few passages from commentators who claim they knew who Euclid was. To this day, there have been only five commentators on Euclid: Proclus, Heron, Porphyry, Pappus, and Simplicius (Proclus being the most prominent).1 Before the twentieth century, The Elements was considered the authoritative standard of basic concepts in geometry and number theory.
It is speculated that Euclid flourished around 300 BCE. This has been estimated from one of the passages that Proclus wrote in his Commentary on the First Book of Euclid's Elements. The passage is as follows:
"Not much younger than these(sc. Hermotimus of Colophon and Philippus of Medma) is Euclid, who put together the Elements, Collecting many of Eudoxus' theorems, perfecting many of Theaetetus', and
also bringing to irrefragable demonstration the things which were only somewhat loosely proved by his predecessors. This man lived in the time of the first Ptolemy. For Archimedes, who came
immediately after the first (Ptolemy), makes mention of EuclidKHe is then younger than the pupils of Plato but older than Eratosthenes and Archimedes; for the latter were contemporary with one
another, as Eratosthenes somewhere says."2
Here Proclus mentions people whose life spans we know more precisely, such as Plato (d. 347/6 BCE), Archimedes (d. 212 BCE), Eratosthenes (d. 204 BCE), and Ptolemy Soter (d. 283 BCE). So here we are able to say that Euclid flourished around 300 BCE, but the exact dates of Euclid's birth and death cannot be...
|
Hangman 1
Re: Hangman 1
That x is a multiplication sign, isn't it?
|
Math Forum Discussions
Topic: how to assign multiple structure elements at once
Replies: 5 Last Post: Nov 17, 2012 10:01 AM
Fatih
Re: how to assign multiple structure elements at once
Posted: Nov 16, 2012 6:53 AM
"Matt J" wrote in message <k83cl8$6ft$1@newscl01ah.mathworks.com>...
> "Fatih" wrote in message <k83bfk$1rf$1@newscl01ah.mathworks.com>...
> > Hi,
> >
> > I'm trying to assign multiple values to second element of a field of multiple structure elements. Better I give you an example to explain that:
> >
> > let's say the values are = 1,2,3
> > structure is = a
> > field is = s
> >
> > then the result should look like this:
> > a(1).s(2) = 1
> > a(2).s(2) = 2
> > a(3).s(2) = 3
> =================
> My answer to posts like these is always to question the way the data is being organized, so I'll do that again here.
> Given what you have above, it looks like all a(i).s are the same size. If some a(j).s is of length 1, for example, this makes no sense. Assuming this to be the case, it probably makes sense for you to make 'a' a scalar struct instead and do
> a.s(:,2)=[1;2;3];
> Either that, or hold s in a separate array.
Hi Matt,
Thanks for your post; however, it does not help me much, because the structure organization cannot be the one you suggested.
Say I have 10 buses (a) operating daily on a route, and every hour I sample the number of passengers (s) riding each of those buses. Any bus can break down at a sampling time, and I only record data for the working buses. So once a bus stops working, its number-of-passengers field is no longer updated.
So in this example a.s(:,2) = [1;2;3] does not work. Maybe I should change the organization, or just use a for loop (see the sketch below).
I was thinking there might be a trick like the deal function that fits here nicely.
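(An illustrative analogue in Python, since the ragged per-bus histories rule out a single rectangular assignment; the thread itself is about MATLAB, and the loop below simply mirrors the for-loop fallback. The data values here are made up.)

buses = [{"s": [5, 0]}, {"s": [3, 0, 7]}, {"s": [9, 0]}]   # histories may differ in length
values = [1, 2, 3]
for bus, v in zip(buses, values):
    bus["s"][1] = v               # Python index 1 corresponds to MATLAB's s(2)
print([bus["s"][1] for bus in buses])   # [1, 2, 3]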
|
Dear members, I have to present a report on best overtime calculation practices in Pakistan, covering overtime calculation techniques and the treatment of allowances in the calculation of overtime.
What are the normal practices here? I request members to provide guidance. Thanks, Saleem
Umer Raza Bhutta
Nice discussion point. I think the best way is to go through the law. There is a calculation formula which I can send you by email. Please mail me your email ID.
Muhammad Usman Muhammad Ayoob
Dear Saleem,
The most common overtime practice in industry is to calculate from the monthly salary, i.e. salary / (days in month) / 8 * hours; around 70% of industries use this formula.
Multinational and other leading firms calculate with the formula salary / 26 / 8 * 2 * hours.
These are the main formulas.
Naveed Ansari
Well said, Mr. Usman.
The best calculation for OT is salary / 26 / 8 * 2 * hours.
saleem uddin
Dear Mr. Usman, Naveed, and Umer, how are you all doing? Thanks for the precious guidance and contributions to the discussion. Hi Umer! My email is saleem_dastagir@hotmail.com
Abid Hussain
That's right; almost all companies follow the above-mentioned formula.
Any query regarding overtime is welcome.
Babur Yaqub
The legal side of it can be covered by using (Gross Salary) / (30 * 8) * 2 * OT hours.
Anyway, there are lots of companies which use the above-mentioned formula as well. Regards
Mahboob Ahmed
Why are we dividing the salary by 26, and why not by 30 or 31?
Babur Yaqub
30 - 4 weekend holidays = 26; I hope that answers your question.
For people we pay on a daily basis (usually superannuated expert-level individuals) we use the same sort of formula, i.e. 30 - 8 weekend days = 22 working days. As to the second part of your question (why not 31): because the accounting month is set at 30 days.
Mahboob Ahmed
Wow, we have two different options: Gross Salary / 26 as per Naveed and Abid Hussain, and Gross Salary / 30 as per Babur Yaqub for legal cover?
Dear friends, why do we have to have two different standards? The 4 holidays are paid leave made mandatory by law, and if an employee is asked to work on these days he is paid extra. These extra days are paid at the rate of Gross / 30, whereas overtime is paid at double the rate of pay per hour. Think about it.
Awais Arshad
Mahboob sb, overtime excludes Sundays. If an employee works on a Sunday, he is paid one day's salary. The standard formula is salary / 26 / 8 * 2 = per-hour overtime.
Babur Yaqub
OT means working hours 'other than the prescribed/announced hours of work', so don't confuse it with a normal working day. Hence OT is payable for work after office hours and/or on weekends. Some employers (e.g. my employer) pay a minimum of 8 hours of OT for work on weekends (we have Sat/Sun off, and this has been the practice since the company came into being in 1958); this is well above the minimum which the law requires. If the OT hours exceed 8, the employee is paid accordingly.
The 26-day formula stated by many informed people here is in general use, but it means that these employers are paying above the minimum rate payable under the law, which I stated above.
One more factor is CBA agreements with employees' unions, because they can influence several of the pay and allied benefits and how they are awarded or calculated.
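(An illustrative comparison in Python of the two divisors discussed above; the salary and hours are made-up figures.)

gross, ot_hours = 26000, 10
rate_26 = gross / 26 / 8 * 2   # per-hour OT at double rate, 26-day divisor
rate_30 = gross / 30 / 8 * 2   # per-hour OT at double rate, 30-day divisor
print(rate_26 * ot_hours)      # 2500.0
print(rate_30 * ot_hours)      # about 2166.67; the 26-day divisor pays more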
|
Guttenberg, NJ SAT Math Tutor
Find a Guttenberg, NJ SAT Math Tutor
...I find that building a relationship with students and discovering their unique learning strengths and weaknesses is the best way to tutor. It has been my experience that a student can learn
best when he/she understands how he/she can best learn. I first began to tutor at an after school center.
27 Subjects: including SAT math, English, reading, writing
...I engage my students in an active learning style which favors asking them questions so that they piece together their understanding of new material from prior knowledge. I do not simply
explain material while my students nod their heads. It is important to relate material to examples from everyday life.
24 Subjects: including SAT math, chemistry, physics, geometry
...This includes answering 85% correct on the Math portion of the test. Additionally, I excel in English, and this allowed me to score well on sentence structure and reading comprehension
questions. I've always been strong in Spelling, averaging 95-100 in grammar school.
41 Subjects: including SAT math, reading, English, physics
...This started as a twice a week commitment with a couple of students and quickly grew into a full fledged undertaking with five to eight tutoring sessions a week for 4 or 5 students. I always
enjoyed helping students achieve their goals and found myself caring deeply about their success. I have...
19 Subjects: including SAT math, chemistry, calculus, geometry
I graduated with a Bachelor of Science with a major in Biomedical Sciences and a minor in Psychology. I am currently wrapping up graduate school in Dental Hygiene at New York University and will
be applying to dental school for the 2015 school year. I am currently on the Dean's list and in an hono...
20 Subjects: including SAT math, reading, chemistry, biology
Related Guttenberg, NJ Tutors
Guttenberg, NJ Accounting Tutors
Guttenberg, NJ ACT Tutors
Guttenberg, NJ Algebra Tutors
Guttenberg, NJ Algebra 2 Tutors
Guttenberg, NJ Calculus Tutors
Guttenberg, NJ Geometry Tutors
Guttenberg, NJ Math Tutors
Guttenberg, NJ Prealgebra Tutors
Guttenberg, NJ Precalculus Tutors
Guttenberg, NJ SAT Tutors
Guttenberg, NJ SAT Math Tutors
Guttenberg, NJ Science Tutors
Guttenberg, NJ Statistics Tutors
Guttenberg, NJ Trigonometry Tutors
|
seawater package
The seawater package
The seawater package provides basic functions in Python for physical properties of sea water. The scope is similar to the MATLAB toolboxes SEAWATER from CSIRO and parts of OCEANS from Woods Hole.
Note added 2011-02-07. With the new Gibbs-function-based equation of state TEOS-10, this seawater package is becoming obsolete. It is retained here under the MIT license for backward compatibility.
Work is underway, headed by Filipe Fernandes, to implement the new equation of state in Python. See the python-gsw page.
Most of the formulas used are taken from Unesco's joint panel on oceanographic tables and standards, UNESCO 1981 and UNESCO 1983.
The present version is 1.1, released 07 November 2004.
User documentation
The functions are mainly polynomials, with an occasional fractional exponent. The functions can therefore be used without NumPy to compute a single value. With NumPy the functions act like "ufuncs" in the NumPy sense. This means that they can take array arguments and return an array of the same shape.
Public functions:
Density related
dens(S,T,P) Density of sea water kg/m**3
svan(S,T,P) Specific volume anomaly m**3/kg
sigma(S,T,P) Density anomaly kg/m**3
drhodt(S,T,P) Temperature derivative of density kg/(K*m**3)
alpha(S,T,P) Thermal expansion coefficient 1/K
drhods(S,T,P) Salinity derivative of density kg/m**3
beta(S,T,P) Salinity expansion coefficient
Salinity related
salt(R,T,P) Salinity
cond(S,T,P) Conductivity ratio
Heat related
heatcap(S,T,P) Heat capacity J/(kg*K)
adtgrad(S,T,P) Adiabatic lapse rate K/dbar
temppot(S,T,P,Pref) Potential temperature °C
temppot0(S,T,P) Potential temperature °C
freezept(S,P) Freezing point °C
soundvel(S,T,P) Sound velocity m/s
depth(P,lat) Depth m
S Salinity
T Temperature °C
P Pressure dbar
R Conductivity ratio
Pref Reference pressure dbar
lat Latitude deg
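(A minimal usage sketch, assuming the module imports as seawater and follows the call signatures tabulated above; the exact return values depend on the UNESCO polynomials.)

import seawater
rho = seawater.dens(35.0, 10.0, 0.0)               # density at S=35, T=10 degC, surface pressure
theta = seawater.temppot(35.0, 10.0, 1000.0, 0.0)  # potential temperature referenced to 0 dbar
print(rho, theta)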
Test scripts
The test directory contains several test scripts. The script test.py tests some standard values in the UNESCO 1983 report. The script testwater.py prints out part of table A3.1 in the book by Gill (1982). To recreate the potential temperature values, the alternative function temppot0, implementing the algorithm from Bryden (1973), is used.
The scripts testdens, testsalt, testheat, and testmisc computes and prints tables from the UNESCO 1983 report.
The seawater package is written in pure Python and should work on any system. NumPy is not necessary, but highly recommended. To install the module, download the file seawater.tar.gz, unpack it, and install with Python's standard distribution utility, setup.py.
Bryden, 1973
New polynomials for thermal expansion, adiabatic temperature gradient and potential temperature gradient of sea water, Deep-Sea Res. 20, 401-408
A. E. Gill, 1982
Atmosphere-Ocean Dynamics, Academic Press.
UNESCO 1981
Tenth report of the joint panel on oceanographic tables and standards, Unesco technical papers in marine science, 36.
UNESCO 1983
N.P. Fofonoff and R.C. Millard Jr., Algorithms for computation of fundamental properties of seawater, Unesco technical papers in marine science, 44. [pdf]
Bjørn Ådlandsvik (bjorn@imr.no)
Institute of Marine Research
Last modified: 19 December 2012
|
Adelphi, MD SAT Math Tutor
Find an Adelphi, MD SAT Math Tutor
...Since graduating I have begun to tutor again and being new to the area I am currently working to expand my number of students. I believe that everyone can learn math and value it as a problem
solving tool. I expect my students to be open about their needs, concerns and communicate what works well for them.
22 Subjects: including SAT math, calculus, geometry, GRE
...Private residence within 15 mile radius of Bowie (someone 18 or over must be home)... in home tutoring is generally available for the first or last scheduled sessions of the day. The location
of slots before and after open slots dictate location of sessions for open slots. WHAT YOU SHOULD EXPEC...
33 Subjects: including SAT math, English, reading, geometry
...My name is Sonja. I'm happy to say that 95% of my WyzAnt students report a 1 to 3 letter grade increase (those students who already have an "A" or "4" maintain their grade). This data is
up-to-date as of 11/24/2013. Over 80% of my students who improve achieve better grades in a month.
10 Subjects: including SAT math, geometry, algebra 1, GED
...I can tutor all high school math including calculus, pre-calculus, trigonometry, geometry and Algebra I&II; plus SAT, GRE, ACT and other standardized test preparation. I can also tutor Physics
and first year Chemistry.I have tutored this subject successfully in the past. In the past I was both a faculty high school mathematics instructor and a junior college instructor.
28 Subjects: including SAT math, chemistry, calculus, physics
...As a highly qualified and certified teacher in math, English, social studies, and ESOL, I have worked with students in providing the organizational skills, goal setting, and study skills to
achieve the best results in the academic setting. In addition, I have also worked with students in the tut...
24 Subjects: including SAT math, English, reading, writing
Related Adelphi, MD Tutors
Adelphi, MD Accounting Tutors
Adelphi, MD ACT Tutors
Adelphi, MD Algebra Tutors
Adelphi, MD Algebra 2 Tutors
Adelphi, MD Calculus Tutors
Adelphi, MD Geometry Tutors
Adelphi, MD Math Tutors
Adelphi, MD Prealgebra Tutors
Adelphi, MD Precalculus Tutors
Adelphi, MD SAT Tutors
Adelphi, MD SAT Math Tutors
Adelphi, MD Science Tutors
Adelphi, MD Statistics Tutors
Adelphi, MD Trigonometry Tutors
Nearby Cities With SAT math Tutor
Aspen Hill, MD SAT math Tutors
Avondale, MD SAT math Tutors
Berwyn, MD SAT math Tutors
Chillum, MD SAT math Tutors
Colesville, MD SAT math Tutors
College Park SAT math Tutors
Glenmont, MD SAT math Tutors
Green Meadow, MD SAT math Tutors
Hillandale, MD SAT math Tutors
Landover, MD SAT math Tutors
Langley Park, MD SAT math Tutors
Lewisdale, MD SAT math Tutors
North Bethesda, MD SAT math Tutors
Takoma Park SAT math Tutors
Wheaton, MD SAT math Tutors
|
Rolling Meadows Statistics Tutor
Find a Rolling Meadows Statistics Tutor
...Finally, I have been using statistical concepts ever since, from teaching college-level physics to building courses for professional auditors. I was a student of standardized tests, achieving
the highest possible score on the math section of the ACT, as well as high scores on the equivalent sect...
12 Subjects: including statistics, geometry, algebra 1, GED
...I have tutored students of varying levels and ages for more than six years. While I specialize in high school and college level mathematics, I have had success tutoring elementary and middle
school students as well. I have experience working with ACT College Readiness Standards and have been successful improving the ACT scores of students.
19 Subjects: including statistics, calculus, geometry, algebra 1
...Both the student's, parent's, and my time is valuable, and I don't want to compromise any potential for students to learn. I greatly appreciate your understanding. I am very excited to start
tutoring students again to help them increase their knowledge and excel in school.
15 Subjects: including statistics, chemistry, Spanish, biology
...Exponential and Logarithmic Models. Trigonometric Functions of Angles. Angle Measure.
17 Subjects: including statistics, reading, calculus, geometry
...I use both analytical as well as graphical methods or a combination of the two as needed to cater to each student. Having both an Engineering and Architecture background, I am able to explain
difficult concepts to either a left or right-brained student, verbally or with visual representations. ...
34 Subjects: including statistics, reading, writing, English
|
Retrieving and Organizing Web Pages by "Information Unit"
Wen-Syan Li K. Selçuk Candan Quoc Vu Divyakant Agrawal
C&C Research Laboratories, NEC USA, Inc.
110 Rio Robles, M/S SJ100, San Jose, CA 95134, USA
Tel:(408)943-3008 Fax:(408)943-3099
Copyright is held by the author/owner(s).
WWW10, May 1-5, 2001, Hong Kong.
ACM 1-58113-348-0/01/0005.
Since WWW encourages hypertext and hypermedia document authoring (e.g., HTML or XML), Web authors tend to create documents that are composed of multiple pages connected with hyperlinks or frames. A
Web document may be authored in multiple ways, such as (1) all information in one physical page, or (2) a main page and the related information in separate linked pages. Existing Web search engines,
however, return only physical pages. In this paper, we introduce and describe the use of the concept of information unit, which can be viewed as a logical Web document consisting of multiple physical
pages as one atomic retrieval unit. We present an algorithm to efficiently retrieve information units. Our algorithm can perform progressive query processing over a Web index by considering both
document semantic similarity and link structures. Experimental results on synthetic graphs and real Web data show the effectiveness and usefulness of the proposed information unit retrieval approach.
Keywords: Web proximity search, link structures, query relaxation, progressive processing
To find the announcements for conferences whose topics of interests include WWW, a user may issue a query ``retrieve Web documents which contain keywords WWW, conference, and topics.'' We issued the
above query to an Internet search engine and surprisingly found that the returned results did not include many relevant conferences. The main reason for this type of false drops is that the contents
of HTML documents are often distributed among multiple physical pages and are connected through links or frames. With the current indexing and search technology, many search engines retrieve only
those physical pages that have all the query keywords. It is crucial that the structure of Web documents (which may consist of multiple pages) is taken into consideration for information retrieval.
In this paper, we introduce the concept of an information unit, which can be viewed as a logical Web document, which may consist of multiple physical pages as one atomic retrieval unit. The concept
of information units does not attempt to produce additional results that are likely to be more than users can browse. Instead, our approach is to keep those exactly matched pages at the top of the
ranked list, while merging a group of partially matched pages into one unit in a more organized manner for easy visualization and access for the users. In other words, unlike the traditional
keyword-based query relaxation, the information unit concept enables Web-structure based query relaxation in conjunction with the keyword-based relaxation.
Let us denote the set of Web pages containing a given keyword $k_i$ as $R_i$.
Figure 2 illustrates the query results that are generated progressively for a query with two keywords. In this figure, solid lines indicate reuse and dotted lines indicate derivation. Class 0 information units are single documents which contain both keywords; class $i$ information units are pairs of documents, one containing each keyword, connected within distance $i$. Each class is derived by reusing (1) the intermediate query results of the lower classes, (2) the class 0 results (indicated by dashed arrows), and (3), if necessary, fresh computation.
Figure 1: Examples for notations
Note, however, that this example considers only queries with two keywords, which are very common. Processing queries with three or more keywords is a substantially more difficult task, since it involves graphs instead of paths. A study [1] has shown that most user queries on the Web typically involve two words. With query expansion, however, query lengths increase substantially. As a result, most existing search engines on the Web do not provide query expansion functionality. The information unit concept and technique provide an alternative to query relaxation not by keyword semantics, but by link structure.
This example highlights the three essential requirements of information unit-based retrieval on the Web:
• Users are not interested in just a single result, but in the top $k$ results.
• While generating these results, higher-ranked results should be produced as early as possible; that is, query processing should be progressive.
• And, since the Web is large and a simple search usually results in thousands of hits, preprocessing and any computation which requires touching or enumerating all pages is not feasible.
Note that this does not mean that we do not know of the structure of the Web. In our implementation, we used a Web index we maintain at NEC CCRL, to identify logical Web documents. The index contains
information regarding the Web pages, as well as the incoming and outgoing links to them. In this paper, we present our research on progressive query processing for retrieving information units
without pre-computation or the knowledge of the whole search space. We present experimental results on both synthetic graphs and real Web data.
The rest of the paper is organized as follows. In Section 2, we provide formal definitions of the more general problem of information unit-based retrieval. In Section 3, we describe the query
processing techniques and generalize the framework to more complex scenario. In Section 4 we describe how our technique can be extended to deal with fuzziness in keyword-based retrieval, such as
partial match and different importance of query keywords. In Section 5 we present experimental results for evaluating the proposed algorithm on actual Web data. In Section 6, we review related work
and compare it with our work. Finally, we present concluding remarks in Section 7.
Data and Query Models for Retrieval by Information Unit
In this section, we describe the query model for information unit-based retrieval and start with the definitions for the terms used in the rest of the paper:
• The Web is modeled as a directed graph, $G = (V, E)$, where the vertices $V$ are the Web pages and the edges $E$ are the hyperlinks between them.
• The dictionary, $D$, is the set of all keywords.
• There is a page-to-keyword mapping which associates each page with the set of keywords it contains.
• Finally, we also assume that there is a cost function which assigns a weight to each edge of the graph.
We denote the set of Web pages containing a given keyword $k_i$ as $R_i$.
Given the Web, $G$, and a query, $Q = \{k_1, \ldots, k_n\}$, an answer to $Q$ is a connected subgraph of $G$ whose pages collectively contain all the query keywords; its cost is the sum of the weights of its edges.
A minimal answer to $Q$ is an answer no proper subgraph of which is also an answer.
Figure 3: Three answers
Let us assume that the user issues a three-keyword query over the graph shown in Figure 3. In this figure, the edge weights describe the strength of the association between pages. Such association could depend on the type of connection; for example, the weights can be calculated based on the number of actual links between the two pages.
The solid lines in this figure denote the links that are going to be used for computing the cost of the information units (these are the lines on the smallest tree) and the dashed lines denote the links that are being ignored since their cost is not the minimum. For instance, there are at least two ways to connect all three vertices in cluster A. One of these ways is shown with dark lines, and the sum of the corresponding edge weights is 12. Another possible way to connect all three vertices would be to use the dashed edges with weights 7 and 8. Note that if we were to use the second option, the total edge weight would be 15, i.e., larger than the 12 that we can achieve using the first option. Consequently, the cost of the best minimal answer is 12, not 15.
graph). In the next section, we examine algorithms for query processing that find the results by discovering the graph (or the Web) incrementally.
Query Processing
Conventionally, the answer to a Web query is an ordered list of pages, where the order reflects the rank of a page with respect to the given query. Consequently, we expect that the answer to an
information unit query be an ordered set of logical Web documents; i.e., an ordered list of sets of Web pages. The ranks of the information units are computed by aggregating the cost of the edges
involved in the graph representing the query result connected via link-in and link-out pages as described in the previous section. There are two ways to generate such an ordered list:
1. generate all possible results and, then, sort them based on their individual ranks, or
2. generate the results in the decreasing order of their individual ranks.
Clearly, the second option is more desirable since it generates the higher ranked results earlier thereby reducing the delay in responding to the user query. Furthermore, users on the Web in general
specify queries in the form ``give me the top $k$ results,'' so only a prefix of the ranked list is usually needed.
In this section, we develop a progressive query processing algorithm for retrieving information units on the web. Since the search space is very large, the progressive algorithm relies on local
information. The implication of relying on local information to produce answers is that the ranking of query results is approximate.
Abstract Formulation of the Problem
The problem of finding the minimum-weight connected subgraph of a graph that spans a given set of vertices is known as the Steiner tree problem [2]. The variant in which the subgraph must cover at least one vertex from each of a collection of vertex groups, if such a subgraph exists, is the group Steiner tree problem. The problem of finding the best answer, i.e. the minimal-cost information unit for a query, corresponds to the minimum-weight group Steiner tree problem; both the Steiner tree [2] and the minimum-weight group Steiner tree [3] problems are known to be NP-hard.
As a result, the minimum-weight group Steiner tree problem (and consequently, the minimum-cost information unit problem) is not likely to have a polynomial time solution, except in certain special cases, such as when the vertex degrees [4] or the number of groups [5] are suitably bounded; randomized approximation algorithms are also known [6].
Note, however, that none of the above algorithms satisfy our essential requirements:
• We are not only interested in the minimum-weight group Steiner tree; instead, what we need is the $k$ best group Steiner trees, in increasing order of cost.
• We would prefer to generate these results progressively, higher-ranked results first.
• And, since the Web is large, we can not perform any preprocessing or any computation which would require us to touch or enumerate all pages on the Web.
The solution proposed by Garg et al., for instance, does not satisfy any of these requirements: it can only provide the best solution, not a ranked list. The same holds for the classical heuristics, such as the minimum spanning tree heuristic [3], the shortest path heuristic [3], and the shortest path with origin heuristic [4]. None of these heuristics satisfy our requirements.
Algorithm for Information Unit Retrieval
In this section, we develop a heuristic query processing algorithm, to retrieve information units, that adheres to the stated requirements. Unlike the other minimum spanning tree based algorithms, we
do not generate the minimum spanning tree for the entire graph (or the entire Web). Furthermore, unlike other shortest-path based algorithms, we refrain ourselves from generating all possible
shortest paths. Note that this does not mean that we do not know of the structure of the Web. In fact, in our implementation, we used the proposed algorithm in conjunction with a Web search index we
maintain at NEC CCRL. The search index contains information regarding the Web pages, as well as the incoming and outgoing links to them.
The general idea of the algorithm is as follows. Given a query, the algorithm grows a forest of minimum spanning trees around the pages that match the query keywords, and reports an information unit whenever a tree comes to cover all the keywords. Figure 4 depicts the essential parts of this algorithm.
Figure 4: A generalized algorithm for progressive querying of information unit
Figure 5: The subroutines used by the algorithm
As shown in the figure, the algorithm, RetrieveInformationUnit, assumes as input the graph, $G$, the query keywords, and the number of requested results.
Lines 5-7 of Figure 4 are the initialization steps for the control variables. The main loop is depicted in lines 8-35 of the algorithm. The algorithm effectively grows a forest of MSTs. During each iteration of the loop, we use a subroutine, chooseGrowthTarget (Figure 5), to choose among different ways to grow the forest. Note that depending on the goal, we can use different choice strategies. In this paper, we describe two strategies: the minimum edge-based strategy and the balanced MST strategy. These two strategies result in different degrees of error and complexity. In Sections 3.3.1 and 3.3.2, we evaluate the two strategies and discuss their quality and cost trade-offs.
Note that, in this algorithm, we assume the costs of all neighboring vertices to be given, but in a real implementation this cost can be computed on the fly. The cost may be based on a variety
of factors. For example, the neighboring vertex is in the same domain versus outside the domain or relevancy of links based on anchor text and/or URL strings. Much research on this issue has been
done in the scope of efficient crawling [7].
After choosing an MST and an edge for growth, the algorithm checks (line 10) whether the inclusion of this edge causes two MSTs to merge. If this is indeed the case, the algorithm next checks if the resulting subgraph can satisfy the query (lines 17-22). Essentially, this check determines if the new MST has a set of vertices (pages) such that the pages collectively include each keyword in the query. If so, the corresponding query results are enumerated (through the subroutines EnumerateFirst and EnumerateNext) and output (lines 25-31); the algorithm terminates once the requested number of results has been produced.
Note that the minimum spanning tree computation (lines 11 and 14) of this algorithm is incremental. Since the algorithm visits edges in the increasing order of the costs, the tree is constructed
incrementally by adding neighboring edges while growing a forest (multiple trees); consequently, it overcomes the weaknesses of both Kruskal's and Prim's algorithm when applied to the group Steiner
tree generation on the Web. In particular, the algorithm proposed here does not require complete knowledge of the Web and it does not get stuck at one non-promising seed by growing a single tree.
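(A compressed sketch in Python of the minimum edge-based growth strategy as we read it from the description above; it is not the authors' code. Edges are scanned in increasing cost, the two trees an edge connects are merged union-find style, and a tree is reported as a candidate information unit the first time it covers every keyword group. A full implementation would then trim each reported tree back to a Steiner tree and rank the results by its weight.)

def candidate_units(edges, groups):
    # edges: list of (cost, u, v); groups: list of node sets, one per keyword
    nodes = {u for _, u, v in edges} | {v for _, u, v in edges}
    parent = {n: n for n in nodes}
    covered = {n: {i for i, g in enumerate(groups) if n in g} for n in nodes}

    def find(x):                          # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for cost, u, v in sorted(edges):      # visit edges in increasing cost
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                      # edge is internal to one tree; skip
        done = len(covered[ru]) == len(groups) or len(covered[rv]) == len(groups)
        parent[ru] = rv                   # merge the two minimum spanning trees
        covered[rv] |= covered.pop(ru)
        if len(covered[rv]) == len(groups) and not done:
            yield rv                      # root of a tree covering all keywords

For a two-keyword query, groups would simply be the two page sets $R_1$ and $R_2$.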
Next, we use an example to further illustrate the details of the algorithm. In this example, we will assume that the chooseGrowthTarget subroutine simply chooses the least cost edge for growth.
Figure 6: Group Steiner tree heuristic execution
Figure 7: Use of minimum spanning trees and cleanup operation for Steiner tree heuristic
Figure 6(a) shows an undirected graph (the undirected representation of the Web) together with the keyword-to-page mapping.
1. Figure 6(b) shows the first step in the algorithm. An edge with the smallest cost has been identified, and the endpoints of the edge are inserted into the growing forest.
2. Figure 6(c) shows the step in which the algorithm identifies the first information unit. At this stage, the algorithm first identifies a new possible solution, and it verifies that the solution is within a single connected subgraph as a whole. Next, the algorithm identifies the corresponding Steiner tree through a sequence of minimum spanning tree computations and clean-ups (these steps are not shown in the figure). The resulting Steiner tree is shown with thicker lines. Note that the dark nodes denote the physical pages which form the information unit. At the end of this step, the algorithm outputs the solution along with its total cost.
3. Finally, Figure 6(d) shows the algorithm finding the second and last Steiner tree. In this step, the newly added edge reduces the number of individual connected components in the graph to one. Consequently, the algorithm identifies a new solution. The algorithm again identifies the corresponding Steiner tree through a sequence of minimum spanning tree computations and clean-ups; these steps are shown in Figure 7:
1. Figure 7(a) shows the state of the graph at the beginning of this step. Note that the subgraph denoted with darker lines is connected and subsumes all the keyword vertices, also denoted darker.
2. Figure 7(b) shows the minimum spanning tree of this subgraph.
3. Figure 7(c) shows the Steiner tree obtained by trimming the minimum spanning tree.
At the end of this step, the algorithm outputs the second solution.
Evaluation of the Algorithm
Since the proposed algorithm is based on local analysis and incomplete information, it is not surprising that it has limitations when compared to the optimal solution. Note that the algorithm is polynomial whereas, as we discussed earlier, finding the optimal solution is NP-hard. We now point to the cases in which the solution of our algorithm differs from the optimal solution through a series of examples. In this section, in order to show the trade-off between the quality and the cost of the heuristic, we will use the two different chooseGrowthTarget subroutines shown in Figure 8. Intuitively, the first function chooses the smallest weighted edge at each iteration, whereas the second one aims at growing the MSTs in a balanced fashion. In Section 5, we will provide experimental evaluations of the algorithm and investigate the degree of divergence of both heuristics from optimality.
Figure 8: Two subroutines for choosing among growth candidates
The algorithm is complete in the sense that, given enough time, it can identify all information units in a given graph, and sound in the sense that it does not generate incorrect information units. The completeness is due to the fact that, once an MST is generated, all information units on it are discovered. Since, given enough time, the algorithm will generate an MST that covers the entire graph, the algorithm is complete. Note that completeness does not imply that the information units will be discovered in the correct order.
Evaluation of the Minimum Edge-based Strategy
Let us consider the graph shown in Figure 9(a). As per the chooseGrowthTarget subroutine, the edges chosen will be those shown darkened in Figure 9(b). At this stage, the algorithm will generate the one and only information unit for the three keywords; the cost or rank associated with this information unit is the sum of the weights of the three darkened edges, which is 12. However, there exists a Steiner tree of lower cost (in fact, there are three possible lower-cost Steiner trees).
Figure 9: Sub-optimal solutions
Next, consider the example illustrated in Figure 10(a), from which we want to extract three-keyword information units. Figure 10(b) illustrates the state of the system after the first edge (with cost 2) is added by the algorithm. After this edge, the next three edges that are included are all the edges of cost 5, as shown in Figure 10(c). At this point, the algorithm generates the information unit shown by the dotted region, and this information unit has cost 15 associated with it. In the next step, the algorithm adds the edge with cost 6 (Figure 10(d)); the next information unit, output after this step, is shown by the dotted region. However, the cost associated with this information unit is 13, which is smaller than that of the information unit generated earlier by the algorithm. This example illustrates that the proposed algorithm may not generate the results in increasing order of rank.
Figure 10: Out-of-order solutions
Figure 11: (a) A case where the heuristic fails. (b) and (c) Worst-case quality estimation scenarios.
Figure 11(a) shows the simplest case in which the heuristic does not return the best solution. Figures 11(b) and 11(c) show the worst-case quality estimation scenarios: there, the overestimation ratio of the heuristic grows in proportion to the number of vertices in the minimum spanning tree (compare also Figure 9(b)). Consequently, every solution returned by the heuristic lies within a bounded overestimation range of the optimal cost. In Section 5, we provide an experimental evaluation of the algorithm to see how much it diverges from the optimal results. The results show that the divergence in reality is much smaller than the worst-case analysis above suggests (it is almost a constant factor) for information unit retrieval purposes.
Complexity. Let us assume that the maximum number of edges incident on a vertex is bounded by a constant. The worst-case execution time of the proposed algorithm is then polynomial in the number of edges visited:
• using a heap, maintaining the sorted list of candidate edges costs logarithmic time per edge;
• since the spanning trees are incrementally constructed, each edge insertion takes near-constant amortized time;
• enumerating newly found results and trimming the MSTs into Steiner trees takes time proportional to the sizes of the trees involved.
Note that, due to the large size of the Web, if the user wants to get all possible results, the search can still be expensive; the search space can be reduced by:
• Splitting the Web into domains, and limiting the search into only intra-domain Web documents. Since, the input graph is divided into multiple, relatively small, sub-graphs, this significantly
reduces the search space. As we pointed out in Section 1, the average size of a domain is around 100 documents.
• Assigning an upper bound on the total cost or on the number of vertices of an acceptable Steiner tree. For example, we may restrict the search for related pages forming an information unit to within a fixed radius.
• Limiting the fan-out of every vertex in the graph to a predetermined small constant. This causes the graph to be sparse thereby reducing the search time. Of course, it is a major challenge to
identify which of the outgoing or incoming edges should be kept.
In Section 5, we provide experimental evaluation of the complexity of the algorithm (in terms of edges and nodes visited during the execution). Empirical evaluation is needed to determine the
efficacy of the above constraints.
Evaluation of the Balanced MST based Strategy
In the previous section, we have seen that the overestimation ratio is proportional to the number of vertices in the MST. Consequently, in order to minimize the overestimation ratio, it is important
to prevent the formation of long MSTs. Furthermore, in order to minimize the absolute value of the overestimation, we need to prevent very large MSTs to form. The next strategy that we describe
improves on the previous one on these aspects. Later in Section 4, we show that the balanced MST based strategy is essential to the extension for dealing with fuzziness in keyword-based retrieval.
The subroutine chooseGrowthTarget performs a look-ahead before choosing the next MST. It identifies all possible ways to grow the forest and chooses the growth option which results in the smallest growth. Consequently, the minimum spanning trees are created in increasing order of their cost; however, since the MST-to-Steiner-tree conversion is suboptimal, results may still be suboptimal.
Let us reconsider the graph shown in Figure 9(a). As per this chooseGrowthTarget subroutine, the cheapest edges are again the candidates, but ties are now broken in favor of the option that yields the smallest resulting tree.
This strategy prevents the formation of long chains and instead favors the creation of balanced size minimum spanning tree clusters. Note that since the overestimation ratio is proportional to the
number of vertices in the MST, this results in a reduction in the amount of overestimation. This reduction is especially large when the edge weights in the graph are mostly the same, as when two
edges have the same weight, the first strategy does not dictate a preference among them.
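(Continuing the Python sketch above, a hypothetical look-ahead choice function for the balanced strategy: among all frontier edges, pick the merge that yields the cheapest combined tree, falling back to edge cost to break ties.)

def choose_growth_target(frontier, tree_cost, find):
    # frontier: candidate (cost, u, v) edges joining two different trees;
    # tree_cost: dict mapping each tree's root to its current total weight
    return min(frontier,
               key=lambda e: (tree_cost[find(e[1])] + tree_cost[find(e[2])] + e[0],
                              e[0]))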
Complexity. The complexity of the second strategy is similar to the complexity of the first strategy:
• using a heap structure, maintaining the list of the MSTs costs logarithmic time per operation;
• since the spanning trees are incrementally constructed, each growth step takes near-constant amortized time;
• enumerating newly found results and trimming the MSTs again takes time proportional to the sizes of the trees involved.
Comparing the worst-case complexities of the balanced MST strategy and the edge-based strategy:
• Sorting and returning newly found results takes the same amount of time in both strategies.
• The amount of time spent creating MSTs is the same in both algorithms.
• The time spent trimming MSTs is the same in both algorithms.
Consequently, in the worst case, the complexity of the balanced MST based growth strategy is less than the complexity of the edge based strategy.
Note, however, that among the four components we identified above, the time spent creating the MSTs is the most important one, as it describes the number of Web pages that may need to be visited and processed (unless the information is readily available in an index). For this component, both strategies have the same worst-case complexity.
Dealing with Partial Matches and Fuzziness in Retrieval
In this section, we describe how to extend the algorithm to deal with fuzzy and partial keyword-based retrieval problems. More specifically, we address the following issues:
• Disjunctive queries: In this paper, we have formulated the Web query as a set of keywords, denoting a conjunctive query asking for all keywords. One extension to our system is to handle
disjunctive queries. A disjunctive query is a query where any one of the keywords is enough to satisfy the query criterion. Combinations of conjunctions and disjunctions can also be handled
through queries in conjunctive normal or disjunctive normal forms. The actual process for handling combinations of conjunctions and disjunctions is not discussed further in this paper.
• Partial matches (or missing keywords): In some cases, a user may issue a conjunctive query and may be willing to accept query results which do not have all the query terms. Clearly, for such a query, the user will prefer results which contain more keywords over results which contain fewer keywords. One solution is a graph transformation, described below, which lets the unmodified algorithm rank results containing more keywords ahead of those containing fewer.
• Similarity-based keyword matching: Because of the mismatch between authors' vocabularies and users' query terms, in some cases users may be interested not only in results which exactly match the query terms, but also in results which contain keywords related to the query terms. For example, a logical document which contains the keywords ``Web'' and ``symposium'' may be an acceptable result to a query asking for ``WWW'' and ``conference'' [8].
• Keyword importance: In the problem formulated in this paper, we assumed that each keyword in a given query is equally important. In practice, a term such as Y2K may be more important than the other query terms. We can thus assume that there is an importance function which assigns a weight to each query keyword.
We now describe the proposed solutions.
Partial Matches (Missing Keywords)
In order to consider query results which contain only a partial set of query terms, we need to adapt our technique to include group Steiner trees that do not fully cover all keywords. Instead of modifying the algorithm, we apply a graph transformation so that the algorithm of Section 3.2 can be used unchanged to handle partial matches. The transformation assumes that the second growth strategy is used by the algorithm.
Figure 12: Partial match transformation
The transformation is illustrated in Figure 12; its inputs are the graph and the keyword-to-page mapping. In the description of the transformation, the set of nodes that contain keyword $k_i$ is again denoted $R_i$.
Figure 13: Examples of query results considering partial matches
Note that the order in which these results are discovered depends on the penalty assigned to missing keywords. In Figure 13, we show a query example: with a low penalty, the system of Section 3.2 produces the query results in Figures 13(a) and 13(b) before the result in Figure 13(c); with a sufficiently high penalty, it produces the result in Figure 13(c) first.
Fuzzy Keyword Matching
Both fuzzy keyword matching and keyword importance require preference-based processing. In the case of fuzzy keyword matching, we are given a set of keywords and we want to accept a result even if some of the keywords do not match exactly but are merely similar; we are trying to maximize the overall similarity. In the case of importance-based retrieval, on the other hand, each keyword has a different importance and we are trying to maximize the overall importance of the matched keywords. To handle this, we again use a graph transformation.
Experimental Evaluation
As discussed in Section 3, the proposed heuristic does not always generate information units with their optimal cost and it can also provide out-of-order solutions. However, if we consider the fact
that the underlying problem of finding an information unit in the Web is NP-hard and that the graph has to be constructed incrementally, this is not unexpected. In this section, we use empirical
analysis to evaluate the overall quality of the proposed algorithm.
Evaluation Criterion
One of the first experiments we conduct is on a set of synthetic graphs that are generated randomly. The parameters of these synthetic graphs are listed in Table 1. In order to have a yardstick to
compare our results, we first perform an exhaustive search to find all information units along with their optimal costs. Next, we run our algorithm incrementally. We visualize the results of this
experiment in three ways.
In the first measure, we compute the average cost of information units under both schemes (exhaustive and heuristic) as a function of the number of top results requested.
The second plot in this experiment shows the percentage of nodes and edges used in generating the top-$k$ answers.
The third plot shows the recall ratio, which captures how much of the true top-$k$ result list the heuristic recovers.
Thus, we also visualize another parameter called the adjusted recall ratio. The adjusted recall better reflects the utility of the results for information unit retrieval: since the query result is supposed to be a sorted list of information units, the retrieval system should be penalized for returning correct results at the wrong positions. The adjusted recall is calculated as follows. Let the user ask for the top $k$ results, and let the algorithm return $m$ of the true top-$k$ information units:
• Recall ratio: the fraction $m/k$ of the true top-$k$ information units that the algorithm returns.
• Adjusted recall ratio: the recall ratio additionally weighted by the positions at which the true top-$k$ information units appear in the returned list.
Table 1: Parameters used to generate the synthetic
│ Description │ Name │ Value │
│ Number of Nodes │ NumNodes │ 100/500/1000 │
│ Number of Edges │ NumEdges │ 440/2461/4774 │
│ Minimum Node Degree │ MinDegree │ 4 │
│ Maximum Node Degree │ MaxDegree │ 12 │
│ Minimum Edge Weight │ MinWeight │ 1 │
│ Maximum Edge Weight │ MaxWeight │ 5 │
│ Number of Keywords │ │ 3 │
│ Occurrences of each kwd. │ │ 10 │
The experiments are conducted for graphs of three different sizes: 100 nodes, 500 nodes, and 1000 nodes. The degree of each node (the total number of incoming and outgoing edges at a node) is uniformly distributed between the MinDegree and MaxDegree values of Table 1. Note that we conducted all our experiments with three keywords, since it is infeasible to run the exhaustive search that we use for comparison for a larger number of keywords. The number of occurrences of each keyword in the graph is set to 10. The above setup allows us to empirically evaluate the sub-optimality of the proposed heuristic when compared to the exhaustive search.
In the second experiment, instead of using synthetic data, we conducted experiments with real Web data. We downloaded the pages from www-db.stanford.edu/people/. The graph corresponding to the Web
data consists of 236 nodes and 414 edges. Presentation slides and technical reports in PostScript format were excluded. The weight associated with each edge was set to 1. On this data, we ran a three-keyword query.
Experimental Results on Synthetic Graphs
The experiment results show that, in a neighborhood with 100 nodes, it takes on the order of 300ms (only 10-20ms of which is system time) to generate the top 10 retrieval results. When the user request is increased to top-25 retrieval, it takes on the order of 1700ms (only 20-30ms of which is system time) to generate the results. Note that, since the process is progressive, the top-25 generation time can be hidden while the user is reviewing the top-10 results list.
Figures 14, 15, and 16 depict the results of performing three-keyword queries over the 100-node, 500-node, and 1000-node synthetic graphs, respectively. Figures 14(a), 15(a), and 16(a) report the average cost of the information units in answer sets of size 10 to 100, in increments of 10. As expected, due to the sub-optimal nature of the proposed heuristic, the average cost of information units is inferior to the one produced by the exhaustive search. In the case of the 100-node graph, the cost inflation is within two times the optimal cost, and in the case of 500 nodes it is approximately 5.3 times.
Figure 14: Experimental results on synthetic data with 100 nodes and 440 edges
Figure 15: Experimental results on synthetic data with 500 nodes and 2461 edges
Figure 16: Experimental results on synthetic data with 1000 nodes and 4774 edges
Figure 17: Comparisons between the costs of the results using the progressive information unit retrieval algorithm and the optimal solutions on synthetic data with (a) 100 nodes and 440 edges; (b)
500 nodes and 2461 edges; and (c) 1000 nodes and 4774 edges
In Figures 14(b), 15(b), and 16(b), we report the percentage of nodes and edges visited to generate the answer sets under the proposed heuristic. Note that for the exhaustive search, the entire graph needs to be examined for each solution to ensure the minimal cost requirements (certain optimizations are, of course, possible). In contrast, we see that in the case of 100 nodes, we only explore about 45% of the nodes to produce up to 100 answers. The more dramatic observation is that only about 10% of the edges are explored by the heuristic to produce these answer sets.
Finally, Figures 14(c), 15(c), and 16(c) report the recall ratio and the adjusted recall ratio of the proposed heuristic. As discussed in Section 3, the proposed heuristic does not necessarily generate results in ranked order. Furthermore, the ranking of the results itself is not always as specified by the optimal order. Due to the combination of these two limitations of the proposed heuristic, we achieve less than perfect recall. As observed, the recall ratio for 100 nodes is in the range of 10% (when the size of the answer set is very small) to 50%. For 500 nodes the range is 20% to 55%, and for 1000 nodes the recall starts at 0% for the smallest answer sets.
Figures 14(a), 15(a), and 16(a) report the average cost of the information units in answer sets of size 10 to 100, in increments of 10. As we explained earlier, due to the sub-optimal nature of the proposed heuristic, the average cost of information units is inferior to the one produced by the exhaustive search. Another reason is that our algorithm explores, on average, less than 10% of the edges. To assess how well our solutions compare with the optimal solutions when both are derived from the same graph, we conducted additional experiments to find the optimal solutions for the same subgraphs our algorithm explored, rather than the whole graphs. We then compare the average cost of our sub-optimal solutions with these optimal solutions in Figure 17. The experimental results show that the average costs of our solutions are closer to the costs of the optimal solutions than the experimental results shown in Figures 14(a), 15(a), and 16(a), especially for the larger graphs with 500 and 1000 nodes.
Evaluation on Real Web Data Set
Figure 18 reports the results of our experiments with actual Web data. Figure 18(a) illustrates the average costs under the two schemes. Here we see that the average cost of the information units is within 30% of that computed by the exhaustive algorithm. The cost inflation is small because the edges in the Stanford Web data have unit cost. As shown in Figure 18(b), a larger percentage of the edges is visited because of the low connectivity of the Web pages in the chosen dataset. Finally, Figure 18(c) reports the recall ratio, which is in the range of 30% to 60%. The decline in recall between 50 and 100 results (x-axis) can be explained as follows. From Figure 18(b) we can observe that the visited portion of the graph does not change much, indicating that the graph is large enough to compute the answers in this range. Due to the greedy approach of the heuristic, when a given connected component has multiple answers, these answers are produced in a random order and not necessarily in the order of their costs. This contributes to the drop in recall. However, the adjusted recall ratio reaches almost 70% and the curve remains flat, which validates the performance of our heuristic algorithm, since it provides the same level of recall utility to the users. More interestingly, our algorithm, as shown in Figure 18(c), provides 70% adjusted recall ratio while exploring only about 30% of the nodes and 25% of the edges.
In summary, the trends observed in our experiments on synthetic data are confirmed on actual data, and the results are promising. In particular, the proposed heuristic generates information units of acceptable quality by exploring a very small part of the graph. By comparing the experimental results on the real Web data and on the synthetic graphs, we found that our heuristic algorithm performs much better on the real Web data in all categories. We examined the connectivity of the Stanford Web site and found that the link fanout is around 4 on average. We excluded presentation slides and technical reports in PDF or PostScript format from the search. Another reason for such low fanout is that personal home pages usually have low depth, and the leaf nodes reduce the average fanout. We also found that the link structures of real Web sites are more like "trees" than the highly connected "graphs" used in the experiments on synthetic data, and the algorithm performs better when searching a tree-like structure with lower connectivity.
Figure 18: Experimental results on real Web data with 236 nodes and 409 edges
Related Work
We have discussed existing work on group Steiner trees in Section 3. In this section, we give an overview of work in the area of integrating content search on the Web and Web structure analysis.
Although search engines are one of the most popular methods to retrieve information of interest from the Web, they usually return thousands of URLs that match the user-specified query terms. Many prototype systems have been built to perform clustering or ranking based on link structures [9,10] or links and context [11,12]. Tajima et al. [13] presented a technique which uses cuts (results of Web structure analysis) as querying units for the WWW. [14] first presented the concept of the information unit, and [15] extends it to rank query results involving multiple keywords by (1) finding minimal subgraphs of links and pages including all keywords; and (2) computing the score of each subgraph based on the locality of the keywords within it.
Another solution to the above problem is the topic distillation [16,17,11,9] approach. This approach aims at selecting small subsets of the authoritative and hub pages from the much larger set of
domain pages. An authoritative page is a page with many inward links and a hub page is a page with many outward links. Authoritative pages and hub pages are mutually reinforcing: a good authoritative
page is linked by good hub pages and vice versa. In [18], Bharat et al. present improvements on the basic topic distillation algorithm [16]. They introduce additional heuristics, such as considering
only those pages which are in different domains and using page similarity for mutual authority/hub reinforcement.
Similar techniques to improve the effectiveness of search results are also investigated for database systems. In [19], Goldman et al. propose techniques to perform proximity searches over databases.
In this work, proximity is defined as the shortest path between vertices (objects) in a given graph (database). In order to increase the efficiency of the algorithm, the authors also propose
techniques to construct indexes that help in finding shortest distances between vertices. In our work, in the special case of two keywords we also use shortest distances. In the more general case,
however, we use minimal group Steiner trees to gather results. Note that minimal group Steiner trees reduce to the shortest paths when the number of groups, that is, keywords, is limited to two.
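To illustrate that reduction, the two-keyword case can be answered with a plain shortest-path search between the two groups of keyword nodes. The sketch below is a generic multi-source Dijkstra over a toy adjacency list, not the algorithm of [19] or of this paper; the page names and weights are hypothetical.

```python
import heapq

def group_shortest_path(adj, sources, targets):
    """Cheapest path cost from any node in `sources` to any node in `targets`."""
    dist = {u: 0.0 for u in sources}
    heap = [(0.0, u) for u in sources]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if u in targets:
            return d  # the first target settled has the minimal cost
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return None

# Keyword 1 occurs on page p1, keyword 2 on page p4.
adj = {"p1": [("p2", 1), ("p3", 2)], "p2": [("p4", 1)], "p3": [("p4", 1)]}
print(group_shortest_path(adj, {"p1"}, {"p4"}))  # 2.0, via p1 -> p2 -> p4
```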
DataSpot, described in [20], aims at providing ranked results in a database which uses a schema-less semi-structured graph called a Web View for data representation.
Compared with existing work, our work aims at providing more efficient graph search capability. Our work focuses on progressive query processing without the assumption that the complete graph is known. Our framework considers queries with more than two keywords, which is significantly more complex. In addition, we also show how to deal with partial matches and fuzziness in retrieval.
Concluding Remarks
In this paper, we introduced the concept of the information unit, which is a logical document consisting of multiple physical pages. We proposed a novel framework for document retrieval by information
units. In addition to the results generated by existing search engines, our approach further benefits from the link structure to retrieve results consisting of multiple relevant pages associated by
linkage and keyword semantics. We proposed appropriate data and query models and algorithms that efficiently solve the retrieval by information unit problem. The proposed algorithms satisfy the
essential requirement of progressive query processing, which ensures that the system does not enumerate an unnecessarily large set of results when users are interested only in top matches. We
presented a set of experiment results conducted on synthetic as well as real data. These experiments show that although the algorithm we propose is suboptimal (the optimal version of the problem is
NP-hard), it is efficient and provides adequate accuracy.
This work by Quoc Wu was performed when the author was with NEC USA Inc.
Bruce Croft, R. Cook and D. Wilder. Providing Government Information on the Internet: Experiences with THOMAS.
In Proceedings of Digital Libraries (DL'95), 1995.
S.L. Hakimi.
Steiner's Problem in Graphs and its Implications.
Networks, 1:113-131, 1971.
G. Reich and P. Widmayer.
Approximate Minimum Spanning Trees for Vertex Classes.
In Technical Report, Inst. fur Informatik, Freiburg Univ., 1991.
E. Ihler.
Bounds on the Quality of Approximate Solutions to the Group Steiner Tree Problem.
In Proceedings of the 16th International Workshop on Graph-Theoretic Concepts in Computer Science (WG '90), pages 109-118, 1991.
N. Garg, G. Konjevod, and R. Ravi.
A Polylogarithmic Approximation Algorithm for the Group Steiner Tree Problem.
In Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '98), pages 253-259, 1998.
C.D. Bateman, C.S. Helvig, G. Robins, and A. Zelikovsky.
Provably Good Routing Tree Construction with Multi-port Terminals.
In Proceedings of the ACM/SIGDA International Symposium on Physical Design, pages 96-102, Napa Valley, CA, April 1997.
J. Cho, H. Garcia-Molina, and L. Page.
Efficient Crawling through URL ordering.
Computer Networks and ISDN Systems. Special Issue on the Seventh International World-Wide Web Conference, Brisbane, Australia, 30(1-7):161-172, April 1998.
R. Richardson, Alan Smeaton, and John Murphy.
Using Wordnet as a Knowledge base for Measuring Conceptual Similarity between Words.
In Proceedings of Artificial Intelligence and Cognitive Science Conference, Trinity College, Dublin, 1994.
David Gibson, Jon M. Kleinberg, and Prabhakar Raghavan.
Inferring Web Communities from Link Topology.
In Proceedings of the 1998 ACM Hypertext Conference, pages 225-234, Pittsburgh, PA, USA, June 1998.
Lawrence Page and Sergey Brin.
The Anatomy of a Large-Scale Hypertextual Web Search Engine.
In Proceedings of the 7th World-Wide Web Conference, pages 107-117, Brisbane, Queensland, Australia, April 1998.
David Gibson, Jon Kleinberg, and Prabhakar Raghavan.
Clustering Categorical Data: An Approach Based on Dynamic Systems.
In Proceedings of the 24th International Conference on Very Large Databases, pages 311-322, September 1998.
Sougata Mukherjea and Yoshinori Hara.
Focus+Context Views of World-Wide Web Nodes.
In Proceedings of the 1997 ACM Hypertext'97 Conference, pages 187-196, Southampton, UK, March 1997.
Keishi Tajima, Yoshiaki Mizuuchi, Masatsugu Kitagawa, and Katsumi Tanaka.
Cut as a Querying Unit for WWW, Netnews, e-mail.
In Proceedings of the 1998 ACM Hypertext Conference, pages 235-244, Pittsburgh, PA, USA, June 1998.
Wen-Syan Li and Yi-Leh Wu.
Query Relaxation By Structure for Document Retrieval on the Web.
In Proceedings of the 1998 Advanced Database Symposium, Shinjuku, Japan, December 1998.
Keishi Tajima and Kenji Hatano and Takeshi Matsukura and Ryoichi Sano and Katsumi Tanaka.
Discovery and Retrieval of Logical Information Units in Web.
In Proceedings of the 1999 ACM Digital Libraries Workshop on Organizing Web Space, Berkeley, CA, USA, August 1999.
Jon Kleinberg.
Authoritative Sources in a Hyperlinked Environment.
In Proceedings of the 9th ACM-SIAM Symposium on Discrete Algorithms, 1998.
Soumen Chakrabarti, Byron Dom, Prabhakar Raghavan, Sridhar Rajagopalan, David Gibson, and Jon Kleinberg.
Automatic Resource Compilation by Analyzing Hyperlink Structure and Associated Text.
In Proceedings of the 7th World-Wide Web Conference, pages 65-74, Brisbane, Queensland, Australia, April 1998.
Krishna Bharat and Monika Henzinger.
Improved Algorithms for Topic Distillation in a Hyperlinked Environment.
In Proceedings of the 21st Annual International ACM SIGIR Conference, pages 104-111, Melbourne, Australia, August 1998.
Roy Goldman, Narayanan Shivakumar, Suresh Venkatasubramanian, and Hector Garcia-Molina.
Proximity Search in Databases.
In Proceedings of the 24th International Conference on Very Large Data Bases, pages 26-37, New York City, New York, August 1998. VLDB.
Shaul Dar, Gadi Entin, Shai Geva, and Eran Palmon.
DTL's DataSpot: Database Exploration Using Plain Language.
In Proceedings of the 24th International Conference on Very Large Data Bases, pages 645-649, New York City, New York, August 1998. VLDB.
BLS Handbook of Methods
Chapter 7.
National Longitudinal Surveys
The NLS surveys are based upon stratified multistage random samples, with oversamples of blacks in all cohorts, oversamples of Hispanics in the NLSY79 and NLSY97, and additional oversamples of
disadvantaged nonblack non-Hispanics and youths in the military in the NLSY79. Data from each interview year include a weight specific to that year. When this weight is applied, the number of sample
cases is translated into the number of persons in the population that those observations represent.
The assignment of individual respondent weights involves at least three stages. The first stage involves the reciprocal of the probability of selection at the baseline interview. Specifically, this
probability of selection is a function of the probability of selection associated with the household in which the respondent was located, as well as the subsampling (if any) applied to individuals
identified in screening. The second stage of weighting adjusts for differential response (cooperation) rates in the screening phase. Differential cooperation rates are computed (and adjusted) on the
basis of geographic location and group membership, as well as by group subclassification. The third stage of weighting attempts to adjust for certain types of random variation associated with
sampling, as well as sample "undercoverage." The estimated ratios are used to conform the sample to independently derived population totals.
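Conceptually, the final weight is the product of the three stage factors. The sketch below illustrates that composition; the function name and all numbers are hypothetical, not taken from the NLS documentation.

```python
def respondent_weight(p_select, cooperation_rate, poststrat_ratio):
    base = 1.0 / p_select                 # stage 1: inverse probability of selection
    nonresponse = 1.0 / cooperation_rate  # stage 2: differential-response adjustment
    return base * nonresponse * poststrat_ratio  # stage 3: ratio adjustment to totals

# A household sampled with probability 1/2000, a screening cell with a 90%
# cooperation rate, and a post-stratum needing a 1.05 ratio adjustment:
w = respondent_weight(1 / 2000, 0.90, 1.05)
print(round(w))  # ~2333 persons represented by this respondent
```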
Subsequent to the initial interview of each cohort, reductions in sample size have occurred due to noninterviews (the failure, for one reason or another, of the person to be interviewed). In order to
compensate for these losses, the sampling weights of the individuals who were interviewed had to be revised. A revised weight for each respondent was calculated for each interview year, using the
method just described.
In the event that one wishes to tabulate characteristics of the sample for a single interview year in order to describe the population being represented, it is necessary to weight the observations by using the weights provided. For example, to compute the average hours worked in 1987 by individuals in the NLSY79 (persons born between 1957 and 1964 and living in the United States in 1978), one simply weights each individual's hours worked by the 1987 sample weight. The weights are correct when used in this way.
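A minimal sketch of that weighted tabulation follows; the hours and weights are made-up numbers, not NLSY79 data.

```python
hours   = [1800, 2080, 0, 1500]           # annual hours worked in 1987
weights = [950.0, 1210.5, 870.2, 1105.8]  # 1987 sample weights

weighted_mean = sum(h * w for h, w in zip(hours, weights)) / sum(weights)
print(round(weighted_mean, 1))  # population estimate of average hours worked
```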
Often, users confine their analyses to subsamples for which respondents provide valid answers to certain questions. Weighted means here will represent, not the entire population, but rather those
persons in the population who would have given a valid response to the specified questions. Nonresponse to any item because of refusals or invalid skips is generally quite small, so the degree to
which the weights are incorrect also is probably quite small. In these instances, although the population estimates may be moderately in error, the population distributions (including means, medians,
and proportions) are reasonably accurate. Exceptions to this assumption might occur for data items that have relatively high nonresponse rates, such as family income.
Statistical Field Theory Addison-Wesley
Results 1 - 10 of 56
- Advances in Computational Mathematics, 2000
"... Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples – in particular the regression problem of approximating a multivariate
function from sparse data. Radial Basis Functions, for example, are a special case of both regularization a ..."
Cited by 266 (33 self)
Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples – in particular the regression problem of approximating a multivariate
function from sparse data. Radial Basis Functions, for example, are a special case of both regularization and Support Vector Machines. We review both formulations in the context of Vapnik’s theory of
statistical learning which provides a general foundation for the learning problem, combining functional analysis and statistics. The emphasis is on regression: classification is treated as a special
- Journal of Artificial Intelligence Research, 1996
"... We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics. ..."
, 1998
"... Introduction Graphical models provide a formalism in which to express and manipulate conditional independence statements. Inference algorithms for graphical models exploit these independence
statements, using them to compute conditional probabilities while avoiding brute force marginalization over ..."
Cited by 38 (0 self)
Introduction Graphical models provide a formalism in which to express and manipulate conditional independence statements. Inference algorithms for graphical models exploit these independence
statements, using them to compute conditional probabilities while avoiding brute force marginalization over the joint probability table. Many inference algorithms, in particular the clustering
algorithms, make explicit their usage of conditional independence by constructing a data structure that captures the essential Markov properties underlying the graph. That is, the algorithm groups
interacting variables into clusters, such that the hypergraph of clusters has Markov properties that allow simple local algorithms to be employed for inference. In the best case, in which the
original graph is sparse and without long cycles, the clusters are small and inference is efficient. In the worst case, such as the case of a dense graph, the clusters are large and inference is
inefficient (complexity
, 2002
"... We review results concerning the critical behavior of spin systems at equilibrium. We consider the Ising and the general O(N)-symmetric universality class. For each of them, we review the
estimates of the critical exponents, of the equation of state, of several amplitude ratios, and of the two-point ..."
Cited by 26 (14 self)
We review results concerning the critical behavior of spin systems at equilibrium. We consider the Ising and the general O(N)-symmetric universality class. For each of them, we review the estimates
of the critical exponents, of the equation of state, of several amplitude ratios, and of the two-point function of the order parameter. We report results in three and two dimensions. We discuss the
crossover phenomena that are observed in this class of systems. In particular, we review the field-theoretical and numerical studies of systems with medium-range interactions. Moreover, we consider
several examples of magnetic and structural phase transitions, which are described by more complex Landau-Ginzburg-Wilson Hamiltonians, such as N-component systems with cubic anisotropy, O(N)
-symmetric systems in the presence of quenched disorder, frustrated spin systems with noncollinear or canted order, and finally, a class of systems described by the tetragonal Landau-Ginzburg-Wilson
Hamiltonian with three quartic couplings. The results for the tetragonal Hamiltonian are original, in particular we present the six-loop perturbative series for the β-functions and the critical
, 1999
"... . We introduce a learning algorithm for unsupervised neural networks based on ideas from statistical mechanics. The algorithm is derived from a mean field approximation for large, layered
sigmoid belief networks. We show how to (approximately) infer the statistics of these networks without resort to ..."
Cited by 11 (2 self)
. We introduce a learning algorithm for unsupervised neural networks based on ideas from statistical mechanics. The algorithm is derived from a mean field approximation for large, layered sigmoid
belief networks. We show how to (approximately) infer the statistics of these networks without resort to sampling. This is done by solving the mean field equations, which relate the statistics of
each unit to those of its Markov blanket. Using these statistics as target values, the weights in the network are adapted by a local delta rule. We evaluate the strengths and weaknesses of these
networks for problems in statistical pattern recognition. 1. Introduction Multilayer neural networks trained by backpropagation provide a versatile framework for statistical pattern recognition. They
are popular for many reasons, including the simplicity of the learning rule and the potential for discovering hidden, distributed representations of the problem space. Nevertheless, there are many
issues that are...
"... Abstract. Loops are subgraphs responsible for the multiplicity of paths going from one to another generic node in a given network. In this paper we present an analytic approach for the
evaluation of the average number of loops in random scale-free networks valid at fixed number of nodes N and for an ..."
Cited by 8 (0 self)
Abstract. Loops are subgraphs responsible for the multiplicity of paths going from one to another generic node in a given network. In this paper we present an analytic approach for the evaluation of
the average number of loops in random scale-free networks valid at fixed number of nodes N and for any length L of the loops. We bring evidence that the most frequent loop size in a scale-free
network of N nodes is of the order of N like in random regular graphs while small loops are more frequent when the second moment of the degree distribution diverges. In particular, we find that
finite loops of sizes larger than a critical one almost surely pass from any node, thus casting some doubts on the validity of the random tree approximation for the solution of lattice models on
these graphs. Moreover we show that Hamiltonian cycles are rare in random scale-free networks and may fail to appear if the power-law exponent of the degree distribution is close to 2 even for
minimal connectivity kmin ≥ 3.
, 707
"... This review is focused on the borderline region of theoretical physics and mathematics. First, we describe numerical methods for the acceleration of the convergence of series. These provide a
useful toolbox for theoretical physics which has hitherto not received the attention it actually deserves. T ..."
Cited by 3 (0 self)
This review is focused on the borderline region of theoretical physics and mathematics. First, we describe numerical methods for the acceleration of the convergence of series. These provide a useful
toolbox for theoretical physics which has hitherto not received the attention it actually deserves. The unifying concept for convergence acceleration methods is that in many cases, one can reach much
faster convergence than by adding a particular series term by term. In some cases, it is even possible to use a divergent input series, together with a suitable sequence transformation, for the
construction of numerical methods that can be applied to the calculation of special functions. This review both aims to provide some practical guidance as well as a groundwork for the study of
specialized literature. As a second topic, we review some recent developments in the field of Borel resummation, which is generally recognized as one of the most versatile methods for the summation
of factorially divergent (perturbation) series. Here, the focus is on algorithms which make optimal use of all information contained in a finite set of perturbative coefficients. The unifying concept
for the various aspects of the Borel method investigated here is
, 1995
"... The Elastic Net Algorithm (ENA) for solving the Traveling Salesman Problem is analyzed applying statistical mechanics. Using some general properties of the free energy function of stochastic
Hopfield Neural Networks, we argue why Simic's derivation of the ENA from a Hopfield network is incorrect. ..."
Cited by 2 (2 self)
The Elastic Net Algorithm (ENA) for solving the Traveling Salesman Problem is analyzed applying statistical mechanics. Using some general properties of the free energy function of stochastic Hopfield
Neural Networks, we argue why Simic's derivation of the ENA from a Hopfield network is incorrect. However, like the Hopfield-Lagrange method, the ENA may be considered a specific dynamic penalty
method , where, in this case, the weights of the various penalty terms decrease during execution of the algorithm. This view on the ENA corresponds to the view resulting from the theory on
`deformable templates', where the term stochastic penalty method seems to be most appropriate. Next, the ENA is analyzed both on the level of the energy function as well as on the level of the motion
equations. It will be proven and shown experimentally, why a non-feasible solution is sometimes found. It can be caused either by a too rapid lowering of the temperature parameter (which is
avoidable), or...
- Phys. Rev. B, 1999
"... We present extensive Monte-Carlo spin dynamics simulations of the classical XY model in three dimensions on a simple cubic lattice with periodic boundary conditions. A recently developed
efficient integration algorithm for the equations of motion is used, which allows a substantial improvement of st ..."
Cited by 2 (0 self)
We present extensive Monte-Carlo spin dynamics simulations of the classical XY model in three dimensions on a simple cubic lattice with periodic boundary conditions. A recently developed efficient
integration algorithm for the equations of motion is used, which allows a substantial improvement of statistics and large integration times. We find spin wave peaks in a wide range around the
critical point and spin diffusion for all temperatures. At the critical point we find evidence for a violation of dynamic scaling in the sense that independent components of the dynamic structure
factor S(q,ω) require different dynamic exponents in order to obtain scaling. Below the critical point we investigate the dispersion relation of the spin waves and the linewidths of S(q,ω) and find
agreement with mode coupling theory. Apart from strong spin wave peaks we observe additional peaks in S(q,ω) which can be attributed to two-spin wave interactions. The overall lineshapes are also
discussed and compared to mode coupling predictions. Finally, we present first results for the transport coefficient D(q,ω) of the out-of-plane magnetization component at the critical point, which is
related to the thermal conductivity of 4 He near the superfluid-normal transition.
, 1997
"... A new kind of fundamental superfield is proposed, with an Ising-like Euclidean action. Near the Planck energy it undergoes its first stage of symmetry-breaking, and the ordered phase is assumed
to support specific kinds of topological defects. This picture leads to a low-energy Lagrangian which is s ..."
Cited by 2 (0 self)
A new kind of fundamental superfield is proposed, with an Ising-like Euclidean action. Near the Planck energy it undergoes its first stage of symmetry-breaking, and the ordered phase is assumed to
support specific kinds of topological defects. This picture leads to a low-energy Lagrangian which is similar to that of standard physics, but there are interesting and observable differences. For
example, the cosmological constant vanishes, fermions have an extra coupling to gravity, the gravitational interaction of W-bosons is modified, and Higgs bosons have an unconventional equation of
motion. e-mail: allen@phys.tamu.edu tel.: (409) 845-4341 fax: (409) 845-2590 International Journal of Modern Physics A, Vol. 12, No. 13 (1997) 2385-2412 CTP-TAMU-15/96 1 1 Introduction The terms
"superfield" and "supersymmetry" are ordinarily used in a context which presupposes local Lorentz invariance. 1-3 It is far from clear, however, that Lorentz invariance is still valid near the Planck
scale, fift...
Find an Immaculata Geometry Tutor
...My first teaching job was in a school that specifically serviced special needs students. Each teacher received special training on how to aide students with a variety of differences, including
ADD and ADHD. There and since I have worked with several students with ADD and ADHD both in their math content areas and with executive skills to help them succeed in all areas of their life.
58 Subjects: including geometry, reading, GRE, biology
...I taught a 1st/2nd grade class and a 3rd/4th grade class of children with multiple disabilities including dyslexia, ADHD, ADD, autism, and developmental disabilities. I find that children with
ADHD often struggle organizing information and often prefer to learn about something in a way that rela...
20 Subjects: including geometry, reading, dyslexia, algebra 1
...I have been using word processors since 1994 including Professional Writer, MS Word, MS Word for MACs, and Word Perfect. In addition to years of writing programs, I also use text (and hence
text editing) in AutoCAD. I wrote my book "Bear bags" using MS Word on a laptop.
25 Subjects: including geometry, physics, algebra 1, algebra 2
...I have taken the GRE in August of 2012 and know the new format and how the test works. I have experience in tutoring all subject fields that are included on the ACT math test. As an
undergraduate student at Jacksonville University, I studied both ordinary differential equations and partial differential equations obtaining A's in both courses.
13 Subjects: including geometry, calculus, GRE, algebra 1
...The most important part is that your student will gain self-confidence and be able to apply these problem-solving skills to other subjects, and every area of life, for that matter. I have
experience with Paul A. Foerster's Algebra and Trigonometry that thoroughly covers intermediate/advanced Algebra and Trigonometry.
23 Subjects: including geometry, reading, writing, algebra 1
Darko Zubrinic
Faculty of Electrical Engineering and Computing
Dpt. of Applied Mathematics
Unska 3
10000 Zagreb
e-mail: darko.zubrinicYY@YYfer.hr (remove Y's)
Born in Zagreb (1956). Graduated from the Faculty of Natural Sciences and Mathematics in Zagreb (Department of Mathematics) in 1978, with the diploma work "Pontryagin duality theory for locally
compact Abelian groups." Master thesis: "Optimal control" (1982). Ph. degree defended in 1986 with the thesis "Nonresonant elliptic equations and applications to control theory."
Fields of interest: nonlinear functional analysis, in particular nonlinear elliptic equations, fine properties of Sobolev functions, fractal analysis, spiral trajectories of dynamical systems.
Editorial board service:
Graduate courses:
• Faculty of Electrical Engineering: Variational methods for differential equations;
• Faculty of Natural Sciences and Math. (Department of mathematics): Nonlinear functional analysis, Sobolev spaces,
• Faculty of Technology: Mathematics (selected topics).
Spent two months at the International Centre for Theoretical Physics (ICTP), Trieste, in 1982, and four months at Charles University in Praha with Prof. J. Necas (1985/6). I will never forget the hospitality of the people I met in Praha.
Leader of the former Yugoslav secondary school national team (with Dr. Uros Milutinovic), which participated in:
• the International Mathematical Olympiads (IMO) held in Warsaw (1987), Havana (1988), Canberra (1989), and
• the Balkan Mathematical Olympiads held in Athens (1988) and Nicosia (1989).
Initiated and participated in the organization of the Balkan Mathematical Competition held in Split in 1989.
Selected papers
1. Uvod u varijacione metode za diferencijalne jednadzzbe, Zagreb 1991,
2. with dr. Uross Milutinovich:
3. Matematiccki susreti (prepared by D.Zz), Beli Manastir 1991 (the whole edition has been destroyed during the Serbian aggression on Croatia), new edition: Zagreb 1995 - published by Element.
4. Diskretna matematika, Element, Zagreb, 1997.
5. Fundamentals of Applied Functional Analysis
(with Dragissa Mitrovich, Prof. Emeritus of the University of Zagreb)
Pitman Monographs and Surveys in Pure and Applied Mathematics 91, Longman, 1998 (400 pages)
ISSN 0269-3666
ISBN 0 582 24694 6
Orders at Amazon.com or at CRC Press.
The book provides a gentle introduction to modern concepts of linear and nonlinear functional analysis. It is intended mainly for new graduate students in mathematics and physics specializing in
partial differential equations and also for engineers who need a deeper understanding of modern methods of applied functional analysis. It may serve as a preparation for study of the abundant
scientific literature on the theory of distributions, Sobolev spaces, elliptic equations and nonlinear analysis. Its purpose is also to provide an insight into the fascinating variety of deeply
interlaced mathematical tools applied in the study of nonlinear problems.
It starts with a review of the basics of Lebesgue spaces and Nemytzki operators. Special attention is paid to distributions, Sobolev spaces, and their applications to the study of elliptic
boundary value problems. Some of the methods of nonlinear analysis are illustrated by resonant and nonresonant problems, lower semicontinuity and convexity, energy functionals, monotone
operators, reduction method, Landesman - Lazer problems, jumping nonlinearities, the Mountain-Pass theorem, and topological degree. The book contains numerous examples and exercises, most of them
with detailed solutions and hints.
Darko Zubrinic, FER
To help Croatian math teachers and pupils:
Matematika za osnovnu i srednju sskolu
Author of the booklet "Faculty of Electrical Engineering" (1992), which provides basic information about our institution and about Croatia. Its second part served as a basis for the preparation of
HTML files about the Croatian history, culture and science.
Now Associate professor at the Faculty of Electrical Engineering and Computing.
Member of the Croatian Mathematical Society.
X has a set of 9 different numbers; Y is a set containing 8

desiguy [10 Dec 2005, 19:28]:
X has a set of 9 different numbers. Y is a set containing 8 different numbers, all of which are members of X. Which of the following cannot be true?
A. The mean of X is equal to the mean of Y
B. The median of X is equal to the median of Y
C. The range of X is equal to the range of Y
D. The mean of X is greater than the mean of Y
E. The range of X is less than the range of Y
What is your thought process for this question?

Reply:
E. Since Y is a subset of X, its range cannot be more than that of X.
Strengthening the Compression Lemma in homotopy theory
The following result is sometimes known as the Compression Lemma:
Let $(X,A)$ be a CW pair and let $(Y,B)$ be a topological pair with $B\neq\emptyset$. For all $n$ such that $X-A$ has $n$-cells, assume that $\pi_n(Y,B,b_0)=0$ for all choices of basepoint $b_0\in B$. Then every map $f\colon\thinspace (X,A)\to (Y,B)$ is homotopic rel $A$ to a map with image in $B$.
See Hatcher's "Algebraic Topology", Section 4.1, or G.W. Whitehead's "Elements of Homotopy Theory", Section II.3. Using this result gives a quick route to J.H.C. Whitehead's theorem that a weak
homotopy equivalence between CW complexes is a homotopy equivalence.
I am wondering whether there is a strengthening of this result which applies to individual maps, along the following lines:
Let $(X,A)$ be a CW pair with $A\neq \emptyset$ and let $(Y,B)$ be a topological pair. Let $f\colon\thinspace (X,A)\to (Y,B)$ be a map of pairs. For all $n$ such that $X-A$ has $n$-cells, assume
that the induced map $f_\ast\colon\thinspace\pi_n(X,A,a_0)\to \pi_n(Y,B,f(a_0))$ is zero for all choices of basepoint $a_0\in A$, and [insert some extra conditions, possibly homological]. Then
$f$ is homotopic rel $A$ to a map with image in $B$.
Does anyone know of such a result? Perhaps this is asking too much, and that the answer may well be "this is what obstruction theory does", but I thought I'd ask just in case.
at.algebraic-topology homotopy-theory
I think this is just straightforward obstruction theory. If I'm right, this will be in Whitehead's account of obstruction theory. I'll check when I have the book and some time, if I remember to. – Jeff Strom Feb 19 '13 at 13:57
If we convert the map $B \rightarrow Y$ to a fibration without changing the homotopy type, then this should follow from the discussion after Prop 4.74 in Hatcher (really that whole section.)
Theorem 1 in section 4 of Spanier is also relevant... – Dylan Wilson Feb 19 '13 at 14:14
Some non-trivial extra conditions would certainly be required, as there exists a map $f:X\to Y$ between simply connected spaces which is not null-homotopic, yet induces the trivial homomorphism on all homotopy groups. The simplest example I know is the map $K({\mathbb Z},2)\to K({\mathbb Z},4)$ classifying the cohomology class $u^2\in H^4(K({\mathbb Z},2),{\mathbb Z})$ given by the square of a generator $u\in H^2(K({\mathbb Z},2),{\mathbb Z})\simeq {\mathbb Z}$. – Ricardo Andrade Feb 20 '13 at 6:36
Bode Servo Analysis
A unity-feedback system with third-order plant is driven by a reference signal r(t) and a sensor noise signal n(t). Typical performance objectives involve tracking the reference signal while
rejecting the influence of the sensor noise.
In the Laplace domain, the unity-feedback structure gives the closed-loop relation Y(s) = [G(s)/(1 + G(s))] [R(s) - N(s)], where G(s) is the open-loop transfer function of the third-order plant, set by the gain k and the corner frequencies a and b.
Shown below are the open-loop Bode magnitude and phase plots with the corner frequencies a and b marked on the magnitude plot and the gain and phase crossover frequencies labeled. The left mouse
button can be used to drag corner frequencies to desired locations, thereby adjusting the values of a, b, and k. The corresponding closed-loop Bode magnitude plot is also shown, with the closed-loop
bandwidth labeled.
A selection of reference inputs is available, and high-frequency sensor noise can be selected at two amplitude levels. Plots of r(t) and y(t) are shown, and the Table button provides numerical values
of important system parameters and response characteristics. (Rise time is the 10% - 90% rise time. Mean-square steady-state error is computed on the range 15 < t < 25.)
Sample Exercise on Closed-Loop Bandwidth: With no sensor noise, adjust the open-loop gain and corner frequencies so that the step response exhibits overshoot no greater than 20% and a rise time no
greater than 0.2 sec. Note the value of mean-square steady-state error in the Table. Then select low-gain sensor noise and adjust the open-loop parameters to achieve mean-square, steady-state error
no greater than 0.0020 with 20% overshoot bound and rise time as small as possible.
Sample Exercise on Stability Margins: Fix one of the open-loop corner frequencies at a = 2 rad/sec. Adjust the other corner frequency (thereby adjusting b and k ) to obtain a step response with rise
time less than 0.6 sec, gain margin at least 6 db, and phase margin at least 35 deg.
Sample Exercise on Closed-Loop Frequency Response: Adjust the open-loop corner frequencies and gain so that the closed-loop frequency response has the following properties: the maximum magnitude is 0
db, the 3 db bandwidth Wbw = 2 rad/sec, and the closed-loop magnitude at 8 rad/sec is less than - 20 db.
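For readers working without the applet, the quantities in these exercises can be computed numerically. The sketch below assumes an open-loop transfer function of the form G(s) = k / [s (s/a + 1)(s/b + 1)], an integrator plus the two adjustable corner frequencies; this is consistent with the third-order plant described above but is an assumption, since the applet's exact transfer function is not reproduced here, and the parameter values are hypothetical.

```python
import numpy as np

k, a, b = 4.0, 2.0, 10.0                      # hypothetical applet settings
w = np.logspace(-2, 3, 20000)                 # frequency grid, rad/s
s = 1j * w
G = k / (s * (s / a + 1) * (s / b + 1))       # assumed open-loop response

mag_db = 20 * np.log10(np.abs(G))
phase = np.unwrap(np.angle(G)) * 180 / np.pi  # degrees; starts near -90

i_gc = np.argmin(np.abs(mag_db))              # gain crossover: |G| = 0 dB
i_pc = np.argmin(np.abs(phase + 180))         # phase crossover: angle = -180 deg
print(f"phase margin: {180 + phase[i_gc]:.1f} deg at {w[i_gc]:.2f} rad/s")
print(f"gain margin:  {-mag_db[i_pc]:.1f} dB at {w[i_pc]:.2f} rad/s")

T = G / (1 + G)                               # closed loop under unity feedback
T_db = 20 * np.log10(np.abs(T))
i_bw = np.argmax(T_db < T_db[0] - 3)          # first frequency 3 dB below DC gain
print(f"closed-loop bandwidth: about {w[i_bw]:.2f} rad/s")
```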
Applet by Steven Crutchfield.
Chemistry: Determining the Form of a Rate Law
About this Lesson
• Type: Video Tutorial
• Length: 9:24
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 100 MB
• Posted: 07/14/2009
This lesson is part of the following series:
Chemistry: Full Course (303 lessons, $198.00)
Chemistry: Chemical Kinetics (18 lessons, $25.74)
Chemistry: Reaction Rates (3 lessons, $5.94)
This lesson was selected from a broader, comprehensive course, Chemistry, taught by Professor Harman, Professor Yee, and Professor Sammakia. This course and others are available from Thinkwell, Inc.
The full course can be found at http://www.thinkwell.com/student/product/chemistry. The full course covers atoms, molecules and ions, stoichiometry, reactions in aqueous solutions, gases,
thermochemistry, Modern Atomic Theory, electron configurations, periodicity, chemical bonding, molecular geometry, bonding theory, oxidation-reduction reactions, condensed phases, solution
properties, kinetics, acids and bases, organic reactions, thermodynamics, nuclear chemistry, metals, nonmetals, biochemistry, organic chemistry, and more.
Dean Harman is a professor of chemistry at the University of Virginia, where he has been honored with several teaching awards. He heads Harman Research Group, which specializes in the novel organic
transformations made possible by electron-rich metal centers such as Os(II), Re(I), and W(0). He holds a Ph.D. from Stanford University.
Gordon Yee is an associate professor of chemistry at Virginia Tech in Blacksburg, VA. He received his Ph.D. from Stanford University and completed postdoctoral work at DuPont. A widely published
author, Professor Yee studies molecule-based magnetism.
Tarek Sammakia is a Professor of Chemistry at the University of Colorado at Boulder where he teaches organic chemistry to undergraduate and graduate students. He received his Ph.D. from Yale
University and carried out postdoctoral research at Harvard University. He has received several national awards for his work in synthetic and mechanistic organic chemistry.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Let me remind you what we're trying to do. The balanced reaction is only a recipe for telling us how we make product from reactants. What we are trying to do now is find out how fast this reaction
proceeds. And, that has nothing to do with the recipe. What it has to do with is experimentally determining how fast the reaction goes and how it depends on the concentrations of the reactants.
So, for this reaction H₂ + Br₂ → 2HBr, we are going to determine a rate law. We can relate these guys to the general form aA + bB → cC: a moles of A plus b moles of B give c moles of C. In other words, a recipe. What we are looking for is an expression that looks like the rate, which is (1/c) times the change in concentration of C with respect to time, or −(1/b) times the change in concentration of B with respect to time. In general, it is equal to some rate constant times the concentration of A to some power m, times the concentration of B to some power n.
Remember, m and n don't necessarily have anything to do with a and b. So, for the particular reaction of hydrogen plus bromine to hydrogen bromide, it turns out that the experimentally determined rate law looks like this: the rate, which is equal to minus the change in concentration of bromine with respect to time, is equal to some rate constant, times the concentration of hydrogen (and this is instantaneous concentration), times the concentration of bromine to the ½ power. So, in this case, m is equal to 1 and n is equal to ½.
We haven't talked about how you could get something to be ½, and we'll talk about that later on. The thing I am trying to remind you of is that these exponents don't necessarily have anything to do with these stoichiometric coefficients.
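Just to make the form concrete, here is the bromine rate law evaluated numerically in Python; the rate constant and concentrations are hypothetical placeholders, since the lesson quotes no numbers for this reaction.

# Hypothetical numbers for rate = k [H2] [Br2]**(1/2); illustrative only.
k   = 0.10   # rate constant, units chosen so the rate comes out in M/s
h2  = 0.50   # molar
br2 = 0.25   # molar
rate = k * h2 * br2**0.5
print(f"rate = {rate:.3f} M/s")   # 0.10 * 0.50 * 0.5 = 0.025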
Let's look at the first method for determining what k, m, and n are for a reaction. The reaction I want to look at is the reaction of nitric oxide plus oxygen to nitrogen dioxide. This is a reaction
that proceeds in the atmosphere to produce highly oxidized oxides of nitrogen on their way to nitric acid, which is one of the sources of acid rain.
We could plot the concentration of nitrogen dioxide as a function of time and imagine the curve looks something like this. The red curve is our complete data. Imagine that we could measure the initial rate by checking just what the concentration of NO₂ is as a function of time for the first few data points, drawing a straight line through those first few data points, and assuming that the slope of that line is essentially equal to the tangent at zero, the tangent of the red line at zero. That is what we call determining the initial rate for the reaction.
Again, the initial rate is exactly the tangent to the red line at zero. We approximate it graphically by measuring the concentration of nitrogen dioxide as a function of time over a few data points and, assuming that there is not too much curvature in the data close to zero, just drawing a straight line through them. If you think about it, the slope of the green line represents the initial rate of the reaction times 2, because we make 2 nitrogen dioxides every time we run this reaction.
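By way of illustration, here is the same straight-line estimate in a few lines of Python. The data points below are made up, since the lesson's own measurements are not reproduced in this transcript; only the procedure — fit the first few points, then divide the slope by 2 for the stoichiometry of 2NO + O₂ → 2NO₂ — mirrors what is described above.

import numpy as np

# Hypothetical early-time measurements of [NO2]; illustrative values only.
t   = np.array([0.0, 1.0, 2.0, 3.0])           # seconds
no2 = np.array([0.000, 0.095, 0.188, 0.279])   # molar

slope, intercept = np.polyfit(t, no2, 1)  # straight line through the first few points
initial_rate = slope / 2.0                # rate = (1/2) d[NO2]/dt: 2 NO2 made per reaction
print(f"initial rate ~ {initial_rate:.3f} molar per second")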
So, a typical initial rate data problem is going to look something like this. Here's, again, our balanced reaction. You are going to see a table. The table here represents four separate experiments.
In experiment one, we will put in some known amount of nitric oxide, which is one of the reactants, and some known concentration of O₂, which is the other reactant, and we'll measure the initial rate.
What does that mean? Well, that means what we will do is put these two things in, collect data that looks like this, and measure the slope of the green line. Then we will repeat the experiment, but with a change in the concentration of one of the reactants, leaving the other one the same.
In the second experiment, we measure a new initial rate. What it means is that when we double the concentration of this guy in our second experiment, the early data are going to rise more steeply. In other words, the tangent is going to be steeper. The reaction just goes faster at the beginning if we double the concentration of one of the reactants.
Now, we repeat the experiment. This time we double the concentration of the other one. We don't necessarily have to double, but it turns out that it is convenient that we just exactly double it. And,
we measure an initial rate again. Then we do it a fourth time. So you'll see tables that look just like this.
Again, what we are trying to determine is the rate law, which looks like some rate constant times the concentration of one of the reactants to the m^th power times the concentration of the other reactant to the n^th power. And, m and n don't depend on the stoichiometric coefficients.
We can express the initial rate of experiment one, which you will recall was .048 molar per second, and that is equal to the rate constant times the concentration of nitric oxide to the m^th power,
times the concentration of oxygen to the n^th power. And, we can do the same thing for experiment two. The initial rate of experiment two was different. And, the concentrations of nitric oxide are
different, but we can express it like this. We could do this for each of the four experiments.
Now, what we are going to do is if the top equation is true and the second equation is true, then we can take a ratio of this to this and it is equal to the ratio of this to this. Right? If this is
an equality and this is an equality then this divided by this is going to be equal to this divided by this. Now, that's what we are going to do. We are going to actually do 2 over 1, because it is
convenient for the math. But, you could do 1 over 2.
If we do 2 over 1: if we look back, the actual initial rate for 2 was .192 molar per second and the initial rate for 1 was .048 molar per second. Here are those two rate law expressions.
Since we have the exact same k in both the numerator and the denominator, that ratio is 1. And, conveniently, we chose two data sets in which the concentration of oxygen was the same for both experiments; because it carries the same exponent in both, that ratio is also 1. So this nasty looking expression reduces to this over this, which is 4, equal to this over this, which gives this expression. Then, knowing a little algebra, something to the m^th power divided by something else to the m^th power is the ratio of the two to that same power. So that is 2^m. And, by inspection, we can solve that m equals 2.
Alternatively what we can do is take the natural log of both sides; that gives us this expression. When you take the natural log of 2^m, that is m times the natural log of 2. And, then divide through
by the natural log of 2 to get an expression for m, and m is equal to 2. So, what have we done? We found one of the three things that we are trying to find. We found the exponent m.
Now, we are going to repeat the exact same procedure. Except, instead of 2 over 1, we are going to look at 3 over 1. If we look at 3 over 1, conveniently now it's the expression that has m in it that
cancels out. And so our rate expression reduces to this over this, which is 2. And, the k's cancel out again, and then it is this over this to the n power. And, this over this to the n power becomes
2 to the n. And so by inspection we can get n is equal to 1.
So now we've determined m and we've determined n. Again, we are determining it not from the stoichiometric equation, not from the balanced reaction, we are doing it from experimental data.
Finally, we can figure out what k is by taking any one of the data sets now. Because, if we take, for instance, the initial rate of experiment one. It is equal to k times the concentration of nitric
oxide. And now we know what that exponent is. It is a 2. And the initial concentration of oxygen, and we know what that exponent is. It is a 1, but we usually don't even write it. So we can now have
one equation and one unknown. And we can solve it. And we get k is equal to 1.4 times 10^4. And the units on k have to be such that k times this concentration squared, times that concentration, all cancel out so that the final units are in molar per second; you can convince yourself that the units on k have to be inverse molar squared, inverse seconds.
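The whole procedure above fits in a few lines of Python. In the sketch below the concentrations are hypothetical stand-ins, since the lesson's table is not reproduced in this transcript; only the initial rates 0.048 and 0.192 molar per second come from the lesson, so the k that falls out here will not match the 1.4 times 10^4 quoted above.

import numpy as np

# Hypothetical initial-rates table for 2NO + O2 -> 2NO2.
# Concentrations are illustrative; rates for experiments 1 and 2 are from the lesson.
NO   = np.array([0.010, 0.020, 0.010])   # molar
O2   = np.array([0.010, 0.010, 0.020])   # molar
rate = np.array([0.048, 0.192, 0.096])   # molar per second

# Experiment 2 vs 1: [O2] is fixed, so the rate ratio isolates m.
m = np.log(rate[1] / rate[0]) / np.log(NO[1] / NO[0])
# Experiment 3 vs 1: [NO] is fixed, so the rate ratio isolates n.
n = np.log(rate[2] / rate[0]) / np.log(O2[2] / O2[0])
# Any single experiment then pins down k.
k = rate[0] / (NO[0]**m * O2[0]**n)

print(f"m = {m:.2f}, n = {n:.2f}")  # -> m = 2.00, n = 1.00
print(f"k = {k:.3g} M^-2 s^-1")     # units follow from the overall order m + n = 3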
So, the complete rate law looks like this - and I'll remind you, rate is equal to the rate constant times the concentration of nitric oxide squared, times the concentration of oxygen. And we say that this reaction is first order in oxygen, second order in nitric oxide, and third order overall. Remember, you add the exponents to get the overall order of the reaction.
Now, what does this say? It says that if we double the concentration of oxygen, while keeping the concentration of nitric oxide fixed, we are going to double the rate, because k doesn't change.
Similarly, it says that if we hold the concentration of oxygen constant and double the concentration of nitric oxide, we are going to quadruple the rate. If you go back and look at the data, that is exactly what you will see. So, this expression, again, is derived from the experimental data, and you can go back and double-check that it all hangs together.
Results 1 - 10 of 60
- ACTA NUMERICA , 1998
"... ..."
- IEEE Trans. Inform. Theory , 1998
"... In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon’s R(D) theory... ..."
Cited by 140 (24 self)
Add to MetaCart
In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon’s R(D) theory...
- University of Vienna , 1983
"... this paper, which is now better accessible, in the same way as to the original report.) Hans G. Feichtinger 1 ..."
, 2008
"... Abstract. It is known that the Continuous Wavelet Transform of a distribution f decays rapidly near the points where f is smooth, while it decays slowly near the irregular points. This property
allows the identification of the singular support of f. However, the Continuous Wavelet Transform is unabl ..."
Cited by 37 (20 self)
Add to MetaCart
Abstract. It is known that the Continuous Wavelet Transform of a distribution f decays rapidly near the points where f is smooth, while it decays slowly near the irregular points. This property
allows the identification of the singular support of f. However, the Continuous Wavelet Transform is unable to describe the geometry of the set of singularities of f and, in particular, identify the
wavefront set of a distribution. In this paper, we employ the same framework of affine systems which is at the core of the construction of the wavelet transform to introduce the Continuous Shearlet
Transform. This is defined by SHψf(a, s, t) = 〈f, ψast〉, where the analyzing elements ψast are dilated and translated copies of a single generating function ψ. The dilation matrices form a
two-parameter matrix group consisting of products of parabolic scaling and shear matrices. We show that the elements {ψast} form a system of smooth functions at continuous scales a> 0, locations t ∈
R 2, and oriented along lines of slope s ∈ R in the frequency domain. We then prove that the Continuous Shearlet Transform does exactly resolve the wavefront set of a distribution f. 1.
, 1993
"... : n-dimensional coherent states systems generated by translations, modulations, rotations and dilations are described. Starting from unitary irreducible representations of the n-dimen- sional
affine Weyl-Heisenberg group, which are not square-integrable, one is led to consider systems of coherent s ..."
Cited by 14 (1 self)
Add to MetaCart
n-dimensional coherent-state systems generated by translations, modulations, rotations and dilations are described. Starting from unitary irreducible representations of the n-dimensional affine Weyl-Heisenberg group, which are not square-integrable, one is led to consider systems of coherent states labeled by the elements of quotients of the original group. Such systems can yield a resolution of the identity, and then be used as alternatives to usual wavelet or windowed Fourier analysis. When the quotient space is the phase space of the representation, different embeddings of it into the group provide different descriptions of the phase space. Résumé (translated from the French): We describe systems of coherent states generated by translations, dilations and rotations in n dimensions. Starting from unitary irreducible representations of the n-dimensional affine Weyl-Heisenberg group, which are not square-integrable, one is naturally led to consider systems of coherent st...
- J. Funct. Anal , 1976
"... Let V be a multiplication operator, whose negative part, V- ( V- < 0) obeys-A + (1 + c)V->--c for some c, c> 0. Let W = Vx where x is the characteristic function of the exterior of a ball. Our
main result asserts that the scattering for-A + V is complete if and only if that for-A + W is complete. Ou ..."
Cited by 11 (4 self)
Add to MetaCart
Let V be a multiplication operator whose negative part V₋ (V₋ ≤ 0) obeys −Δ + (1 + ε)V₋ ≥ −c for some ε, c > 0. Let W = Vχ, where χ is the characteristic function of the exterior of a ball. Our main result asserts that the scattering for −Δ + V is complete if and only if that for −Δ + W is complete. Our technical estimates exploit Wiener integrals and the Feynman-Kac formula. We also make an application to acoustical scattering.
- J. Funct. Anal , 1991
"... Abstract: We prove a general result on the factorization of matrix-valued analytic functions. We deduce that if (E0, E1) and (F0, F1) are interpolation pairs with dense intersections, then under
some conditions on the spaces E0, E1, F0 and F1, we have [E0 ̂⊗F0, E1 ̂⊗F1]θ = [E0, E1]θ ̂⊗[F0, F1]θ, 0 < ..."
Cited by 11 (0 self)
Add to MetaCart
Abstract: We prove a general result on the factorization of matrix-valued analytic functions. We deduce that if (E0, E1) and (F0, F1) are interpolation pairs with dense intersections, then under some
conditions on the spaces E0, E1, F0 and F1, we have [E0 ̂⊗F0, E1 ̂⊗F1]θ = [E0, E1]θ ̂⊗[F0, F1]θ, 0 < θ < 1. We find also conditions on the spaces E0, E1, F0 and F1, so that the following holds
- CONSTR. APPROX , 2001
"... Some new conditions that arise naturally in the study of the Thresholding Greedy Algorithm are introduced for bases of Banach spaces. We relate these conditions to best n-term approximation and
we study their duality theory. In particular, we obtain a complete duality theory for greedy bases. ..."
Cited by 11 (6 self)
Add to MetaCart
Some new conditions that arise naturally in the study of the Thresholding Greedy Algorithm are introduced for bases of Banach spaces. We relate these conditions to best n-term approximation and we
study their duality theory. In particular, we obtain a complete duality theory for greedy bases.
- Funct. Anal , 1996
"... Following results of Bourgain and Gorelik we show that the spaces ` p , 1 ! p ! 1, as well as some related spaces have the following uniqueness property: If X is a Banach space uniformly
homeomorphic to one of these spaces then it is linearly isomorphic to the same space. We also prove that if a C(K ..."
Cited by 10 (3 self)
Add to MetaCart
Following results of Bourgain and Gorelik we show that the spaces ℓp, 1 < p < ∞, as well as some related spaces, have the following uniqueness property: If X is a Banach space uniformly homeomorphic to one of these spaces then it is linearly isomorphic to the same space. We also prove that if a C(K) space is uniformly homeomorphic to c₀, then it is isomorphic to c₀. We show also that there are Banach spaces which are uniformly homeomorphic to exactly 2 isomorphically distinct spaces. Subject classification: 46B20, 54Hxx. Keywords: Banach spaces, Uniform homeomorphism, Lipschitz homeomorphism.
FOM: decidability of sets of axioms
Neil Tennant neilt at hums62.cohums.ohio-state.edu
Thu Dec 11 18:45:32 EST 1997
There's something that puzzles me about the orthodox conception of an
axiomatization after the foundational work of the 1930s, and especially since
Church's undecidability theorem for first-order logic. I'll confine myself
in these remarks to first-order logic and theories.
It's especially important to mathematicians for their set of axioms (in any
given area, such as arithmetic or geometry or set theory) to be decidable
[= recursive, assuming Church's Thesis]. The reason is obvious: one needs
to be able to *tell*, effectively, when a proof establishes a theorem of the
area in question; and for this, one needs to be able to determine, of each
premise of the proof, whether it is a permissible axiom. All this has to be
effective. (By a premise of a proof I mean an undischarged assumption, on
which its conclusion rests.)
But it's equally important (at least, I think it is) to be able to tell when
what has been proved is a genuinely informative, mathematical claim about the
domain in question, and is not simply a logical truth in its own right. So we
want at the very least (1) to avoid using as premises in a mathematical proof so-called `mathematical axioms' that are themselves logical truths.
We also need (2) to be vigilant about inadvertently
`slipping into logical truth' in the middle of a proof. This would happen if
one diligently proved some informative claim P and then inferred (by or-introduction) the claim (P or not-P). Failing to recognize the latter as logically true
would of course be stupid; but this is a degenerate example. One can entertain
the possibility of less glaring `lapses into logical truth' as a proof progresses.
But it's the first requirement (1) that I want to address here. Among the
infinitely many instances of the scheme of mathematical induction
P(0) --> ( (n)(P(n) --> P(n+1)) --> (m)P(m))
will be infinitely many for which (m)P(m) is logically true anyway (take P(x) to be x = x, for instance). So those
instances do not really tell us anything about the natural numbers. Yet they
sit there within our so-called `axiomatization' of arithmetic, defying us
[by Church's Theorem] to winkle them out. They cannot be falsified in any
model, so they cannot tell us anything about the intended model N. So why should
they even qualify as members of our list of axioms *for arithmetic*? (After
all, we can get them by logic alone, and do not need any arithmetical axioms
to do so.)
The reason is: Church's undecidability theorem itself! Given any scheme, there
is no effective way of telling whether a given instance of it is or is not
logically true. Hence, so long as we wish to axiomatize mathematics via the use
of uniform axiom schemes, we have to simply put up with this problem.
My question, now, is a very general one for the logicians and mathematicians
on this list. Suppose we insist that the set X of axioms for any given area
be such that:
1) every member of X is true (and hopefully certain or self-evidently so)
when interpreted as a claim about the intended structure (the natural number
system; the real line; the complex numbers; the universe of sets; n-dimensional
Euclidean space; etc.);
2) given any sentence of the language, we can effectively decide whether it is
a member of X (i.e. given Church's Thesis, X is recursive); and
3) every member of X is LOGICALLY FALSIFIABLE (hence genuinely informative).
Requirement (3) is the punchy one.
Clearly, if one had an old-style axiomatization X_0 that failed to meet
condition (3), and a new-style axiomatization X that *did* meet condition (3),
one would want every informative (i.e. logically falsifiable) logical
consequence of X_0 to be also a logical consequence of X.
Could this be ensured in general? That is, can we prove something like the following:
Given any consistent decidable set X_0 of sentences true in M, there is
some consistent decidable set X of logically falsifiable sentences true in M
such that all logically falsifiable logical consequences of X_0 are logical
consequences of X ?
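In symbols — one natural reading, writing Rec for `recursive', Con for
`consistent', and Fals(\varphi) for `\varphi has a countermodel':

\[
\forall X_0 \, \bigl[ \mathrm{Rec}(X_0) \wedge \mathrm{Con}(X_0) \wedge M \models X_0
\;\rightarrow\; \exists X \, \bigl( \mathrm{Rec}(X) \wedge M \models X
\wedge (\forall \varphi \in X)\, \mathrm{Fals}(\varphi)
\wedge \forall \psi \, [\, X_0 \models \psi \wedge \mathrm{Fals}(\psi)
\rightarrow X \models \psi \,] \bigr) \bigr]
\]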
Forgive me if I don't know enough recursion theory to see an obvious answer
right away.
Answers, anyone?
Neil Tennant