Recursion to Iteration exponents Help
You're doing way too much in your implementation. Think of the exponent as a binary number: if bit i is one, the result should be multiplied by base^(2^i); e.g., if the exponent is 5 (binary 101), the result
should be b^4 * b^1. In Java this means:

Code:
public double power(double base, int n) {
    double result = 1;
    for (; n > 0; n /= 2, base *= base)
        if ((n % 2) != 0)
            result *= base;
    return result;
}

Kind regards,
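For comparison with the iterative loop above, here is the same squaring idea written recursively; this is an added sketch (assuming n >= 0), not code from the original thread:

Code:
// Recursive exponentiation by squaring; O(log n) multiplications, like the loop above.
public double powerRec(double base, int n) {
    if (n == 0) return 1.0;               // b^0 = 1
    double half = powerRec(base, n / 2);  // b^(n/2), integer division
    double sq = half * half;              // b^(2 * (n/2))
    return (n % 2 == 0) ? sq : sq * base; // one extra factor of b when n is odd
}

Each call halves n, which is exactly what the loop's n /= 2 does; the iterative version just carries the partial product in result instead of on the call stack.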
Huntsville, AL 35801
High School and University Math (& H.S. Physics)
I hold two master's degrees -- one in applied mathematics, one in physics. I started teaching in 1999. While many of the intervening years saw me as a part-time adjunct instructor of mathematics, I did enjoy a full-time appointment to the faculty of Alabama...
Offering 10+ subjects including algebra 1, algebra 2 and calculus
search results
Results 376 - 400 of 408
376. CMB 2000 (vol 43 pp. 330)
Maximal Operators and Cantor Sets
We consider maximal operators in the plane, defined by Cantor sets of directions, and show such operators are not bounded on $L^2$ if the Cantor set has positive Hausdorff dimension.
Keywords:maximal functions, Cantor set, lacunary set
Categories:42B25, 43A46
377. CMB 2000 (vol 43 pp. 268)
Cockcroft Properties of Thompson's Group
In a study of the word problem for groups, R.~J.~Thompson considered a certain group $F$ of self-homeomorphisms of the Cantor set and showed, among other things, that $F$ is finitely presented.
Using results of K.~S.~Brown and R.~Geoghegan, M.~N.~Dyer showed that $F$ is the fundamental group of a finite two-complex $Z^2$ having Euler characteristic one and which is {\em Cockcroft}, in the
sense that each map of the two-sphere into $Z^2$ is homologically trivial. We show that no proper covering complex of $Z^2$ is Cockcroft. A general result on Cockcroft properties implies that no
proper regular covering complex of any finite two-complex with fundamental group $F$ is Cockcroft.
Keywords:two-complex, covering space, Cockcroft two-complex, Thompson's group
Categories:57M20, 20F38, 57M10, 20F34
378. CMB 2000 (vol 43 pp. 21)
The Commutant of an Abstract Backward Shift
A bounded linear operator $T$ on a Banach space $X$ is an abstract backward shift if the nullspace of $T$ is one dimensional, and the union of the null spaces of $T^k$ for all $k \geq 1$ is dense
in $X$. In this paper it is shown that the commutant of an abstract backward shift is an integral domain. This result is used to derive properties of operators in the commutant.
Keywords:backward shift, commutant
379. CMB 2000 (vol 43 pp. 60)
Trivial Units in Group Rings
Let $G$ be an arbitrary group and let $U$ be a subgroup of the normalized units in $\mathbb{Z}G$. We show that if $U$ contains $G$ as a subgroup of finite index, then $U = G$. This result can be
used to give an alternative proof of a recent result of Marciniak and Sehgal on units in the integral group ring of a crystallographic group.
Keywords:units, trace, finite conjugate subgroup
Categories:16S34, 16U60
380. CMB 2000 (vol 43 pp. 25)
Subdifferential Regularity of Directionally Lipschitzian Functions
Formulas for the Clarke subdifferential are always expressed in the form of inclusion. The equality form in these formulas generally requires the functions to be directionally regular. This paper
studies the directional regularity of the general class of extended-real-valued functions that are directionally Lipschitzian. Connections with the concept of subdifferential regularity are also discussed.
Keywords:subdifferential regularity, directional regularity, directionally Lipschitzian functions
Categories:49J52, 58C20, 49J50, 90C26
381. CMB 2000 (vol 43 pp. 3)
Resolutions of Associative and Lie Algebras
Certain canonical resolutions are described for free associative and free Lie algebras in the category of non-associative algebras. These resolutions derive in both cases from geometric objects,
which in turn reflect the combinatorics of suitable collections of leaf-labeled trees.
Keywords:resolutions, homology, Lie algebras, associative algebras, non-associative algebras, Jacobi identity, leaf-labeled trees, associahedron
Categories:18G10, 05C05, 16S10, 17B01, 17A50, 18G50
382. CMB 1999 (vol 42 pp. 478)
A Remark On the Moser-Aubin Inequality For Axially Symmetric Functions On the Sphere
Let $\scr S_r$ be the collection of all axially symmetric functions $f$ in the Sobolev space $H^1(\Sph^2)$ such that $\int_{\Sph^2} x_ie^{2f(\mathbf{x})} \, d\omega(\mathbf{x})$ vanishes for $i=
1,2,3$. We prove that $$ \inf_{f\in \scr S_r} \frac12 \int_{\Sph^2} |\nabla f|^2 \, d\omega + 2\int_{\Sph^2} f \, d\omega- \log \int_{\Sph^2} e^{2f} \, d\omega > -\infty, $$ and that this infimum is
attained. This complements recent work of Feldman, Froese, Ghoussoub and Gui on a conjecture of Chang and Yang concerning the Moser-Aubin inequality.
Keywords:Moser inequality, borderline Sobolev inequalities, axially symmetric functions
Categories:26D15, 58G30
383. CMB 1999 (vol 42 pp. 427)
Ramanujan and the Modular $j$-Invariant
A new infinite product $t_n$ was introduced by S.~Ramanujan on the last page of his third notebook. In this paper, we prove Ramanujan's assertions about $t_n$ by establishing new connections
between the modular $j$-invariant and Ramanujan's cubic theory of elliptic functions to alternative bases. We also show that for certain integers $n$, $t_n$ generates the Hilbert class field of $\mathbb{Q}(\sqrt{-n})$. This shows that $t_n$ is a new class invariant according to H.~Weber's definition of class invariants.
Keywords:modular functions, the Borweins' cubic theta-functions, Hilbert class fields
Categories:33C05, 33E05, 11R20, 11R29
384. CMB 1999 (vol 42 pp. 335)
Cyclic Subgroup Separability of HNN-Extensions with Cyclic Associated Subgroups
We derive a necessary and sufficient condition for HNN-extensions of cyclic subgroup separable groups with cyclic associated subgroups to be cyclic subgroup separable. Applying this, we explicitly
characterize the residual finiteness and the cyclic subgroup separability of HNN-extensions of abelian groups with cyclic associated subgroups. We also consider these residual properties of
HNN-extensions of nilpotent groups with cyclic associated subgroups.
Keywords:HNN-extension, nilpotent groups, cyclic subgroup separable $(\pi_c)$, residually finite
Categories:20E26, 20E06, 20F10
385. CMB 1999 (vol 42 pp. 321)
Averaging Operators and Martingale Inequalities in Rearrangement Invariant Function Spaces
We shall study some connection between averaging operators and martingale inequalities in rearrangement invariant function spaces. In Section~2 the equivalence between Shimogaki's theorem and some
martingale inequalities will be established, and in Section~3 the equivalence between Boyd's theorem and martingale inequalities with change of probability measure will be established.
Keywords:martingale inequalities, rearrangement invariant function spaces
Categories:60G44, 60G46, 46E30
386. CMB 1999 (vol 42 pp. 285)
On Kloosterman Sums with Oscillating Coefficients
In this paper we prove: for any positive integers $a$ and $q$ with $(a,q) = 1$, we have uniformly $$ \sum_{\substack{n \leq N \\ (n,q) = 1, \, n\bar{n} \equiv 1 \pmod q}} \mu (n) e \left( \frac{a\bar{n}}{q} \right) \ll Nd (q) \left\{ \frac{\log^{\frac52} N}{q^{\frac12}} + \frac{q^{\frac15} \log^{\frac{13}5} N}{N^{\frac15}} \right\}. $$ This improves the previous bound obtained by D.~Hajela,
A.~Pollington and B.~Smith~\cite{5}.
Keywords:Kloosterman sums, oscillating coefficients, estimate
387. CMB 1999 (vol 42 pp. 274)
The Bockstein Map is Necessary
We construct two non-isomorphic nuclear, stably finite, real rank zero $C^\ast$-algebras $E$ and $E'$ for which there is an isomorphism of ordered groups $\Theta\colon \bigoplus_{n \ge 0} K_\bullet(E;\ZZ/n) \to \bigoplus_{n \ge 0} K_\bullet(E';\ZZ/n)$ which is compatible with all the coefficient transformations. The $C^\ast$-algebras $E$ and $E'$ are not isomorphic since there is no $\Theta$ as above which is also compatible with the Bockstein operations. By tensoring with Cuntz's algebra $\OO_\infty$ one obtains a pair of non-isomorphic, real rank zero, purely infinite $C^\ast$-algebras with similar properties.
Keywords:$K$-theory, torsion coefficients, natural transformations, Bockstein maps, $C^\ast$-algebras, real rank zero, purely infinite, classification
Categories:46L35, 46L80, 19K14
388. CMB 1999 (vol 42 pp. 190)
Topological Quantum Field Theory and Strong Shift Equivalence
Given a TQFT in dimension $d+1,$ and an infinite cyclic covering of a closed $(d+1)$-dimensional manifold $M$, we define an invariant taking values in a strong shift equivalence class of matrices.
The notion of strong shift equivalence originated in R.~Williams' work in symbolic dynamics. The Turaev-Viro module associated to a TQFT and an infinite cyclic covering is then given by the Jordan
form of this matrix away from zero. This invariant is also defined if the boundary of $M$ has an $S^1$ factor and the infinite cyclic cover of the boundary is standard. We define a variant of a
TQFT associated to a finite group $G$ which has been studied by Quinn. In this way, we recover a link invariant due to D.~Silver and S.~Williams. We also obtain a variation on the Silver-Williams
invariant, by using the TQFT associated to $G$ in its unmodified form.
Keywords:knot, link, TQFT, symbolic dynamics, shift equivalence
Categories:57R99, 57M99, 54H20
389. CMB 1999 (vol 42 pp. 139)
Essential Norm and Weak Compactness of Composition Operators on Weighted Banach Spaces of Analytic Functions
Every weakly compact composition operator between weighted Banach spaces $H_v^{\infty}$ of analytic functions with weighted sup-norms is compact. Lower and upper estimates of the essential norm of
continuous composition operators are obtained. The norms of the point evaluation functionals on the Banach space $H_v^{\infty}$ are also estimated, thus permitting new characterizations of
compact composition operators between these spaces.
Keywords:weighted Banach spaces of holomorphic functions, composition operator, compact operator, weakly compact operator
Categories:47B38, 30D55, 46E15
390. CMB 1999 (vol 42 pp. 198)
Commutators and Analytic Dependence of Fourier-Bessel Series on $(0,\infty)$
In this paper we study the boundedness of the commutators $[b, S_n]$ where $b$ is a $\BMO$ function and $S_n$ denotes the $n$-th partial sum of the Fourier-Bessel series on $(0,\infty)$. Perturbing
the measure by $\exp(2b)$ we obtain that certain operators related to $S_n$ depend analytically on the functional parameter $b$.
Keywords:Fourier-Bessel series, commutators, BMO, $A_p$ weights
391. CMB 1999 (vol 42 pp. 104)
Instabilité de vecteurs propres d'opérateurs linéaires
We consider some geometric properties of eigenvectors of linear operators on infinite dimensional Hilbert space. It is proved that the property of a family of vectors $(x_n)$ to be eigenvectors
$Tx_n= \lambda_n x_n$ ($\lambda_n \neq \lambda_k$ for $n\neq k$) of a bounded operator $T$ (admissibility property) is very unstable with respect to additive and linear perturbations. For
instance, (1)~for the sequence $(x_n+\epsilon_n v_n)_{n\geq k(\epsilon)}$ to be admissible for every admissible $(x_n)$ and for a suitable choice of small numbers $\epsilon_n\neq 0$ it is
necessary and sufficient that the perturbation sequence be eventually scalar: there exist $\gamma_n\in \C$ such that $v_n= \gamma_n v_{k}$ for $n\geq k$ (Theorem~2); (2)~for a bounded operator $A$
to transform admissible families $(x_n)$ into admissible families $(Ax_n)$ it is necessary and sufficient that $A$ be left invertible (Theorem~4).
Keywords:eigenvectors, minimal families, reproducing kernels
Categories:47A10, 46B15
392. CMB 1999 (vol 42 pp. 97)
On Analytic Functions of Bergman $\BMO$ in the Ball
Let $B = B_n$ be the open unit ball of $\bbd C^n$ with volume measure $\nu$, $U = B_1$ and ${\cal B}$ be the Bloch space on $U$. ${\cal A}^{2, \alpha} (B)$, $1 \leq \alpha < \infty$, is defined as
the set of holomorphic $f\colon B \rightarrow \bbd C$ for which $$ \int_B \vert f(z) \vert^2 \left( \frac 1{\vert z\vert} \log \frac 1{1 - \vert z\vert } \right)^{-\alpha} \frac {d\nu (z)}{1-\vert
z\vert} < \infty $$ if $0 < \alpha <\infty$ and ${\cal A}^{2, 1} (B) = H^2(B)$, the Hardy space. The objective of this note is to characterize, in terms of the Bergman distance, those holomorphic
$f\colon B \rightarrow U$ for which the composition operator $C_f \colon {\cal B} \rightarrow {\cal A}^{2, \alpha}(B)$ defined by $C_f (g) = g\circ f$, $g \in {\cal B}$, is bounded. Our result has
a corollary that characterizes the set of analytic functions of bounded mean oscillation with respect to the Bergman metric.
Keywords: Bergman distance, $\BMOA$, Hardy space, Bloch function
393. CMB 1999 (vol 42 pp. 13)
Dow's Principle and $Q$-Sets
A $Q$-set is a set of reals every subset of which is a relative $G_\delta$. We investigate the combinatorics of $Q$-sets and discuss a question of Miller and Zhou on the size $\qq$ of the smallest
set of reals which is not a $Q$-set. We show in particular that various natural lower bounds for $\qq$ are consistently strictly smaller than $\qq$.
Keywords:$Q$-set, cardinal invariants of the continuum, pseudointersection number, $\MA$($\sigma$-centered), Dow's principle, almost disjoint family, almost disjointness principle, iterated forcing
Categories:03E05, 03E35, 54A35
394. CMB 1999 (vol 42 pp. 125)
Modular Vector Invariants of Cyclic Permutation Representations
Vector invariants of finite groups (see the introduction for an explanation of the terminology) have often been used to illustrate the difficulties of invariant theory in the modular case: see, e.g., \cite{Ber}, \cite{norway}, \cite{fossum}, \cite{MmeB}, \cite{poly} and \cite{survey}. It is therefore all the more surprising that the {\it unpleasant} properties of these invariants may be
derived from two unexpected, and remarkable, {\it nice} properties: namely for vector permutation invariants of the cyclic group $\mathbb{Z}/p$ of prime order in characteristic $p$ the image of the
transfer homomorphism $\Tr^{\mathbb{Z}/p} \colon \mathbb{F}[V] \lra \mathbb{F}[V]^{\mathbb{Z}/p}$ is a prime ideal, and the quotient algebra $\mathbb{F}[V]^{\mathbb{Z}/p}/ \Im (\Tr^{\mathbb{Z}/p})$
is a polynomial algebra on the top Chern classes of the action.
Keywords:polynomial invariants of finite groups
395. CMB 1999 (vol 42 pp. 118)
Points of Weak$^\ast$-Norm Continuity in the Unit Ball of the Space $\WC(K,X)^\ast$
For a compact Hausdorff space with a dense set of isolated points, we give a complete description of points of weak$^\ast$-norm continuity in the dual unit ball of the space of Banach space valued
functions that are continuous when the range has the weak topology. As an application we give a complete description of points of weak-norm continuity of the unit ball of the space of vector
measures when the underlying Banach space has the Radon-Nikodym property.
Keywords:Points of weak$^\ast$-norm continuity, space of vector valued weakly continuous functions, $M$-ideals
Categories:46B20, 46E40
396. CMB 1998 (vol 41 pp. 497)
On the construction of Hölder and Proximal Subderivatives
We construct Lipschitz functions such that for all $s>0$ they are $s$-H\"older, and so proximally, subdifferentiable only on dyadic rationals and nowhere else. As applications we construct
Lipschitz functions with prescribed H\"older and approximate subderivatives.
Keywords:Lipschitz functions, Hölder subdifferential, proximal subdifferential, approximate subdifferential, symmetric subdifferential, Hölder smooth, dyadic rationals
Categories:49J52, 26A16, 26A24
397. CMB 1998 (vol 41 pp. 348)
Characterizing continua by disconnection properties
We study Hausdorff continua in which every set of certain cardinality contains a subset which disconnects the space. We show that such continua are rim-finite. We give characterizations of this
class among metric continua. As an application of our methods, we show that continua in which each countably infinite set disconnects are generalized graphs. This extends a result of Nadler for
metric continua.
Keywords:disconnection properties, rim-finite continua, graphs
Categories:54D05, 54F20, 54F50
398. CMB 1998 (vol 41 pp. 267)
On the nonemptiness of the adjoint linear system of polarized manifold
Let $(X,L)$ be a polarized manifold over the complex number field with $\dim X=n$. In this paper, we consider a conjecture of M.~C.~Beltrametti and A.~J.~Sommese and we obtain that this conjecture
is true if $n=3$ and $h^{0}(L)\geq 2$, or $\dim \Bs |L|\leq 0$ for any $n\geq 3$. Moreover we can generalize the result of Sommese.
Keywords:Polarized manifold, adjoint bundle
Categories:14C20, 14J99
399. CMB 1998 (vol 41 pp. 207)
An oscillation criterion for first order linear delay differential equations
A new oscillation criterion is given for the delay differential equation $x'(t)+p(t)x \left(t-\tau(t)\right)=0$, where $p$, $\tau \in \C \left([0,\infty),[0,\infty)\right)$ and the function $T$
defined by $T(t)=t-\tau(t)$, $t\ge 0$ is increasing and such that $\lim_{t\to\infty}T(t)=\infty$. This criterion concerns the case where $\liminf_{t\to\infty} \int_{T(t)}^{t}p(s)\,ds\le \frac{1}{e}.$
Keywords:Delay differential equation, oscillation
400. CMB 1998 (vol 41 pp. 129)
Pluriharmonic symbols of commuting Toeplitz type operators on the weighted Bergman spaces
A class of Toeplitz type operators acting on the weighted Bergman spaces of the unit ball in the $n$-dimensional complex space is considered and two pluriharmonic symbols of commuting Toeplitz type
operators are completely characterized.
Keywords:Pluriharmonic functions, Weighted Bergman spaces, Toeplitz type operators.
Categories:47B38, 32A37
C. Kwok, D. Fox and M. Meila
Real-time particle filters
Technical Report UW-CSE-02-07-01
A shorter version has been accepted for publication at NIPS 2002.
Particle filters estimate the state of dynamical systems from sensor information. In many real time applications of particle filters, however, sensor information arrives at a significantly higher
rate than the update rate of the filter. The prevalent approach to dealing with such situations is to update the particle filter as often as possible and to discard sensor information that cannot be
processed in time. In this paper we present real-time particle filters, which make use of all sensor information even when the filter update rate is below the update rate of the sensors. This is
achieved by distributing samples among the different observations arriving during a filter update. Hence the approach represents posteriors by mixtures of sample sets. The weights of the mixture
components are set so as to minimize the approximation error introduced by the mixture representation. Minimization is achieved by gradient descent using efficient Monte Carlo approximation of the
gradients. Thereby, our approach focuses computational resources (samples) on valuable sensor information. Experiments using data collected with a mobile robot show that our approach yields strong
improvements over other approaches.
Full paper [.ps.gz] (340 kb, 12 pages)
The following animations illustrate how the real time particle filters work.
The environment is an office floor 54x18m^2. The robot moves around the loop on the left, which is symmetrical except for a few boxes placed as "landmarks" along the walls. The dots in the animations
represent the samples, blue beams represent the laser sensor measurements, and the circle represents the true robot position. All sample sets in an estimation window are shown together in different
colors. At the end of each estimation window, the mixture weights of these sets are computed and used for the next estimation window (see report). Typically, the observations detecting the boxes get
very high weights, due to their high information content.
• Fixed window size 6
• Adaptive window - this animation shows our work-in-progress adaptive-window algorithm. In the beginning we start with 12 sample sets in a window, but as time goes by, the robot becomes more certain about its position and needs fewer samples to approximate the distribution, hence a smaller window size suffices. In the end one sample set is all it needs.
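For context on the baseline that real-time particle filters improve on, the sketch below is a plain sequential importance resampling (SIR) update for a one-dimensional toy model. It is not the mixture algorithm of the report; the Gaussian motion and sensor models (and all constants) are made up for illustration.

import java.util.Random;

public class SirStep {
    // One SIR update: predict each particle with a motion model, weight it by
    // the sensor model given observation z, then resample proportionally to weight.
    static double[] step(double[] particles, double z, Random rng) {
        int n = particles.length;
        double[] pred = new double[n];
        double[] w = new double[n];
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            pred[i] = particles[i] + 0.1 * rng.nextGaussian();  // toy motion model
            double d = z - pred[i];
            w[i] = Math.exp(-0.5 * d * d / 0.04);               // toy Gaussian sensor model
            sum += w[i];
        }
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {                           // multinomial resampling
            double u = rng.nextDouble() * sum;
            double c = 0.0;
            int j = 0;
            while (j < n - 1 && c + w[j] < u) { c += w[j]; j++; }
            out[i] = pred[j];
        }
        return out;
    }
}

The real-time scheme in the paper generalizes this by keeping one such sample set per observation within an estimation window and representing the posterior as their weighted mixture.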
Wolfram Demonstrations Project
Gas Absorption Computed Using the Successive Over-Relaxation (SOR) Method
Consider a tray absorption column used to remove an impurity from a gas stream. A pure solvent is used for this absorption operation. The solvent molar flow rate is L and the gas molar flow rate is G; both L and G are considered constant (i.e., the dilute-system hypothesis remains valid). The number of equilibrium stages is N, while the slope of the equilibrium line (m), the solvent-to-gas molar flow rate ratio L/G, and the mole fraction of the impurity in the gas fed to the absorption column are set by the user.
This Demonstration computes the exact McCabe–Thiele diagram using matrix inversion. The horizontal lines represent the theoretical equilibrium stages in the absorption column. The successive over-relaxation (SOR) method is compared to the exact solution by plotting the cumulative squared error versus the number of iterations. The relaxation parameter ω is user-adjustable. For some choices of ω (see the last snapshot, showing a large growing error), the SOR method fails to give good results: a theorem of Kahan states that the SOR method will converge only if ω is chosen in the interval (0, 2). For ω = 1, the SOR method is identical to the Gauss–Seidel technique.
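For readers who want the bare algorithm outside the Demonstration (whose Mathematica source is not reproduced here), a minimal dense-matrix SOR solver might look like the following; the method name, the test system, and the sweep count are all made up for the sketch:

import java.util.Arrays;

public class Sor {
    // Solve A x = b by successive over-relaxation; omega = 1 reduces to
    // Gauss-Seidel, and by Kahan's theorem convergence requires omega in (0, 2).
    static double[] sorDense(double[][] A, double[] b, double omega, int sweeps) {
        int n = b.length;
        double[] x = new double[n];                    // start from the zero vector
        for (int k = 0; k < sweeps; k++) {
            for (int i = 0; i < n; i++) {
                double sigma = 0.0;
                for (int j = 0; j < n; j++)
                    if (j != i) sigma += A[i][j] * x[j];
                double gs = (b[i] - sigma) / A[i][i];  // Gauss-Seidel candidate value
                x[i] += omega * (gs - x[i]);           // relax toward it by factor omega
            }
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] A = {{4, -1, 0}, {-1, 4, -1}, {0, -1, 4}}; // diagonally dominant test system
        double[] b = {15, 10, 10};
        System.out.println(Arrays.toString(sorDense(A, b, 1.25, 50)));
    }
}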
Department of Mathematics
Upcoming Special Events
Blyth Lecture Series (March 10, 12 and 13)
Speaker: Yair Minsky
Title: Complexity in surfaces and 3-manifolds
Abstract: 3-manifolds can be built up by a process of gluing along surfaces, and a good quantitative understanding of this can help us study the geometric structures that these manifolds admit. This
leads to a detailed study of the mapping class group of a surface and its coarse geometry. I will discuss the history of this area over recent (and not so recent) years, and hopefully give an
overview of the current state, where a number of question marks remain.
History: The R.A. Blyth Lectures are an annual distinguished lecture series in Mathematics and Mathematical Science, established by the Department of Mathematics, University of Toronto on the
occasion of the 150th anniversary of the first Professorship in Mathematics at the University. It consists of three lectures by a distinguished mathematician: the first for a general scientific
audience, the second for a general mathematical audience, and the third for specialists in the field.
Information on previous years' events can be found at the following link: http://www.math.toronto.edu/cms/blyth
Cantor space
Cantor space, named after Georg Cantor, is a famous space. Cantor studied it primarily as a subspace of the real line, but it is also important as a space in its own right.
Recall that a binary digit is either $0$ or $1$; the set (or discrete space) of binary digits is the Boolean domain $\mathbb{B}$.
A point in Cantor space is an infinite sequence of binary digits. Accordingly, Cantor space may be denoted $\mathbb{B}^{\mathbb{N}}$, since its set of points is a function set.
An open in Cantor space is a collection $G$ of finite sequences of binary digits (that is a subset of the free monoid $\mathbb{B}^*$) such that:
• If $u \in G$ and $v$ is an extension of $u$ (that is $u$ with possibly additional digits added to the end), then $v \in G$;
• If $u:0 \in G$ and $u:1 \in G$ (where $u:i$ is the immediate extension of $u$ by the digit $i$), then $u \in G$.
A point $\alpha$ belongs to an open $G$ if, for some $u$ in $G$, $\alpha$ is an extension of $u$.
What kind of space?
Traditionally, Cantor space is understood as a topological space. We start with the points, as defined above, then specify which sets of points are open. Although there are other ways to state which
sets are open, we may define a set to be open if it is the set of points that belong to some open $G$ as defined above.
A newer approach is to understand Cantor space as a locale. Then we start with the opens and define an order relation on them to define a frame. In this case, the order relation is the obvious one,
that $G \leq H$ if $G \subseteq H$ as subsets of $\mathbb{B}^*$. Then the points come for free, and correspond precisely to the points as defined above.
In classical mathematics, these two approaches are equivalent; a point is determined by its opens, and an open is determined by its points. The theorem that a point is determined by its opens (so
that Cantor space, as a topological space, is sober) is valid internal to any pretopos with an exponentiable natural numbers object; as such, it applies even in predicative and constructive
mathematics. However, the theorem that an open is determined by its points (so that Cantor space, as a locale, is topological) is equivalent to the fan theorem; it is true in some pretoposes and
accepted by some schools of constructivism but false in other pretoposes and rejected, or even refuted, by other constructivists.
When the fan theorem is not valid, the localic approach is probably better; it allows more of the useful properties of Cantor space to hold.
As a subspace
Cantor space is usually conceived of as a subspace of the real line. Pointwise, it is easy to define the embedding from $\mathbb{B}^{\mathbb{N}}$ into $\mathbb{R}$; we map the infinite sequence $\
alpha$ to the real number
$\sum_{i=1}^{\infty} \frac { 2 \alpha_i } { 3^i } .$
One then checks that this function is in fact an embedding.
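As a quick worked example of the pointwise formula: the alternating sequence $\alpha = (1, 0, 1, 0, \ldots)$ is sent to
$\sum_{k=1}^{\infty} \frac{2}{3^{2k-1}} = \frac{2/3}{1 - 1/9} = \frac{3}{4},$
whose base-$3$ expansion $0.20202\ldots$ indeed uses only the digits $0$ and $2$.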
From the localic perspective, a continuous map is given by a homomorphism of frames in the opposite direction. Given an open $\sim$ in $\mathbb{R}$ (as a binary relation on rational numbers, as
described at locale of real numbers), this is mapped to the open $G$ in Cantor space such that $u \in G$ if and only if
$\sum_{i=1}^{len(u)} \frac { 2 u_i } { 3^i } \sim \sum_{i=1}^{len(u)} \frac { 2 u_i } { 3^i } + \frac 1 { 3^{len(u)} } .$
One then checks that this is an embedding.
I should check this some day; for the moment, I am taking it on faith. -- Toby
In either case, the idea is:
• A point of Cantor space corresponds to a number written in base $3$ with infinitely many digits, using only the digits $0$ and $2$; while
• An open corresponds to a union of intervals, each of which is given by approximating a number in base $3$ to a finite number of digits, using only the digits $0$ and $2$.
One sometimes speaks of the Cantor set to stress that one is considering Cantor space as a subspace of the real line.
Cantor space, especially in its guise as a subspace of the real line, is quite famous; see Wikipedia. Here are some headline properties: | {"url":"http://ncatlab.org/nlab/show/Cantor+space","timestamp":"2014-04-21T12:23:01Z","content_type":null,"content_length":"46731","record_id":"<urn:uuid:62581f51-8c29-4e0e-ac93-64f16667c1b1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00553-ip-10-147-4-33.ec2.internal.warc.gz"} |
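Two standard such properties: Cantor space is, up to homeomorphism, the unique nonempty compact metrizable space that is totally disconnected and perfect (a theorem of Brouwer), and every nonempty compact metrizable space is a continuous image of Cantor space.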
Student Support Forum: 'Help with creating matrix from another matrix' topic
Author Comment/Response
In Response To 'Re: Re: Re: Re: Re: Re: Re: Help with creating ...'
It's a square matrix, so the blank entries under the diagonal hold the same values as above it: [1,2] and [2,1] have the same value, and so do [5,6] and [6,5]. For column 13 and rows 1..6, which are zero in my spreadsheet -- look at your code: [1,13] should be zero, but you wrote it as nonzero.
Do not search the whole [16x24] matrix of values at once, but search each row and finish it before moving on to the next row.
Then, for the first row, the only values are, starting from [1,1]...[1,150], [2,1]...[2,150], [3,1]...[3,150] up to [42,1]...[42,150], which are only for the first row, and the values of this row are the nonzero values in my spreadsheet.
Actually, the code below may help us. This code isn't from Mathematica. It already sums the [150x150] matrix, but only over the upper triangle of the square matrix. Is there any way to translate this code into Mathematica? As you can see, the table in my spreadsheet is for the upper triangle of a square matrix.
for k in [1..16] do
    for i in S[k] do
        for j in S[k] do
            if i <= j then
                total := total + M[i][j]  -- assumed body: accumulate the upper-triangle entry
            end if
        end for
    end for
end for
Biased Anisotropic Diffusion--A Unified Regularization and Diffusion Approach to Edge Detection
K. Niklas Nordstrom
EECS Department
University of California, Berkeley
Technical Report No. UCB/CSD-89-514
May 1989
We present and analyze a global edge detection algorithm based on variational regularization. The algorithm can also be viewed as an anisotropic diffusion method. We thereby unify these two, from the
original outlook, quite different methods. This puts anisotropic diffusion, as a method in early vision, on more solid grounds; it is just as well-founded as the well-accepted standard regularization
techniques. The unification also brings the anisotropic diffusion method an appealing sense of optimality, thereby intuitively explaining its extraordinary performance.
The algorithm to be presented moreover has the following attractive properties.
1. It only requires the solution of a single boundary value problem over the entire image domain -- almost always a very simple (rectangular) region.
2. It converges to the solution of interest.
The first of these properties implies very significant advantages over other existing regularization methods; the computation cost is typically cut by an order of magnitude or more. The second
property represents considerable advantages over the existing diffusion methods; it removes the problem of deciding when to stop, as well as that of actually stopping the diffusion process.
BibTeX citation:
@TechReport{Nordstrom:CSD-89-514,
Author = {Nordstrom, K. Niklas},
Title = {Biased Anisotropic Diffusion--A Unified Regularization and Diffusion Approach to Edge Detection},
Institution = {EECS Department, University of California, Berkeley},
Year = {1989},
Month = {May},
URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/1989/5619.html},
Number = {UCB/CSD-89-514},
Abstract = {We present and analyze a global edge detection algorithm based on variational regularization. The algorithm can also be viewed as an anisotropic diffusion method. We thereby unify these two, from the original outlook, quite different methods. This puts anisotropic diffusion, as a method in early vision, on more solid grounds; it is just as well-founded as the well-accepted standard regularization techniques. The unification also brings the anisotropic diffusion method an appealing sense of optimality, thereby intuitively explaining its extraordinary performance. The algorithm to be presented moreover has the following attractive properties. 1. It only requires the solution of a single boundary value problem over the entire image domain -- almost always a very simple (rectangular) region. 2. It converges to the solution of interest. The first of these properties implies very significant advantages over other existing regularization methods; the computation cost is typically cut by an order of magnitude or more. The second property represents considerable advantages over the existing diffusion methods; it removes the problem of deciding when to stop, as well as that of actually stopping the diffusion process.}
}
EndNote citation:
%0 Report
%A Nordstrom, K. Niklas
%T Biased Anisotropic Diffusion--A Unified Regularization and Diffusion Approach to Edge Detection
%I EECS Department, University of California, Berkeley
%D 1989
%@ UCB/CSD-89-514
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/1989/5619.html
%F Nordstrom:CSD-89-514 | {"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/1989/5619.html","timestamp":"2014-04-21T14:45:58Z","content_type":null,"content_length":"7244","record_id":"<urn:uuid:459deeae-9ea0-4930-b383-14d0797868ec>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00431-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Car Travels At 47 Mi/h When The Brakes Are Suddenly ... | Chegg.com
A car travels at 47 mi/h when the brakes are suddenly applied. Consider how the tires of a moving car come in contact with the road. When the car goes into a skid (with wheels locked up), the rubber
of the tire is moving with respect to the road; otherwise, when the tires roll, normally the point where the tire contacts the road is stationary. Assume the coefficients of friction between the
tires and the road are
μK = 0.80 and μS = 0.90.
(a) Calculate the distance (in m) required to bring the car to a full stop when the car is skidding.
(b) Calculate the distance (in m) required to bring the car to a full stop when the wheels are not locked up.
(c) How much farther does the car go if the wheels lock into a skidding stop? Give your answer as a distance in meters and as a percent of the nonskid stopping distance.
Δx_skid − Δx_no skid = ______ m
(Δx_skid − Δx_no skid) / Δx_no skid = ______ %
(d) Can antilock brakes make a big difference in emergency stops? Explain: ______, because the stopping distance with antilock brakes is ______ the stopping distance without antilock brakes.
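A worked sketch (assuming a level road, g = 9.80 m/s², and constant deceleration; the numbers are a reconstruction, not Chegg's posted answer): with the wheels locked, kinetic friction decelerates the car, so Δx_skid = v²/(2 μK g); with the wheels rolling, static friction applies and Δx_no skid = v²/(2 μS g). Converting v = 47 mi/h ≈ 21.0 m/s gives Δx_skid ≈ (21.0)²/(2 · 0.80 · 9.80) ≈ 28.1 m and Δx_no skid ≈ (21.0)²/(2 · 0.90 · 9.80) ≈ 25.0 m, a difference of about 3.1 m, which is μS/μK − 1 = 12.5% of the non-skid distance. On this account antilock brakes do help, because they keep the tires rolling so the larger static coefficient applies, making the stopping distance with antilock brakes shorter than without.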
Irving Park, Chicago, IL
Chicago, IL 60657
Physics PhD tutor- Math and Science
...These include: classical mechanics, electricity and magnetism, and quantum mechanics. Mathematics: Mathematics education includes calculus at the undergraduate level: both single and
multi-variate, pre-calculus, trigonometry, and algebra. These subjects are a necessary...
Offering 10+ subjects including algebra 2 | {"url":"http://www.wyzant.com/geo_Irving_Park_Chicago_IL_College_Algebra_tutors.aspx?d=20&pagesize=5&pagenum=5","timestamp":"2014-04-24T09:12:06Z","content_type":null,"content_length":"60678","record_id":"<urn:uuid:b7b79bda-9e5c-4240-b0b0-99a84fb5b34b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00060-ip-10-147-4-33.ec2.internal.warc.gz"} |
PHY 277
PHY 277: Computation for Physics and Astronomy
Fall 2011
Place and Time:
Monday, Wednesday, Friday 8:20 - 9:25 AM in the Math Lab (S 235)
Course Web Page:
An introduction to computing on UNIX/Linux computers. Fundamentals of using UNIX/Linux to write computer programs for numerical algorithms to solve computational physics and astronomy problems.
Assignments are carried out in a high-level compiler programming language such as Fortran 90 or C++ and require extensive use of SINC site computers outside the classroom. Prerequisite: PHY 125, 126,
127; or PHY 131, 132, 133, 134; or PHY 141, 142; AMS 151 or MAT 126 or 131 or 141 Advisory Prerequisite: AMS 161 or MAT 127 or 132 or 142 or 171. 3 credits.
Prof. Alan Calder
Email: acalder "at" mail.astro.sunysb.edu, Office: ESS 438, Phone: 632-1176
Lectures, homeworks, etc:
All lecture materials will be on Blackboard
Version 1.05 Available here and on Blackboard.
Useful links for PHY 277:
The University of Utah Unix Tutorial
The University of Surrey Unix Tutorial
Unix file system tutorial
Emacs Reference Card
Vi Reference Card
Configuring the bash shell
Note that students are encouraged to submit helpful web sites to the instructor for inclusion in this list!
updated: 11-December-2011 | {"url":"http://www.astro.sunysb.edu/acalder/phy277/","timestamp":"2014-04-19T12:11:12Z","content_type":null,"content_length":"3833","record_id":"<urn:uuid:0a9487a3-ce13-4be4-841f-bb4bb7e83873>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00535-ip-10-147-4-33.ec2.internal.warc.gz"} |
Final Examination
Compiler Construction
Spring 2000
Final Examination
This is a take-home examination. Exams should be placed in my mailbox by
Mednesday May. 10, 5 pm. You are welcome to use any references and texts,
and to discuss the questions with your colleagues, but the work you hand
in should be strictly yours. Needless to say, this also applies to the
final project, which is due at the end of business, May 12.
Problem 1
Consider the language of the alphabet { 0, 1}, which consists of strings
with the same number of 0'a and 1's.
a) Is the grammar for this language regular? Explain.
b) Sketch a parsing algorithm for this language (No need to find a grammar
for the language).
c) (Optional) write a grammar for this language.
Problem 2
Your project implemented parameter-passing by value for scalar types.
Suppose we want to implement parameter-passing by reference for arrays, and
be able to write the usual:
type t is array (integer range <>) of integer;
function sum (arg : t) return integer is
result : integer := 0;
for J in arg'first .. arg'last loop
result := result + arg (J); -- line (A)
end loop;
return result;
end sum;
Somewhere else we call this function:
table : t (1..N) := ....
X : Integer;
X := sum (table); -- line (B)
a) Indicate what quadruples and what assembly (for your chosen target machine)
would be generated for line (A). You can assume that the bounds of the array
are stored at locations 0 and 1, so the data itself starts at offset 2 from
the array base.
b) What code (quadruples and assembly) needs to be generated for the
assignment at (B)?
Problem 3
Consider the following procedure, that computes the primes numbers up
to some number N.
with text_io; use text_io;
procedure Sieve (N : integer) is
table : array (1..10000) of boolean;
for I in table'range loop
table (I) := True;
end loop;
for I in 2 .. Integer (Sqrt (N)) loop
if table (I) then -- I is a prime
for J in 2 .. Table'Last loop
exit when I * J > Table'Last;
Table (I * J) := False; -- multiple of I
end loop;
end if;
end loop;
for I in 2 .. Table'Last loop
if Table (I) then
put (I); put_line (" is a prime");
end if;
end loop;
end sieve;
a) Write the quadruples corresponding to this procedure. If your compiler
can process this example you can use it as a guide. Describe what needs to be
done for the function call to Sqrt.
b) Show the basic blocks of this procedure.
c) Show the flow graph for this procedure.
d) Identify the loop-invariant computations, if any, and move them out of
loops if possible.
e) are there induction variable present? If so, can their use be optimized?
Problem 4
An expression E is said to be very busy at point P if on all paths from P
the expression E is evaluated before any of its operands is redefined. Note
that this definition means that E itself is not necessarily evaluated at P,
only that E is evaluated downstream on all possible paths from P. If an
expression is very busy at P, it may be advantageous to compute it once at
P, rather than at each downstream place where it is currently computed (this
optimization is called hoisting. For example, if the expression is evaluated
on two branches of an if-statement, it saves space to evaluate it once before
the conditional).
Give a data-flow algorithm to determine whether an expression is very busy
at a point. As usual, assume that you have the program graph and its basic
blocks, functions Busy_in and Busy_out, and local functions Gen and Kill.
Define in a few words what each of these represent, and how they are computed.
Is this a forward or backwards algorithm, and does it use union or intersection? | {"url":"http://cs.nyu.edu/courses/spring00/G22.2130-001/final.htm","timestamp":"2014-04-20T03:11:40Z","content_type":null,"content_length":"4658","record_id":"<urn:uuid:3c6e8bc5-e04d-47d8-818a-3070b052b9f3>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00274-ip-10-147-4-33.ec2.internal.warc.gz"} |
Realizing groups as automorphism groups of graphs.
Frucht showed that every finite group is the automorphism group of a finite graph. The paper is here.
The argument basically is that a group is the automorphism group of its (colored) Cayley graph and that the colors of edge in the Cayley graph can be coded into an uncolored graph that has the same
automorphism group.
This argument seems to carry over to the countably infinite case.
Does anybody know a reference for this?
In the uncountable, is it true that every group is the automorphism group of a graph? (Reference?) It seems like one has to code ordinals into rigid graphs in order to code the uncountably many
colors of the Cayley graph.
gr.group-theory graph-theory automorphism-groups
2 Answers
According to the wikipedia page, every group is indeed the automorphism group of some graph. This was proven independently in
de Groot, J. (1959), Groups represented by homeomorphism groups, Mathematische Annalen 138
and
Sabidussi, Gert (1960), Graphs with given infinite group, Monatshefte für Mathematik 64: 64–67.
Thanks. I somehow missed that wikipedia page. I only found the one with the Frucht graph and related paper. – Stefan Geschke Sep 1 '10 at 8:37
From de Groot's paper you can see that this argument you described goes pretty far, for graphs, topological spaces etc. Coding arbitrary number of colors is done through some
rigid spaces with no automorphisms. – Gjergji Zaimi Sep 1 '10 at 8:53
Yes, Stefan certainly had the right idea in mind. I'll just remark that Sabidussi's paper meets my requirements for recreational math reading; it's only three pages long. –
Tony Huynh Sep 1 '10 at 9:10
In the topological setting or if you want to relate the size of the graph to the size of the group, there are two relevant results:
(1) Any closed subgroup of $S_\infty$, i.e., of the group of all (not just finitary) permutations of $\mathbb N$, is topologically isomorphic to the automorphism group of a countable graph.
(2) The abstract group of increasing homeomorphisms of $\mathbb R$, ${\rm Homeo}_+(\mathbb R)$, has no non-trivial actions on a set of size $<2^{\aleph_0}$. So in particular, it cannot
be represented as the automorphism group of a graph with less than continuum many vertices.
1 Hi Christian! Nice to see you here. – Andres Caicedo Sep 1 '10 at 17:04
Hi Christian! Nice to see you! I fixed a small typo in your post. – François G. Dorais♦ Sep 1 '10 at 17:47
This is interesting, thank you. – Stefan Geschke Sep 2 '10 at 8:52
n-th derivative of e^y and the partition function
September 13th 2009, 09:02 AM #1
MHF Contributor
May 2008
n-th derivative of e^y and the partition function
Suppose $y=f(x)$ is $n$-times differentiable on some interval. For any partition $\alpha: a_1 + \cdots + a_k = n, \ a_j \geq 1,$ define $y^{\alpha}=\prod_{j=1}^k \frac{d^{a_j}y}{dx^{a_j}}.$
Example: for $n=5$ and the partitions $\alpha: 1 + 1 + 1 + 2 = 5$ and $\beta: 2+3=5$ we have $y^{\alpha}= (y')^3y''$ and $y^{\beta}=y''y'''.$
True or false: $\frac{d^n e^y}{dx^n}=\left(\sum_{\alpha} c_{\alpha}y^{\alpha} \right)e^y,$ where the sum is over all the partitions of $n$ and $c_{\alpha}$ are some positive integers depending on $\alpha.$
Last edited by NonCommAlg; September 19th 2009 at 11:37 PM.
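A quick check of the small cases is consistent with the claim: $\frac{d}{dx}e^y = y'e^y$, $\frac{d^2}{dx^2}e^y = \big(y''+(y')^2\big)e^y$, and $\frac{d^3}{dx^3}e^y = \big(y'''+3y'y''+(y')^3\big)e^y$, so for $n=3$ the partitions $3$, $1+2$, and $1+1+1$ appear with coefficients $c_{\alpha} = 1, 3, 1$, all positive integers.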
Constructing an Angle Bisector - Problem 1
A common application of constructing angle bisectors is creating various types of angles. In this problem we’re going to talk about constructing a 30 degree angle. Remember the only two things you
can use are a compass and a straightedge. So what’s our game plan going to be?
Well, step one is going to be, construct a 60 degree angle. We’re going to do that by constructing three congruent line segments. Once we’ve constructed a 60 degree see if we divide that in half,
we’re going to get 30. So the second step is going to be, bisect the angle that we’ve created. So let’s do it.
We’re going to grab our compass, and it doesn’t really matter what you set your compass at as long as you’re consistent and I’m going to take an endpoint. So the first step I’ve already drawn a ray.
So at home if you’re following along, draw a ray. And I’m going to swing an arc from that endpoint.
Keeping my compass the same, I’m going to go over to this point of intersection and I’m going to swing another arc. So this point right here is the same distance from both these endpoints, which
would create an equilateral triangle. Now as you remember, equilateral triangles have angles that are all 60 degrees.
So if I connect these two points right here, then I’ve constructed a 60 degree angle. So if we’re following along our game plan, we’re done with constructing a 60 degree angle now we need to bisect
it. So take out your compass again, and I’m going to change my compass a little bit. I’m going to swing an arc from the vertex so that it intersects my angle in two places.
Now you can change your compass setting but you don’t have to, and from both of these points right here you’re going to swing two more arcs. You want to see where they intersect, because on that
point it will be equidistant from your rays.
So, I’m going to grab a red marker so you can tell the difference here. And I’m going to connect that point of intersection with my vertex, which will create a 30 degree angle. So the key is coming
up with a game plan. We know we construct, can construct a 60 degree angle and if you divide 60 in half, we would have 30 degrees.
So if you’re asked on a test to construct a 15 degree angle, what would you do? Well you would do the exact same procedure, except, now you would bisect that 30 degree angle.
two separate flywheels
D9 XTC
Haha you realized what I was thinking about. I'm just stretching my brain trying to think of how a flywheel could go from storage to generator.
Sounds like you read a post of mine, from a few years ago. If not, I'll say what was on my mind, might help your thoughts.
If your flywheel rim has a large mass of steel, a cavity of some size, and a heating element, then dumping in a weight of mercury will increase the kinetic energy and drive a generator as the speed slows. The mercury boils, the vapor moves out to a condenser and becomes liquid again, returning through the hollow axle, and the process repeats.
A lighter wheel is brought up to speed, then a heavier wheel passes energy to the generator. A 3600 rpm flywheel might fluctuate by no more than 2 or 3 hundred rpm.
Everything will be based on how long it takes to boil whatever weight of mercury is removed at the low speed.
You'll likely have to come up with something other than mercury. I know I'll never try to go anywhere with the idea -- too many easier things to look at.
Number of pairs of integers (x,y) satisfying an equation
July 29th 2009, 11:09 PM
Number of pairs of integers (x,y) satisfying an equation
The equation is
$x^2-4xy+5y^2+2y-4=0$
What is the total number of pairs of integers (x, y) that satisfy the above equation?
How do I solve it?
July 29th 2009, 11:12 PM
Note that $x^2-4xy+5y^2+2y-4=(x-2y)^2+(y+1)^2-5$, so the equation says $(x-2y)^2+(y+1)^2=5$. But $5=1+4=4+1$.
Can you continue?
July 30th 2009, 12:04 AM
Do I have to test pairs like (2,0), (0,2), (6,2), etc. or there is some other methods?
July 30th 2009, 12:19 AM
From here you have to solve four systems:
$\left\{\begin{array}{ll}x-2y=1\\y+1=2\end{array}\right., \ \left\{\begin{array}{ll}x-2y=-1\\y+1=2\end{array}\right., \ \left\{\begin{array}{ll}x-2y=1\\y+1=-2\end{array}\right., \ \left\{\begin{array}{ll}x-2y=-1\\y+1=-2\end{array}\right.$
and to keep the integer solutions.
Similarly for $\left\{\begin{array}{ll}(x-2y)^2=4\\(y+1)^2=1\end{array}\right.$ | {"url":"http://mathhelpforum.com/algebra/96482-number-pair-integers-x-y-satisying-equation-print.html","timestamp":"2014-04-17T11:06:24Z","content_type":null,"content_length":"6099","record_id":"<urn:uuid:10cf7493-f9a7-4ee7-a371-8ad9b43517f1>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00623-ip-10-147-4-33.ec2.internal.warc.gz"} |
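Carrying the four sign choices through in each case (a worked continuation of the hint): $(x-2y)^2=1,\ (y+1)^2=4$ gives $(x,y) = (3,1), (1,1), (-5,-3), (-7,-3)$, while $(x-2y)^2=4,\ (y+1)^2=1$ gives $(x,y) = (2,0), (-2,0), (-2,-2), (-6,-2)$. All eight pairs satisfy the original equation, so the total number of integer pairs is $8$.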
TANH(3) BSD Programmer's Manual TANH(3)
tanh, tanhf - hyperbolic tangent function
#include <math.h>
double
tanh(double x);
float
tanhf(float x);
The tanh() and tanhf() functions compute the hyperbolic tangent of x. For
a discussion of error due to roundoff, see math(3).
Upon successful completion, these functions return the hyperbolic tangent
value. The following may also occur:
1. If x is +- 0, x is returned.
2. If x is NaN, a NaN is returned.
3. If x is positive infinity, a value 1 is returned; if x is
negative infinity, -1 is returned.
4. If x is subnormal, a range error can occur and x is returned.
acos(3), asin(3), atan(3), atan2(3), cos(3), cosh(3), math(3), sin(3),
sinh(3), tan(3)
The described functions conform to ISO/IEC 9899:1999 ("ISO C99").
MirOS BSD #10-current September 18, 2011 1
Math Help
March 16th 2011, 11:13 AM #1
Mar 2011
R&D Costs
A pharmaceutical firm can develop a patentable face cream using either of two independent processes, each of which has a 25 percent chance of costing $5 million, a 25 percent chance of costing
$10 million, and a 50 percent chance of costing $20 million. If the firm can determine the true cost of both processes after spending $2 million on each, what is the minimized expected cost of
developing the patent?
Solution: 12.313
Any ideas HOW to arrive at 12.313?
March 16th 2011, 08:53 PM #2
Since it says minimize, you need to take the derivative of the $E[C]$ function and set it equal to zero.

March 16th 2011, 11:32 PM #3
Well, there's no explicit cost function here. Even if I go for the $C_i$ derivatives, the only thing left will be probabilities, which altogether add up to 1.
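One reading that reproduces the quoted 12.313 (a reconstruction, since the thread itself never resolves it): learn the true cost of both processes, then complete only the cheaper one, counting the $2 million spent on the abandoned process as the only research cost added on top of the chosen process's development cost. With two independent draws, P(min = 5) = 1 − (3/4)² = 7/16, P(min = 10) = (3/4)² − (1/2)² = 5/16, and P(min = 20) = (1/2)² = 4/16, so E[min] = (5·7 + 10·5 + 20·4)/16 = 165/16 = 10.3125, and the expected total is 10.3125 + 2 = 12.3125 ≈ 12.313 million dollars.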
Linear Programming: Sensitivity Analysis and Interpretation of Solution
(The source is Quantitative Methods for Business, by Anderson, Sweeney, Williams, Camm, and Martin.)
1. Investment Advisors, Inc., is a brokerage firm that manages stock portfolios for a number of clients. A particular portfolio consists of U shares of U.S. Oil and H shares of Huber Steel. The
annual return for U.S. Oil is $3 per share and the annual return for Huber Steel is $5 per share. U.S. Oil sells for $25 per share and Huber Steel sells for $50 per share. The portfolio has $80,000
to be invested. The portfolio risk index (0.50 per share U.S. Oil and 0.25 per share for Huber Steel) has a maximum of 700. In addition, the portfolio is limited to a maximum of 1000 shares of U.S.
Oil. The linear programming formulation that will maximize the total annual return of the portfolio is as follows:
Max z = 3U + 5H
Subject to:
25U + 50H ≤ 80,000 Funds available
0.50U + 0.25H ≤ 700 Risk maximum
1U ≤ 1000 U.S. Oil maximum
U, H ≥ 0
Solve the problem using Excel Solver.
a) What is the optimal solution, and what is the value of the total annual return?
b) Which constraints are binding? What is your interpretation of these constraints in terms of the problem?
c) What are the shadow prices for the constraints? Interpret each.
d) Would it be beneficial to increase the maximum amount invested in U.S. Oil? Why or why not?
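(The assignment specifies Excel Solver; purely as a cross-check of Problem 1, the same model can be run with SciPy. This is a sketch of mine, not part of the problem set; linprog minimizes, so the objective is negated.)
from scipy.optimize import linprog
# Max 3U + 5H  becomes  Min -3U - 5H, subject to the three <= constraints.
res = linprog(c=[-3, -5],
              A_ub=[[25, 50], [0.5, 0.25], [1, 0]],
              b_ub=[80000, 700, 1000])
print(res.x, -res.fun)   # roughly U = 800, H = 1200, annual return 8400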
2. Refer to the printout of Problem 1.
a) How much would the return for U.S. Oil have to increase before it would be beneficial to increase the investment in this stock?
b) How much would the return for Huber Steel have to decrease before it would be beneficial to reduce the investment in this stock?
c) How much would the total annual return be reduced if the U.S. Oil maximum were reduced to 900 shares?
3. Innis Investments manages funds for a number of companies and wealthy clients. The investment strategy is tailored to each client’s needs. For a new client, Innis has been authorized to invest
up to $1.2 million in two investment funds: a stock fund and a money market fund. Each unit of the stock fund costs $50 and provides an annual rate of return of 10%; each unit of the money market
fund costs $100 and provides an annual rate of return of 4%.
The client wants to minimize risk subject to the requirement that the annual income from the investment be at least $60,000. According to Innis’s risk measurement system, each unit invested in
the stock fund has a risk index of 8, and each unit invested in the money market fund has a risk index of 3; the higher risk index associated with the stock fund simply indicates that it is the
riskier investment. Innis’s client has also specified that at least $300,000 be invested in the money market fund.
Letting S = units purchased in the stock fund
M = units purchased in the money market fund
leads to the following formulation:
Min z = 8S + 3M
Subject to:
50S + 100M ≤ 1,200,000 Funds available
5S + 4M ≥ 60,000 Annual income
M ≥ 3,000 Units in money market
S, M ≥ 0
Solve the problem using Excel Solver.
a) What is the optimal solution, and what is the minimum risk?
b) What does s3 = 7,000 represent, a slack or a surplus? Explain what it means for this problem.
c) Specify the objective coefficient ranges.
d) How much annual income will be earned by the portfolio?
e) What is the rate of return for the portfolio?
f) What is the shadow price for the funds available constraint? Explain what it means for this problem.
g) Suppose the risk index for the money market fund increases from its current value of 3 to 3.5. How does the optimal solution change, if at all? What is the new total risk?
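(Again a hypothetical SciPy cross-check, this time for Problem 3, not part of the assignment. linprog only accepts <= rows, so each >= constraint is negated.)
from scipy.optimize import linprog
# Min 8S + 3M; 5S + 4M >= 60,000 and M >= 3,000 become <= after negation.
res = linprog(c=[8, 3],
              A_ub=[[50, 100], [-5, -4], [0, -1]],
              b_ub=[1200000, -60000, -3000])
print(res.x, res.fun)    # roughly S = 4000, M = 10000, minimum risk 62000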
4. A company has to determine the best number of three models of a product to produce in order to maximize profits. The models are an economy model, a standard model, and a deluxe model.
Constraints include production capacity limitations (time available in minutes) in each of three departments (cutting and dyeing, sewing, and inspection and packaging) as well as a constraint that
requires the production of at least 1000 economy models. The linear programming model is shown here:
Max z = 3x[1] + 5x[2] + 4.5x[3]
Subject to:
12x[1] + 10x[2] + 8x[3] ≤ 18,000 Cutting and dyeing
15x[1] + 15x[2] + 12 x[3] ≤ 18,000 Sewing
3x[1] + 4x[2] + 2x[3] ≤ 9,000 Inspection and packaging
1x[1] ≥ 1,000 Economy model
x[1], x[2], x[3] ≥ 0
Solve the problem using Excel Solver.
a) How many units of each model should be produced to maximize the total profit contribution?
b) Which constraints are binding?
c) Interpret slack and/or surplus in each constraint.
d) Overtime rates in the sewing department are $12 per hour. Would you recommend that the company consider using overtime in that department? Explain.
e) What is the shadow price for the fourth constraint? Interpret its value for management.
f) Suppose that the profit contribution of the economy model is increased by $1. How do you expect the solution to change? What is the new value of the objective function (profit)?
g) The profit contribution for the standard model is $5 per unit. How much would this profit contribution have to change to make it worthwhile to produce some units of standard model?
5. A company manufactures two products, identified as model A and model B. Each model has its lowest possible production cost when produced on the new production line. However, the new production
line does not have the capacity to handle the total production of both models. As a result, at least some of the production must be routed to a higher-cost, old production line. The minimum
production requirement for next month for model A is 50,000 units; and for model B, it is 70,000 units. The production line capacities in units per month are 80,000 and 60,000 for the new line and
the old line, respectively. The production cost for model A produced on the new line is $30/unit; on the old line, it is $50/unit. The production cost for model B produced on the new line is $25/
unit; on the old line, it is $40/unit.
Let AN = Units of model A produced on the new production line
AO = Units of model A produced on the old production line
BN = Units of model B produced on the new production line
BO = Units of model B produced on the old production line
The objective of the company is to determine the minimum cost production plan. The linear programming model has been formulated below.
Min z = 30AN + 50AO + 25BN + 40BO
Subject to:
AN + AO ≥ 50,000 Minimum production for model A
BN + BO ≥ 70,000 Minimum production for model B
AN + BN ≤ 80,000 Capacity of the new production line
AO + BO ≤ 60,000 Capacity of the old production line
AN, AO, BN, BO ≥ 0
Solve the problem using Excel Solver.
a) What is the optimal solution and what is the total production cost associated with this solution?
b) Which constraints are binding? Explain.
c) Would you recommend increasing the capacity of the old production line? Explain.
d) Would an increase in capacity for the new production line be desirable? Explain.
e) Suppose that the minimum production requirement for model B is reduced from 70,000 units to 60,000 units. What effect would this change have on the total production cost?
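(One more hypothetical SciPy cross-check, this time for Problem 5's four-variable model, on the same pattern as above.)
from scipy.optimize import linprog
# Min 30AN + 50AO + 25BN + 40BO; the two minimum-production rows are
# >= constraints and are negated into linprog's <= form.
res = linprog(c=[30, 50, 25, 40],
              A_ub=[[-1, -1, 0, 0],   # AN + AO >= 50,000
                    [0, 0, -1, -1],   # BN + BO >= 70,000
                    [1, 0, 1, 0],     # AN + BN <= 80,000
                    [0, 1, 0, 1]],    # AO + BO <= 60,000
              b_ub=[-50000, -70000, 80000, 60000])
print(res.x, res.fun)    # roughly AN = 50000, AO = 0, BN = 30000, BO = 40000, cost 3,850,000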
6. The Porsche Club of America sponsors driver education events that provide high-performance driving instruction on actual racetracks. Because safety is a primary consideration at such events,
many owners elect to install roll bars in their cars. Deegan Industries manufactures two types of roll bars for Porsches. Model DRB is bolted to the car using existing holes in the car’s frame.
Model DRW is a heavier roll bar that must be welded to the car’s frame. Model DRB requires 20 pounds of a special high-alloy steel, 40 minutes of manufacturing time, and 60 minutes of assembly time.
Model DRW requires 25 pounds of the special high-alloy steel, 100 minutes of manufacturing time, and 40 minutes of assembly time. Deegan’s steel supplier indicated that at most 40,000 pounds of
the high-alloy steel will be available next quarter. In addition, Deegan estimates that 2000 hours of manufacturing time and 1600 hours of assembly time will be available next quarter. The profit
contributions are $200 per unit for model DRB and $280 per unit for model DRW. The linear programming model for this problem is as follows:
Max z = 200DRB + 280DRW
Subject to:
20DRB + 25DRW ≤ 40,000 Steel available
40DRB + 100DRW ≤ 120,000 Manufacturing minutes
60DRB + 40DRW ≤ 96,000 Assembly minutes
DRB, DRW ≥ 0
Solve the problem using Excel Solver.
a) What are the optimal solution and the total profit contribution?
b) If the available manufacturing time is increased by 500 hours, will the shadow price for the manufacturing time constraint change? Explain.
c) Should Deegan consider using overtime to increase the available assembly time? Why or why not?
d) Because of increased competition, Deegan is considering reducing the price of model DRB such that the new contribution to profit is $175 per unit. How would this change in price affect the
optimal solution? Explain. What is the new total profit contribution?
e) Another supplier offered to provide Deegan Industries with an additional 500 pounds of the steel alloy at $2 per pound. Should Deegan purchase the additional pounds of the steel alloy? Explain.
7. Better Products, Inc., manufactures three products on two machines. In a typical week, 40 hours are available on each machine. The profit contribution and production time in hours per unit are
as follows:
│Category │ Product 1 │ Product 2 │ Product 3 │
│Profit/unit │ $30│ $50│ $20│
│Machine 1 time/unit │ 0.5 │ 2.0 │ 0.75 │
│Machine 2 time/unit │ 1.0 │ 1.0 │ 0.5 │
Two operators are required for machine 1; thus, 2 hours of labor must be scheduled for each hour of machine 1 time. Only one operator is required for machine 2 time. A maximum of 100 labor-hours is
available for assignment to the machines during the coming week. Other production requirements are that product 1 cannot account for more than 50% of the units produced and that product 3 must
account for at least 20% of the units produced.
a) Formulate a linear programming model that can be used to determine the number of units of each product to produce to maximize the total profit contribution.
b) Solve the problem with Excel Solver. What is the optimal solution, and what is the projected weekly profit associated with your solution?
c) How many hours of production time will be scheduled on each machine?
d) What is the value of an additional hour of labor?
e) Assume that labor capacity can be increased to 120 hours. Would you be interested in using the additional 20 hours available for this resource? Develop the optimal production mix assuming the
extra hours are made available. | {"url":"http://www.cob.sjsu.edu/yetimy_m/linearprogramming.htm","timestamp":"2014-04-18T16:37:22Z","content_type":null,"content_length":"45343","record_id":"<urn:uuid:5321b0a4-c6f8-4595-a3dd-52a0a6cb9d51>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
Browse by Fields of Research (ANZSRC 2008)
Number of items at this level: 50.
Deo, Ravinesh C. (2013) Comparative analysis of turbulent plane jets from a sharp-edged orifice, a beveled-edge orifice and a radially contoured nozzle. International Journal of Mechanical,
Industrial Science and Engineering, 7 (12). pp. 1471-1480.
Deo, Ravinesh C. (2013) The role of nozzle-exit conditions on the flow field of a plane jet. International Journal of Mechanical, Industrial Science and Engineering, 7 (12). pp. 1059-1069.
Deo, Ravinesh C. and Nathan, Graham J. and Mi, Jianchun (2013) Similarity analysis of the momentum field of a subsonic, plane air jet with varying jet-exit and local Reynolds numbers. Physics of
Fluids, 25 (1). 015115-1. ISSN 1070-6631
Obregon, M. and Raj, N. and Stepanyants, Y. (2012) Numerical study of nonlinear wave processes by means of discrete chain models. In: 4th International Conference on Computational Methods (ICCM
2012), 25-28 Nov 2012, Gold Coast, Australia.
Nikolaevskiy, Victor and Strunin, Dmitry (2012) The role of natural gases in seismics of hydrocarbon reservoirs. In: III-rd International Conference Elastic Wave Effect on Fluid in Porous Media (EWEF
2012), 24-28 Sept 2012, Moscow, Russia.
Supeni, E. E. and Epaarachchi, J. A. and Islam, M. M. and Lau, K. T. (2012) Design of smart structures for wind turbine blades. In: 2nd Malaysian Postgraduate Conference (MPC 2012), 7-9 July 2012,
Gold Coast, Australia.
Dehkhoda, Sevda and Hood, Michael and Alehossein, Habib and Buttsworth, David (2012) Analytical and experimental study of pressure dynamics in a pulsed water jet device. Flow, Turbulence and
Combustion, 89 (1). pp. 97-119. ISSN 1386-6184
Abeysekera, Vasantha (2012) Mortar consumption characteristics of 'brickwork' and a framework for managing brick and mortar walls in chaotic environments. Engineer, 45 (2). pp. 49-64. ISSN 1800-1122
Strunin, D. V. and Mohammed, F. J. (2012) Numerical analysis of an averaged model of turbulent transport near a roughness layer. ANZIAM Journal (Australian & New Zealand Industrial and Applied
Mathematics Journal), 53. C142-C154. ISSN 1446-8735
Liu, Liwi and Liu, Yanju and Leng, Jinsong and Lau, Kin-tak (2011) Electromechanical stability of compressible dielectric elastomer actuators. Smart Materials and Structures, 20 (11). p. 115015. ISSN
Strunin, Dmitry V. (2011) A model of turbulent dispersion through roughness layer using centre manifolds. In: 6th AIAA Theoretical Fluid Mechanics Conference, 27-30 Jun 2011, Honolulu, HI. United
Lian, C. and Zhuge, Y. and Beecham, S. (2011) The relationship between porosity and strength for porous concrete. Construction and Building Materials, 25 (11). pp. 4294-4298. ISSN 0950-0618
Shiau, Jim S. (2011) A shakedown limit under Hertz contact pressure. In: 2011 International Conference on Advanced Engineering Materials and Technology (AEMT 2011), 29-31 Jul 2011, Sanya, China.
Strunin, D. V. (2011) Universality of turbulent dispersion in a steady flow in an open channel. Quarterly Journal of Mechanics and Applied Mathematics, 64 (2). pp. 197-214. ISSN 0033-5614
Stepanyants, Yu. A. and Yakubovich, Evsey (2011) Scalar description of three-dimensional vortex flows of incompressible fluid. Doklady Physics, 56 (2). pp. 130-133. ISSN 1028-3358
Malpress, Ray and Buttsworth, David (2010) Assessment of an eccentric link in the connecting rod of a spark ignition engine intended for variable compression ratio operation. In: 6th Australasian
Congress on Applied Mechanics (ACAM 6), 12-15 Dec 2010, Perth, Western Australia.
Deo, Ravinesh C. and Nathan, Graham J. and Mi, Jianchun (2010) On the influence of initial conditions on a turbulent plane jet: the role of nozzle exit area. In: 17th Australasian Fluid Mechanics
Conference (AFMC 2010), 5-9 Dec 2010, Auckland, New Zealand.
Phillips, D. G. and Tran, Canh-Dung and Fraser, W. B. and van der Heijden, G. H. M. (2010) Torsional properties of staple fibre plied yarns. Journal of the Textile Institute, 101 (7). pp. 595-612.
ISSN 0040-5000
Ning, Zhiliang and Wang, Hao and Sun, Jianfei (2010) Deformation behavior of semisolid A356 alloy prepared by low temperature pouring. Materials and Manufacturing Processes, 25 (7). pp. 648-653. ISSN
Strunin, Dmitry V. (2010) An averaged model of dispersion of pollutant in a channel: logarithmic flow. In: 29th IASTED International Conference on Modelling, Identification and Control (MIC 2010),
15-17 Feb 2010, Innsbruck, Austria.
Ku, H. S. and Fok, S. C. and Siores, E. (2009) Contrasts on fracture toughness and flexural strength of varying percentages of SLG-reinforced phenolic composites. Journal of Composite Materials, 43
(8). pp. 885-895. ISSN 0021-9983
Yusaf, Talal F. (2009) Diesel engine optimization for electric hybrid vehicles. Journal of Energy Resources Technology, 131 (1). 12203-1-12203-4. ISSN 0195-0738
Strunin, Dmitry V. and Roberts, Anthony J. (2009) Low-dimensional boundary-layer model of turbulent dispersion in a channel. In: WCE 2009: World Congress of Engineering , 1-3 Jul 2009, London, United
Mai-Duy, Nam and Tran-Cong, Thanh (2008) A second-order continuity domain-decomposition technique based on integrated Chebyshev polynomials for two-dimensional elliptic problems. Applied Mathematical
Modelling, 32 (12). pp. 2851-2862. ISSN 0307-904X
Bordas, Stephane and Duflot, Marc and Le, Phong (2008) A simple error estimator for extended finite elements. Communications in Numerical Methods in Engineering, 24 (11). pp. 961-971. ISSN 1069-8299
Golshani, Aliakbar and Tran-Cong, Thanh and Buttsworth, David (2008) Impact on a water filled cylinder. In: ACMFMS 2008: Asian Conference on Mechanics of Functional Materials and Structures, 31 Oct-3
Nov 2008, Matsue, Japan.
Islam, Md Mainul and Kim, Ho Sung (2008) Manufacture of syntactic foams using starch as binder: post-mold processing. Materials and Manufacturing Processes, 23 (8). pp. 884-892. ISSN 1042-6914
Suslov, Sergey A. and Tran, Thien Duc (2008) Revisiting plane Couette-Poiseuille flows of a piezo-viscous fluid. Journal of Non-Newtonian Fluid Mechanics, 154 (2-3). pp. 170-178. ISSN 0377-0257
Caputo, Jean-Guy and Stepanyants, Yury (2008) Front solutions of Richards' equation. Transport in Porous Media, 74 (1). pp. 1-20. ISSN 0169-3913
Tran, Canh-Dung and van der Heijden, G. H. M. and Phillips, David G. (2008) Application of topological conservation to model key features of zero-torque multi-ply yarns. Journal of the Textile
Institute, 99 (4). pp. 325-337. ISSN 0040-5000
Stepanyants, Yury A. (2008) Solutions classification to the extended reduced Ostrovsky equation. Symmetry, Integrability and Geometry: Methods and Applications (SIGMA), 4.
Mai-Cao, L. and Tran-Cong, T. (2008) A meshless approach to capturing moving interfaces in passive transport problems. CMES: Computer Modeling in Engineering and Sciences, 31 (3). pp. 157-188. ISSN
Khennane, Amar and Tran-Cong, Thanh (2007) Timoshenko beam-solution in terms of integrated radial basis functions. In: 5th Australasian Congress on Applied Mechanics (ACAM 2007), 10-12 Dec 2007,
Brisbane, Australia.
Mai-Duy, Nam and Tanner, Roger I. (2007) A spectral collocation method based on integrated Chebyshev polynomials for two-dimensional biharmonic boundary-value problems. Journal of Computational and
Applied Mathematics, 201 (1). pp. 30-47. ISSN 0377-0427
Ku, H. and Trada, M. and Puttgunta, V. C. (2007) Mechanical properties of vinyl ester composites cured by microwave irradiation: pilot study. Key Engineering Materials, 334-33. pp. 537-540. ISSN
Mai-Duy, Nam and Tran-Cong, Thanh (2007) A RBF-based fictitious-domain technique for Dirichlet boundary value problems in multiply-connected domains. In: 3rd Asian-Pacific Congress on Computational
Mechanics (APCOM'07) and EPMESC XI, 3-6 Dec 2007, Kyoto, Japan.
Johnson, K. and Lemcke, Pamela M. and Karunasena, W. and Sivakugan, Nagaratnam (2006) Modelling the load-deformation response of deep foundations under oblique loading. Environmental Modelling and
Software, 21 (9). pp. 1375-1380. ISSN 1364-8152
Strunin, D. V. (2006) Similarity without diffusion: shear turbulent layer damped by buoyancy. Journal of Engineering Mathematics, 54 (3). pp. 211-224. ISSN 0022-0833
Liu, Hong-Yuan and Yan, Wenyi and Mai, Yiu-Wing (2006) Z-pin bridging in composite laminates and some related problems. Australian Journal of Mechanical Engineering, 3 (1). pp. 11-19.
Melnik, R. V. N. and Strunin, D. V. and Roberts, A. J. (2005) Nonlinear analysis of rubber-based polymeric materials with thermal relaxation models. Numerical Heat Transfer Part A: Applications, 47
(6). pp. 549-569. ISSN 1040-7782
Stepanyants, Yury (2005) Dispersion of long gravity-capillary surface waves and asymptotic equations for solitons. Proceedings of the Russian Academy of Engineering Sciences Series: Applied
Mathematics and Mechanics, 14. pp. 33-40.
Dai, Shao-Cong and Yan, Wenyi and Liu, Hong-Yuan and Mai, Yiu-Wing (2004) Experimental study on z-pin bridging law by pullout test. Composites Science and Technology, 64 (16). pp. 2451-2457. ISSN
Merifield, Richard S. and Sloan, Scott W. and Lyamin, Andrei V. (2003) The stability of inclined plate anchors in purely cohesive soil. Technical Report. University of Southern Queensland, Toowoomba,
Yan, Wenyi and Wang, Chun Hui and Zhang, Xin Ping and Mai, Yiu-Wing (2003) Theoretical modelling of the effect of plasticity on reverse transformation in superelastic shape memory alloys. Materials
Science and Engineering A: Structural Materials: Properties, Microstructures and Processing , 354 (1-2). pp. 146-157. ISSN 0921-5093
Yan, Wenyi and Liu, Hong-Yuan and Mai, Yiu-Wing (2003) Numerical study on the mode I delamination toughness of z-pinned laminates. Composites Science and Technology, 63 (10). pp. 1481-1493. ISSN
Strunin, Dmitry V. (2003) Nonlinear instability in generalized nonlinear phase diffusion equation. Progress of Theoretical Physics. Supplement (150). pp. 444-448. ISSN 0375-9687
Sharifian, S. Ahmad and Buttsworth, David R. (2000) Deflection of a pretensioned circular diaphragm due to aerothermal loading. In: 4th Biennial Engineering Mathematics and Applications Conference :
Mathematics and Engineering: An Innovative Partnership (EMAC 2000), 10-13 Sep 2000, Melbourne, Australia.
Melnik, R. V. N. and Roberts, A. J. and Thomas, K. A. (2000) Mathematical and numerical analysis of Falk-Konopka-type models for shape-memory alloys. International Journal of Differential Equations
and Applications, 1A (3). pp. 291-300. ISSN 1311-2872
Suslov, Sergey A. and Roberts, A. J. (1998) Advection-dispersion in symmetric field-flow fractionation channels. Working Paper. University of Southern Queensland, Faculty of Sciences, Toowoomba,
Fabrikant, Anatoly and Stepanyants, Yury (1998) Propagation of waves in shear flows. World Scientific, Singapore. ISBN 978-981-02-2052-5 | {"url":"http://eprints.usq.edu.au/view/for08/010207.html","timestamp":"2014-04-19T23:27:51Z","content_type":null,"content_length":"28779","record_id":"<urn:uuid:205f11fb-da81-4b38-8051-b04a55947924>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00091-ip-10-147-4-33.ec2.internal.warc.gz"} |
maximum and minimum
August 5th 2012, 08:53 PM #1
Junior Member
Jul 2012
the work done by a voltaic cell of constant electromotive force E and constant resistance r in passing a steady current through an external resistance R is proportional to E^2R/(r+R)^2. Show
that the work done is greatest when R=r
how to solve this
Re: maximum and minimum
Use the First Derivative Test for work as a function of R.
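For later readers, here is the computation that hint points to (my own working, not the original reply). With $W(R) = \frac{E^2 R}{(r+R)^2}$,
$\frac{dW}{dR} = E^2 \, \frac{(r+R)^2 - 2R(r+R)}{(r+R)^4} = E^2 \, \frac{r - R}{(r+R)^3},$
which is positive for $R < r$ and negative for $R > r$, so the work done is greatest when $R = r$.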
Oct 2009 | {"url":"http://mathhelpforum.com/calculus/201785-maximujm-minimum.html","timestamp":"2014-04-18T03:02:15Z","content_type":null,"content_length":"31716","record_id":"<urn:uuid:3f066df4-4538-4aa5-9481-6a627438549e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00132-ip-10-147-4-33.ec2.internal.warc.gz"} |
Katsumi Homma, Kawasaki JP
Published patent applications:
20080222586 (published 09-11-2008): Delay analysis apparatus, delay analysis method and computer product - Within-die delay distributions and die-to-die delay distributions of two arbitrary paths in an analysis
target circuit are extracted from a delay distribution library, and an effect index indicative of a relative error of an overall path delay distribution of one path and an
overall path delay distribution when the two paths are integrated as one path is calculated based on the within-die delay distributions and the die-to-die delay distributions
of the two paths. When the effect index is determined to be equal to or above a threshold, the overall path delay distribution of the two paths integrated as one path is
calculated. Hence, a path that affects an analysis result alone is selected to execute a statistical Max operation, thereby increasing a speed of delay analysis processing.
20080244487 (published 10-02-2008): Delay analysis support apparatus, delay analysis support method and computer product - A delay analysis support apparatus that supports analysis of delay in a target circuit
includes an acquiring unit that acquires error information concerning a cell-delay estimation error that is dependent on a characterizing tool; an error calculating unit that
calculates, based on the error information and a first probability density distribution concerning the cell delay of each cell and obtained from the cell delay estimated by the
characterizing tool, a second probability density distribution that concerns the cell-delay estimation error of each cell; and an linking unit that links the second probability
density distribution and a cell library storing therein the first probability density distribution.
20090007044 (published 01-01-2009): Design support method and apparatus, and computer product - A design support apparatus includes an extracting unit that extracts a first cell from among plural cells in a
target circuit; a detecting unit that detects a second cell arranged adjacent to the first cell; and a setting unit that sets a delay value of the first cell according to an
arrangement pattern of the second cell.
20090055142 (published 02-26-2009): METHOD AND APPARATUS FOR ESTIMATING MAN-HOURS - A method for estimating a man-hours of an entire project having a series of tasks with a computer includes, inputting an
estimated man-hours of the each task, acquiring model functions for extracting estimation errors included in the estimated man-hours of the each task based on an attribute of a
worker who performs the each task, calculating a probability density distribution representing estimation errors depending on the attribute and a probability density
distribution representing modeling errors depending on methods for estimating the man-hours for each task using the model functions, calculating man-hours of the entire project
having a series of tasks for the each task using statistical methods to accumulate the probability density distribution representing estimation errors and the probability
density distribution representing the modeling errors, and outputting calculating results of man-hours of the entire project to a output device.
20090138838 (published 05-28-2009): METHOD AND APPARATUS FOR SUPPORTING DELAY ANALYSIS, AND COMPUTER PRODUCT - A delay distribution of a partial path that passes through a node to which a plurality of signals is
input and for which an estimation in a statistical MAX is predicted to be large, that is present on a critical path having large influence on a circuit delay, and that has high
possibility of improving the circuit delay, among nodes in a circuit graph is calculated by the Monte Carlo simulation instead of the block based simulation, thereby increasing
speed and accuracy of delay analysis.
20090222773 (published 09-03-2009): LEAKAGE CURRENT ANALYZING APPARATUS, LEAKAGE CURRENT ANALYZING METHOD, AND COMPUTER PRODUCT - A leakage current analyzing apparatus receives input of data used for analysis and
indicating intra/inter-chip variation concerning the gate length of transistors constituting cells in a circuit to be designed, where the inter-chip variation is handled as a
discrete probability density distribution R. Using the data input, the leakage current analyzing apparatus obtains a cumulative probability density for a leakage current value
(of the circuit) that is equal to or less than each arbitrary leakage current value I
20100017765 (published 01-21-2010): MONITOR POSITION DETERMINING APPARATUS AND MONITOR POSITION DETERMINING METHOD - A monitor position determining apparatus includes an acquiring unit that acquires design data
concerning circuit elements arranged in a layout of a semiconductor device and for each of the circuit elements, yield sensitivity data indicative of a percentage of change
with respect to a yield ratio of the semiconductor device; a selecting unit that selects, based on the yield sensitivity data, a circuit element from a circuit element group
arranged in the layout; a determining unit that determines an arrangement position in the layout to be an installation position of a monitor that measures a physical amount in
the semiconductor device in a measurement region, the arrangement position being of the circuit element that is specified from the design data acquired by the acquiring unit
and selected by the selecting unit; and an output unit that outputs the installation position determined by the determining unit.
20100131249 (published 05-27-2010): METHOD AND APPARATUS FOR SUPPORTING VERIFICATION OF LEAKAGE CURRENT DISTRIBUTION - A leakage current distribution verification support method includes a process including
obtaining the estimated number L of cells in the custom macro circuit and the first arithmetic expression including a polynomial with a term having a common parameter α
representing variations arising from each cell in the custom macro circuit and with a term having a parameter β representing variations arising from the entirety of the custom
macro circuit, generating a second arithmetic expression including a polynomial with a term having a parameter α
20100292977 (published 11-18-2010): SUPPORT COMPUTER PRODUCT, APPARATUS AND METHOD - A computer-readable recording medium stores therein a program causing a computer that accesses a simulator to execute receiving
a measured yield distribution that expresses an actually measured yield distribution concerning leak current of a circuit-under-design, and model data for leak current of a
cell of the circuit-under-design; providing the simulator with the model data and values for a normal distribution concerning variation components of the leak current of the
cell; acquiring the leak current of the circuit-under-design; calculating, based on the acquired leak current, an estimated yield distribution concerning the leak current of
the circuit-under-design; calculating values for the normal distribution that minimize error between the measured yield distribution and the estimated yield distribution;
setting an initial value to the normal distribution and the calculated values for the normal distribution to the normal distribution; and outputting the estimated yield
distribution that is based on the leak current of the circuit-under-design.
20110125480 (published 05-26-2011): COMPUTER PRODUCT, ANALYSIS SUPPORT APPARATUS, AND ANALYSIS SUPPORT METHOD - A non-transitory, computer-readable recording medium stores therein a program causing a computer to
execute calculating, using respective standard deviations of first delay distributions of delay variation independent to each element included in a path among parallel paths in
a circuit, standard deviation of a first delay distribution of the path when modeled as a series circuit; correcting the standard deviation of the first delay distribution for
each element, using the calculated standard deviation of the first delay distribution of the path and a standard deviation of a first delay distribution of the path obtained by
a statistical delay analysis on the circuit; obtaining a correlation distribution representing a correlation between delay and leak current of the circuit by executing, using
the corrected standard deviation of the first delay distribution for each element, correlation analysis between delay and leak current of the target circuit; and outputting the
obtained correlation distribution.
20110276286 (published 11-10-2011): ANALYSIS SUPPORT COMPUTER PRODUCT, APPARATUS, AND METHOD - A computer-readable, non-transitory medium stores a program that causes a computer to execute a process including
acquiring a unique coefficient that is unique to a device in a circuit under test and is included in a function expressing fluctuation of leak current of the device; detecting
as a group and based on the unique coefficient, devices having an identical or similar characteristic; converting first random variables into a single second random variable,
the first random variables expressing fluctuation of leak current unique to each of the detected devices; yielding a function that expresses fluctuation of leak current of the
detected devices, using the second random variable; and outputting the yielded function.
Patent applications by Katsumi Homma, Kawasaki JP | {"url":"http://www.faqs.org/patents/inventor/katsumi-homma-kawasaki-jp-1/","timestamp":"2014-04-19T03:54:38Z","content_type":null,"content_length":"15485","record_id":"<urn:uuid:e1bb41f9-2b09-42d4-a2ff-a11d7a0ab10c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
just wondering
November 12th 2007, 06:11 AM #1
A student moving into a new flat buys a box of 20 light bulbs. Unknown to the student, exactly one of the bulbs is defective. Find:
b) The conditional probability that the second bulb used is not defective, given that the first bulb used is not defective;
If the first bulb is not defective then you have 19 bulbs from which you can choose, and one of these is defective so the probability is $\frac{18}{19}$.
This is the easy way to work it out but I am having trouble using the actual formula which may come in handy for other examples:
Let A be the event that the 2nd bulb is not defective.
Let B be the event that the 1st bulb is not defective.
Then $P(A|B)=\frac{P(A\cap B)}{P(B)}$
$P(B) = \frac{19}{20}$ but how can you work out $P(A\cap B)$?
$P(A\cap B) = P(A|B)\cdot P(B)$
$P(A\cap B) = \frac{18}{19}\cdot \frac{19}{20}=\frac{18}{20}$
So how can we work out that $P(A\cap B) = \frac{18}{20}$ from what we are given?
I apologise for any offence my pedantry has caused
$P(X \cap Y) = P(X|Y)P(Y)$
$P(A \cap B) = P(B)P(A|B) = \left( \frac{19}{20} \right)\frac{18}{19} = \frac{18}{20}$
Let me rephrase my question:
A student moving into a new flat buys a box of 20 light bulbs. Unknown to the student, exactly one of the bulbs is defective. Let A be the event the 2nd bulb is defective, and let B be the event
the 1st bulb is defective. What is $P(A\cap B)$?
$P(A \cap B) = \left( {\frac{2}{{20}}} \right)\left( {\frac{1}{{19}}} \right)$
Thank you but once again I have asked the wrong thing.
A student moving into a new flat buys a box of 20 light bulbs. He takes a number of bulbs from the box without replacement. Unknown to the student, exactly one of the bulbs is defective. Find:
Let A be the event the 2nd bulb is NOT DEFECTIVE and B the event that the 1st bulb is NOT DEFECTIVE. Find $P(A\cap B)$.
Thanks and sorry about this the stress is making me do silly things.
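(A quick check of my own, not part of the thread: with A and B both meaning "not defective", direct counting over ordered draws gives the same 18/20.)
from itertools import permutations
bulbs = ["D"] + ["G"] * 19            # exactly one defective bulb
both_good = sum(bulbs[i] == "G" and bulbs[j] == "G"
                for i, j in permutations(range(20), 2))
print(both_good / (20 * 19))          # 342/380 = 0.9 = 18/20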
November 12th 2007, 09:55 AM #5 | {"url":"http://mathhelpforum.com/statistics/22569-just-wondering.html","timestamp":"2014-04-17T15:07:24Z","content_type":null,"content_length":"44826","record_id":"<urn:uuid:a7131c63-3aa0-48d7-aee6-d134e5a04ef5>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00655-ip-10-147-4-33.ec2.internal.warc.gz"} |
verify rolle's theorem for f(x)=x(x+3)e^-(x/2)
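The page preserves only the question, so here is one way to verify it (a sketch of mine, not an answer from the page). $f(x) = x(x+3)e^{-x/2}$ is continuous and differentiable everywhere, and $f(-3) = f(0) = 0$, so Rolle's theorem applies on $[-3, 0]$. Differentiating,
$f'(x) = (2x+3)e^{-x/2} - \tfrac{1}{2}x(x+3)e^{-x/2} = -\tfrac{1}{2}e^{-x/2}(x^2 - x - 6) = -\tfrac{1}{2}e^{-x/2}(x-3)(x+2),$
which vanishes at $x = -2 \in (-3, 0)$, exhibiting the point $c$ that the theorem guarantees (the other root, $x = 3$, lies outside the interval).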
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4f0877aae4b014c09e63b32e","timestamp":"2014-04-20T16:12:27Z","content_type":null,"content_length":"37257","record_id":"<urn:uuid:04b1ba85-be86-48ec-b03d-9ee52c349453>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00591-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SOLVED] Lipschitz
August 1st 2008, 08:22 PM #1
Jul 2008
Suppose that $p:R \rightarrow R$ is a polynomial. Show that $p:R \rightarrow R$ is Lipschitz if and only if the degree of the polynomial is less than 2.
By the definition of Lipschitz, we need to prove that for a polynomial of degree less than 2 (equivalently, $\leq 1$),
$| p(x_{1}) - p(x_{2}) | \leq K |x_{1} - x_{2}|$ for some $K \geq 0$ and for all $x_{1}, x_{2} \in R$
Here I'm not sure how to approach this. Since p is a polynomial, then it must be continuous. I was thinking of showing that if p is a polynomial of degree less than 2, then its derivative is a constant.
Thus, its derivative is bounded and therefore a Lipschitz function.
Thank you for your time.
obviously every polynomial of degree at most 1 is Lipschitz. so we only need to show that a polynomial of degree
at least 2 is not Lipschitz. so suppose p(x) is a polynomial of degree at least 2. see that $\lim_{x\to\infty}p'(x) = \pm \infty. \ \ \ (1)$
now suppose p(x) is Lipschitz. then for some constant $K \geq 0: \ \left|\frac{p(t)-p(x)}{t-x}\right| \leq K,$ for all $t \neq x.$ taking limit of
both sides as $t \rightarrow x$ gives us $|p'(x)| \leq K,$ which will contradict (1) if we let $x \rightarrow \infty. \ \ \ \square$
Thank you NonCommAlg. I was planning to prove by contradiction, but didn't know how to show it mathematically. By dividing both sides by |t-x| and taking the limit, we get the definition of a
Jul 2008 | {"url":"http://mathhelpforum.com/calculus/45100-solved-lipschitz.html","timestamp":"2014-04-21T12:10:45Z","content_type":null,"content_length":"38473","record_id":"<urn:uuid:649edcf8-6680-4924-b4c7-619d94b8a75f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00258-ip-10-147-4-33.ec2.internal.warc.gz"} |
The European Mathematical Society
Institute for the Mathematical Sciences, ICMAT, Madrid, Spain.
Short description of the event:
This workshop is to commemorate the 60th birthday of Manuel de León. Manuel de León is one of the most important researchers on geometric mechanics. It is worth highlighting his contributions on a
wide range of topics such as symplectic geometry, Poisson manifolds, nonholonomic mechanics, Cosserat media, geometric integrators, optimal control theory, among many others. Another facet that makes
Manuel de León well-known is his intense dedication to the diffusion of mathematics and his role in mathematical organizations at the international and national levels, including the International
Mathematical Union (IMU) and the Real Sociedad Matemática Española (RSME). One could keep going by emphasizing his work on the didactics of mathematics, its popularization, edition and direction of
scientific journals, and many other facets. The sum of all these contributions makes Manuel de León deserve this tribute.
Dear colleagues,
We would like to draw your attention to the workshop deLeónfest 2013. This event is to commemorate the 60th birthday of Professor Manuel de León and it will take place at Instituto de Ciencias
Matemáticas in Madrid, Spain, from December 16 to 19, 2013.
Manuel de León has contributed enormously to many aspects of mathematics and research, and still does. Among his many facets we could highlight his research on symplectic geometry, Poisson
manifolds, nonholonomic mechanics, geometric integrators, optimal control theory, etc., his active role in the diffusion of mathematics, in mathematical organizations at the international and national
levels, and in mathematics popularization and the editing and direction of scientific journals.
The registration is now open. The registration fee is 100 euros for seniors and 50 euros for students.
More information about this event is available at
If you need any further information, please contact us at
We hope to see you in Madrid for this tribute to Manuel de León.
Best regards,
David Martín de Diego
(chair of the organizing committee). | {"url":"http://www.euro-math-soc.eu/node/3765","timestamp":"2014-04-17T22:04:18Z","content_type":null,"content_length":"13235","record_id":"<urn:uuid:cac4fc8b-54aa-4776-a786-e450fd25aeb4>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00061-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines
John Platt
April 1998
This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization , or SMO . Training a support vector machine requires the solution of a very large quadratic
programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a
time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because
matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between
linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On real-world sparse data sets, SMO can
be more than 1000 times faster than the chunking algorithm.
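(The report gives Platt's full pseudocode, including the two heuristic loops for choosing which multipliers to optimize. Purely as an illustration of the core idea of solving the smallest possible two-multiplier QP analytically, here is the widely taught "simplified SMO" variant that picks the second index at random; it is a sketch of mine, not the published algorithm, and it omits Platt's working-set heuristics and caching.)
import numpy as np

def simplified_smo(X, y, C=1.0, tol=1e-3, max_passes=5):
    # X: (n, d) data matrix, y: labels in {-1, +1}; linear kernel throughout.
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = X.shape[0]
    K = X @ X.T
    alpha, b = np.zeros(n), 0.0
    rng = np.random.default_rng(0)
    f = lambda i: (alpha * y) @ K[:, i] + b          # decision value at x_i
    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            E_i = f(i) - y[i]
            # Only touch alpha_i if it violates the KKT conditions.
            if not ((y[i] * E_i < -tol and alpha[i] < C) or
                    (y[i] * E_i > tol and alpha[i] > 0)):
                continue
            j = rng.integers(n - 1)
            j = j + 1 if j >= i else j               # random j != i
            E_j = f(j) - y[j]
            ai, aj = alpha[i], alpha[j]
            if y[i] != y[j]:                         # box for the 2-variable QP
                L, H = max(0.0, aj - ai), min(C, C + aj - ai)
            else:
                L, H = max(0.0, ai + aj - C), min(C, ai + aj)
            eta = 2 * K[i, j] - K[i, i] - K[j, j]
            if L == H or eta >= 0:
                continue
            alpha[j] = np.clip(aj - y[j] * (E_i - E_j) / eta, L, H)
            if abs(alpha[j] - aj) < 1e-5:
                continue
            alpha[i] = ai + y[i] * y[j] * (aj - alpha[j])
            # Re-fit the threshold so the KKT conditions hold at i and j.
            b1 = b - E_i - y[i] * (alpha[i] - ai) * K[i, i] - y[j] * (alpha[j] - aj) * K[i, j]
            b2 = b - E_j - y[i] * (alpha[i] - ai) * K[i, j] - y[j] * (alpha[j] - aj) * K[j, j]
            b = b1 if 0 < alpha[i] < C else (b2 if 0 < alpha[j] < C else (b1 + b2) / 2)
            changed += 1
        passes = passes + 1 if changed == 0 else 0
    return (alpha * y) @ X, b                        # primal weights and bias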
Type TechReport
Number MSR-TR-98-14
Pages 21
Institution Microsoft Research
Related Labs | {"url":"http://research.microsoft.com/apps/pubs/default.aspx?id=69644","timestamp":"2014-04-20T19:07:40Z","content_type":null,"content_length":"12892","record_id":"<urn:uuid:25c0f276-2994-481d-ba64-f5140024bc6f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00165-ip-10-147-4-33.ec2.internal.warc.gz"} |
This is how I do powers and logs:
For the function $y = a^x$
at (0,1), the derivative is: $\lim_{h \to 0} \frac{a^h - 1}{h}$
Even though I don't know what that is, it will have a value; let's say k.
Now the derivative at other points: $\frac{d}{dx}\,a^x = \lim_{h \to 0} \frac{a^{x+h} - a^x}{h} = a^x \lim_{h \to 0} \frac{a^h - 1}{h} = k \, a^x$
So all graphs in the family have the property that the gradient function at x is a^x times the gradient at (0,1)
In the family there will be one value of a for which k = 1
Call that one a = e
Now suppose $y = a^x$
Taking logs base e for the first expression: $\ln y = x \ln a$
Differentiating wrt x: $\frac{1}{y} \frac{dy}{dx} = \ln a$, so $\frac{dy}{dx} = a^x \ln a$
which means we now know the value of k ... and
so [still working on this last bit but I think I'll post before I lose it all]
No good. I seem to be stuck here because if k = ln a this becomes ln x and I was trying hard to avoid that. I seem to have gone too far and proved the log
base e. I'll come back to it later after a think. | {"url":"http://www.mathisfunforum.com/post.php?tid=19468&qid=269369","timestamp":"2014-04-18T05:57:21Z","content_type":null,"content_length":"28895","record_id":"<urn:uuid:bf590223-6024-406d-8662-825246ddebd1>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00389-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mr Bezier’s New Perspective
Posted: May 19, 2011 Filed under: OpenScad Tutorial 1 Comment
Work on the OpenScad Bezier functions moves along apace. It’s kind of funny how easy it is to forget the high school and college math! I distinctly remember back in the 10th grade knowing about
‘the right hand rule’, and ‘matrix math’, and all the trig you could care to swallow. Somehow now, even after having created a full blown 3D library in the past, I’m actually having to use graph
paper to visualize what I’m doing as I do it.
The latest iteration gets one step closer to Bezier mesh perfection: http://www.thingiverse.com/thing:8643
In this version, I managed to get a Mesh function that traverses curves in one direction (u) and then uses values from that to go in the other direction (v). It actually works, particularly if I’m
just displaying the control points.
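(The post doesn't include the library code itself, so purely as an illustration of the u-then-v traversal just described, here is a minimal Python sketch with made-up names; it is not the actual Thingiverse code, which is written in OpenSCAD.)
import numpy as np

def bezier_point(ctrl, t):
    # de Casteljau: repeatedly interpolate between adjacent control points.
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_patch_point(grid, u, v):
    # Evaluate each row of the control grid at u, then run one curve
    # across those intermediate points at v -- the "u then v" idea.
    return bezier_point([bezier_point(row, u) for row in grid], v)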
This is a nice setup because placing quads and triangles is a generic thing you want to do in 3D packages. So, it is easily reusable. When it comes time to implement B-Spline, or other curves, this
part will be the foundation.
There are two major challenges remaining. The first has to do with the surface normals. At the moment, I'm getting the math wrong. The normals are calculated correctly only when they're facing in
certain directions; otherwise they're off. I'm sure it has to do with how I've visualized the problem, and where I'm subtracting 90 degrees when I should be doing something else. But, it's a fairly
isolated problem, and once solved, the normals will be correct, and the faces will be absolutely perfect. The other challenge has to do with the faces meeting up at the curve control points. They
don’t always meet up.
The way in which the patches are generated is to take the 3D coordinates of the triangle, subtract the ‘center of gravity’, then use the x,y of those coordinates to create a polygon. that polygon is
then rotated according to the surface normal, and the patch is translated back into place. The problem is, by taking the x,y coordinates, and ignoring the ‘z’, things will be too short if there’s
much of a slope. What really needs to happen is, I need to take the length of the vector from one point to the next, and use those as the lengths of the sides.
So, in the next version, I’ll try to correct those two problems, the normal nonsense, and the lengths of the sides. If I can do that, then I think this little baby will be useable. It will be great
because being able to get a surface generated from a parametric curve mesh will make for some very nice designs indeed.
My further thoughts, having gone through this experience so far, are that adding to OpenScad with such ‘extensions’ is a fairly straightforward task. I thought it might be useful to integrate with
other libraries, or perform ‘exec’ to call out to other programs, but realistically, it’s just not needed. The OpenScad language itself has enough of the generic ‘C’ family of languages to create
various functions. The primary thing you have to get over is the general lack of state management, other than tree-scoped variables. But, I’ve found ways around that by treating OpenScad as a
functional language. Everything is just a function that returns some value.
At any rate, things are moving along nicely.
One Comment on “Mr Bezier’s New Perspective”
1. […] UPDATE: Here’s a blog entry to go with it… williamaadams.wordpress.com/2011/05/19/mr-beziers-new-perspective/ […] | {"url":"http://williamaadams.wordpress.com/2011/05/19/mr-beziers-new-perspective/","timestamp":"2014-04-21T09:47:50Z","content_type":null,"content_length":"56925","record_id":"<urn:uuid:e1b38074-bc4f-48b7-811e-7def4c898427>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00047-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nahant Trigonometry Tutor
Find a Nahant Trigonometry Tutor
...I found the material very intuitive and still remember almost all of it. I've also performed very well in several math competitions in which the problems were primarily of a combinatorial/
discrete variety. I got an A in undergraduate linear algebra.
14 Subjects: including trigonometry, calculus, geometry, GRE
...Algebra 2 skills, including factoring, finding roots, solving sets of equations and classifying functions by their properties, are a necessary foundation for trigonometry, pre-calculus,
calculus and linear algebra. Particularly important are operations with exponents and an understanding of the ...
7 Subjects: including trigonometry, calculus, physics, algebra 1
...I am patient, enthusiastic about learning, and will work very hard with you to achieve your academic goals. Joanna. I have three years' experience tutoring high school students in biology. I have
extensive coursework and research experience in Biology and am passionate about the field.
10 Subjects: including trigonometry, chemistry, geometry, biology
...I have been tutoring undergraduate and graduate students in research labs on MATLAB programming. In addition, I took Algebra, Calculus, Geometry, Probability and Trigonometry courses in high
school, and this knowledge helped me to achieve my goals in research projects involving 4-dimensional ma...
16 Subjects: including trigonometry, calculus, geometry, algebra 1
...Finally you learn about the wide variety of real world situations that can be modeled to predict future outcomes from current data. Calculus is the study of rates of change, and has numerous
and varied applications from business, to physics, to medicine. The complexity of the topics involved, however, requires that your grasp of mathematical concepts and function properties is strong.
23 Subjects: including trigonometry, physics, calculus, statistics | {"url":"http://www.purplemath.com/nahant_trigonometry_tutors.php","timestamp":"2014-04-18T21:58:59Z","content_type":null,"content_length":"24086","record_id":"<urn:uuid:e615b8d9-f18e-405a-80b8-b5bfe6fa09f8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
Design and Analysis of Algorithms
COMP271 Design and Analysis of Algorithms
Spring 2003
┃Syllabus│Lectures (schedule and notes) │Assignments│Question Banks & Tutorials│Marks (Extra Credit)┃
Important Notes: Revised Sunday, May 25, 2003:
Due to the SARS situation and following the HKUST academic-affairs office recommendations we will make the following changes in the COMP271 class this semester:
1. There will be no more face-to-face lectures this semester. That is, all formal classes will be cancelled (unless the SARS situation improves unexpectedly quickly)
2. All Tutorial sessions will also be cancelled (although we will continue posting tutorial assignments for you to practice on).
3. You will be responsible for learning the material yourselves from the notes and reading assignments posted on the class lecture page. See the lecture page for more detailed instructions as to
exactly which material you will be responsible to know and when. The next five lecture notes have already been posted. The remaining ones will be posted soon.
4. The instructor will hold office-hours in his office (room 3559) during the scheduled class hours: tuesday/thursday 9:00-10-20. You are welcome to stop by and ask questions then. You are also
welcome to email questions at anytime to golin@cs.ust.hk or to ask for an appointment outside of office-hours.
5. As previously announced the class grading scheme has been changed as follows: The midterm has been cancelled. There are now five assignments worth 6% of the final grade each (rather than four
worth 5% each). The final exam will now be worth 70% of the final grade.
6. Assignment 2 is due Tuesday April 15, 2003. Assignment 3 is due Thursday, April 24, 2003.
7. There will be a voluntary Question and Answer session on Thursday, April 17, 2003, 9:40AM, in the standard 271 classroom. Nothing will be taught but the instructor will be there to answer any
questions you might have about the readings.
8. 17/04/03: Lecture 13 was just revised and reposted to web (errors corrected on pages 15/16). I also just posted solutions to Assignment 2.
9. 21/04/03: There was an error in the example given in Problem 4 of Assignment 3. A revised version of Assignment 3 with the error corrected has just been posted.
10. 21/04/03: This week's tutorial (on greedy algorithms) was just posted to the Question Bank and Tutorial Page. The problems in this tutorial come from Question Bank 4; QB4 and its solutions were
also just posted to the Question Bank and Tutorial Page. Lecture 18 was just posted on the lecture page.
11. 21/04/03:There will be a voluntary Q&A session on Thur, April 24, 2003, 9:40AM, in the 271 classroom.
12. 23/04/03: Assignment 4 was just posted on the web page. It will be due on May 6, 2003. Also a box has been placed at the front of the computer science department administration office for
collection of assignment 3. By 5 PM, Thursday, April 24, 2003, please either put your assignments in the box or email them to me. As before, make sure that you keep copies of your solutions.
13. There will be a voluntary Q&A session on Tuesday, April 29, 2003, 9:40AM, in the 271 classroom (note that Thursday, 01/05/03, is a public holiday so there will be no Q&A session then). Also,
there will be no tutorial posted this week.
14. Lectures 19 and 20 were just posted to the lecture page. A revised version of Lecture 14 (last page changed) and Lecture 18 (pp. 12 and 25 changed) were also posted. After consideration of the
time remaining in the semester I have decided not to teach the supplementary material on Heuristics and Approximation this year. If you are interested in this topic, please read Chapter 35 of the
CLRS textbook and let me know if you have any questions. Also, Question Bank 5 and selected solutions have just been posted to the Question Bank and Tutorial Page.
15. Due to requests the due-date for Assignment 4 has been delayed one day. It is now due Wednesday, May 7, at 5:00 PM. As usual, there will be a box in the CS department office for submitting your
assignments. Alternatively, you can email them to me at golin@cs.ust.hk or slip them under my office door.
16. There will be NO Q&A session on Tuesday, May 6. If you want to ask questions, I will, however, hold office hours that afternoon, Tuesday May 6, 3-4PM in my office, 3559 (other times by email
17. Thursday, May 8, is a public holiday. If there is interest, though, I can hold a Q&A session or office hours on that day. If you would like this, please send me email.
18. Assignment 5 was just posted to this page. It will be due on May 19, 2003 at 5PM. Solutions to assignments 3 and 4 were also just posted.
19. There will be a voluntary Q&A session on Tuesday, May 13, 2003, 9:40AM, in the 271 classroom. Assignments 2 and 3 will be returned at that time. If you do not come to the Q&A you can pick the
assignments up from my office.
20. There will be a voluntary Q&A session on Thursday, May 15, 2003, 9:40AM, in the 271 classroom.
21. I will hold one review session next week to answer any questions you might have in your exam preparation. The date of the review session will be set depending upon student response to email sent
out (and will be posted soon).
22. Information on the final exam contents can be found below in the exam section of this page.
23. The last review session before the final will be held Tuesday May 20, 11AM-12:20PM, Room 3598. Please note that the time is NOT the normal 271 class time and the room is NOT the normal 271 class
room. The purpose of this review session is to answer any questions you might have so, please prepare in advance for this. Also, you can pick up all of your marked assignments, including
assignment 4, at the review session.
24. The assignment marks and extra credit marks have all been posted on the web. Please check the marks to see that they are correct and let me know ASAP if you think that there is an error in the
record. A zero mark indicates that the indicated assignment was not submitted.
25. Due to student requests, the due date of Assignment 5 has been delayed one day. Assignment 5 is now due at 5PM on Tuesday, May 20.
26. Solution to Assignment 5 has just been posted below.
27. The review session on Tuesday was the last formal session of the semester. If you still have questions please stop by my office to ask me in person (it is better to send email to golin@cs.ust.hk
first to make sure that I will be around when you want to come). Alternatively, especially if you prefer explanation sin Cantonese, Leung Yiu Cho, one of the TAs, will hold office hours, from
2-3PM on Friday, May 23rd and Monday, May 26, in room 4209.
28. The assignment marks have just been updated to include the assignment 5 grades. You can pick up your marked assignment 5 (along with any other assignments you have not yet picked up) from Mr.
Leung Yiu Cho, one of the 271 TAs, during his office hours from 2-3PM on Monday, May 26, in room 4209.
29. I will hold one last office hour from 3-4PM on Monday May 26, to answer any last minute questions you might have. (Please note that you will not be able to pick up your assignments from me during
this hour; as mentioned above, your assignments will be available from the TA between 2-3).
30. Lecture 19 was just slightly revised. The revision was the correction of a typographical error in the first line of the proof on page 33.
31. I was asked by a student whether you would be allowed to bring a review sheet into the exam. The answer is no. You may not bring any materials other than pens and/or pencils into the exam. No
review sheets, calculators or pocket PCs will be allowed.
32. New 02/06/03: Please check out all of the marks of the assignments and grades. I will hold office hours in my room (3559) on Tuesday June 3, 2003 at which you can take a look at your final exams.
Please also note that final course grades will be assigned late afternoon on June 3rd so this will be your only time to ask questions about your exam grades. Finally, if you do plan on
checking out your exam, please download the exam solutions first and take a look at them so you'll have a better idea as to what the solutions are.
Course Overview:
This course presents the fundamental techniques for designing efficient computer algorithms, proving their correctness, and analyzing their running times. General topics include review of
asymptotics, mathematical analysis of algorithms (summations and recurrences), algorithm design techniques (such as divide-and-conquer, dynamic programming, and greedy algorithms), graph algorithms
(minimum spanning trees and shortest paths) and NP-completeness.
Textbook (available at the HKUST bookstore and on reserve in the library)
T. Cormen, C. Leiserson, R. Rivest, C. Stein. Introduction to Algorithms, Second Edition, McGraw Hill and MIT Press, 2001. QA76.6 C662 2001.
References (all available on reserve in the library)
1. Jon Bentley. Programming pearls (2nd Ed). Addison-Wesley, 2000. QA76.6 .B454 2000.
2. Michael R. Garey and David S. Johnson. Computers and intractability : a guide to the theory of NP-completeness. W. H. Freeman, 1979. QA267.7 .G37 1979.
3. Robert Sedgewick. Algorithms in C++ (3rd ed) Volumes 1 and 2. Addison-Wesley, 1998. QA76.73.C153 S38 1998.
4. Juraj Hromkovic. Algorithmics for hard problems: Introduction to combinatorial optimization, randomization, approximation, and heuristics. Springer-Verlag, 2001. QA76.9.A43 H76 2001.
Course Work:
Course work will consist of 5 homework assignments and 2 exams (a midterm and a comprehensive final). Homeworks are to be turned in by the start of class on the due date. No late homeworks will be
accepted. (So turn in what you have by the start of class.) In exceptional circumstances (illness, university business, or religious observances) extensions may be granted. However, all extensions
must be approved by me before the due date.
The primary benefit to working on homeworks is to prepare for the exams; exam questions are often variants of homework problems. For this reason I encourage you to spend time alone thinking about
each problem and your approach in solving it. You are allowed to and encouraged to discuss homework problems with your classmates. However, you must write up the solutions on your own. In particular:
• Homework solutions must be written in your own words (not copied or modified from someone else's write-up).
• You must understand your solution and its derivation. (I may ask you to explain your solution to me.)
• You must acknowledge your collaborators (whether or not they are classmates) or any other outside sources on each homework.
Failing to do any of these will be considered plagiarism, and may result in a failing grade on an assignment or in the course, and notification for appropriate disciplinary action. As an
example of plagiarism, if we find that your solution is copied, without attribution, from something on the web, this would be treated as plagiarism. On the other hand, if you report that you found a
solution to a similar problem on the web, tell where this was, and write down the solution in your own words, this would be fine.
As a courtesy to the graders, homeworks should be written neatly. Poorly written work will not be graded. When writing algorithms be sure not only that your solution is correct, but also that it is
easy for the grader to understand why your solution is correct. Part of your grade will be based not only on correctness, but also on the clarity, simplicity, and elegance of your solution.
Assignments should either be emailed to the instructor at golin@cs.ust.hk or submitted to the computer science department office (room 3528) by 5:00PM of the due date. When submitting to the
department office please specify (and write on your paper) that this is meant for Dr Golin. Please also keep copies of your homeworks so that, if there is a mixup, I can ask you to resubmit it.
│Assignment   │Distributed│Revised │Due Date│        │
│Assignment 1 │20/02/03   │27/02/03│06/03/03│Solution│
│Assignment 2 │18/03/03   │        │15/04/03│Solution│
│Assignment 3 │04/04/03   │21/04/03│24/04/03│Solution│
│Assignment 4 │22/04/03   │        │07/05/03│Solution│
│Assignment 5 │08/05/03   │        │20/05/03│Solution│
Exam Schedule:
│ │Date/Time │Venue │
│Midterm │CANCELLED │ │
│Final Exam│27/05/03 (TUE) 08:30-11:30 (am) │Room LG1031 (new room)│
Old Exams: To assist you in preparing for the final, here are copies of last semester's midterm and final. Please note that the material taught during the first few weeks last semester was slightly different from this semester's, so you did not learn the background to question 2 of the midterm.
1. The final exam will cover ALL material covered in the lecture notes (1-20) presented on the class lecture page.
2. Lectures 18, 19, 20 (NP completeness) will only be worth 10% of the final exam
3. The exam format will be similar to last semester's midterm and final. Please note that the material taught during the first few weeks last semester was slightly different than this semester's so
you did not learn the background to question 2 of the midterm. Please also note that last semester's final was not comprehensive (i.e., did not cover a lot of the material from the first half of
the term). This semester's final will be comprehensive. All material in lectures 1-20 can be on the exam.
Grading (this has been modified due to the class cancellations)
│5 assignments │30% (6% each) │
│Midterm │CANCELLED │
│Final │70% │
No make-ups will be given for midterm and final unless prior approval is granted by the instructor, or you are in unfavorable medical condition with physician's documentation on the day of the
examination. In addition, being absent at the final examination results in automatic failure of the course according to university regulations, unless prior approval is obtained from the department
Lectures: Tue & Thu, 9:00 - 10:20, Room 4502
Tutorials: Section 1A, Fri, 10:00 - 10:50, Room 4475, TA Yeung Siu Yin; Section 1B, Fri, 11:00 - 11:50, Room 4475, TA Leung Yiu Cho
Contacts:
Mordecai Golin - Office 3559, Ext. 6993, golin@cs.ust.hk, office hours: anytime the instructor is in his office, or by appointment
Leung Yiu Cho (TA) - cscho@cs.ust.hk, office hours TBA
Yeung Siu Yin (TA) - siuyin@cs.ust.hk, office hours TBA
Angles of Reflection
Behind all this talk about teacher talk we've been dancing around a central idea that I think is worth stating and thinking about explicitly: less teacher talk is more. I think we've both come to
believe that the animated Charlie Brown shows are pretty much on the money about the students' experience of listening to their teachers.
When I was in DC, the NSF showed me a slide that suggests that, as a learning activity, listening to a clear and correct teacher explanation is even less effective than giving an incorrect
explanation to a computer avatar. And this year, I had the dramatic experience of walking three classes through a solution to a problem they had gotten wrong on a test, only to have virtually every
student get the same problem wrong on a quiz the week later.
The moral is simple: for the most part, time spent talking is time wasted.
Why is teacher talk so ineffective? And why is it hard for us to shut up and let the students do the talking?
Addressing the first question:
1. Listening to anyone talk is boring after a few minutes. How long can you sit quietly and listen to your best friend tell you about the crazy thing that happened on the way home from work? My
voice talking about solving trig equations is certainly no more gripping than that.
2. Most teacher talk starts from where the teacher is and wants to go, not from what the students know and want to learn. Not only is this kind of talk emotionally irrelevant (see point 1), but it
fails to address any underlying preconceptions or misconceptions--which is why a "clear explanation" so rarely is. To make matters worse, we math teachers often couch our teacher talk in
vocabulary that the students only barely understand, so that the words themselves are mystifying.
3. When I'm talking, my students are either ignoring me (point 1) or listening attentively and trying to take notes, but in neither case are they actually doing mathematics, which is the one
activity that I can guarantee will produce learning gains.
None of these problems are particularly surprising, and it's hard for me to imagine that most teachers aren't aware of them. But we yak on. Why?
1. The message that talking is ineffective is counterintuitive. Math teachers are expected to be experts at doing math problems. So it's natural to think that explaining how to do a problem is the
way to put this expertise to work. Students may believe this more deeply than teachers: I've had students this year complain that I no longer go over any test questions, until I draw their
attention to the fact they helped establish, that going over the solution didn't actually result in their learning any math. [For the record, I couched the issue as my problem/fault/
responsibility rather than theirs.]
2. The intuition (that talking actually helps) is bolstered by our own experiences as math students: for the most part, my teachers talked at me, and here I am today. We have to remember that we are
the survivors of this method; but our survival is not proof the method worked. Someone lucky enough to be alive and healthy in London in 1667 would hardly credit the previous two years' outbreak
of the Black Death as the reason for their vitality.
3. Teaching students to learn from their own talk is difficult; in my experience, the weakest students are also the ones who have the most trouble with the message that "the discussion is the
lesson." (I have two hypotheses about this: first, that weaker students are rationally mistrustful of their own abilities, and overextrapolate to "anything I do myself will not help me learn";
second, that the way they got to be weaker students is from being talked at by teachers, so that the weaker students are the ones who've been talked at the most, and who have the least experience
with other ways of learning.)
4. Talking gives the illusion of speed. I can describe how to solve a problem in under three minutes, making time to go on to the next thing.
5. As we've discussed, figuring out what to do besides just explain-and-practice is extremely difficult: it requires planning, real-time data analysis and decisionmaking, and reflection and
revision. So "teacher talk" is an easy default.
But if all the above are reasons for talking, I think there's an additional underlying cause: being the center of attention is fun. Part of being a teacher is having an odd kind of power, and power
is intoxicating. As with so much of the affective parts of teaching, I think it's important that we recognize our own complicity: we talk, in part, because we get a charge out of talking.
And that's enough talk for now. Next post: two of my favorite recent problems.
P.J. is right that we have to plan our questions, but then what happens when something occurs that we did not anticipate? It goes deeper than careful planning, at least in the usual sense of the
phrase. We need to work hard to break our bad habits about questioning. We must develop habits of language and thought that enhance learning. We need to be clear about what our objectives are and
pause before we ask--or answer--a question, and make sure that our response is consistent with the outcomes we desire.
With regard to PJ’s surprise solution, I think the moment in class when a student connected two ideas from totally different places and solved the problem ought to stand as one of the high points of
any teacher’s career. At that moment, PJ’s student achieved a higher level of doing mathematics than merely understanding a well-known theorem. PJ’s student demonstrated, in the presence of the
entire class, what it means to do mathematics. The student’s insight is more important than the theorem that was meant to be taught that day. A meta goal was reached. Exciting moments like that are
rare and can only happen when students are encouraged to approach problems using their own intellect and intuition. If we could make those events happen every day, mathematics would be the most
exciting class for every student in every school. It is important that PJ knew to set aside the lesson at hand and celebrate the insight of one of his students. But how does a teacher ensure that
this sort of exciting moment happens?
I think we can establish an atmosphere in our classroom in which students will be willing to take chances and are not afraid to be wrong. I think we can structure our classes so students will try to
think of clever solutions and will try to make connections. How can we do this? By asking problems that are rich enough that students can get started on them but will not necessarily see the end for
a while, by celebrating many different ways to solve a problem, and by asking questions that encourage students to think for themselves. The first thing is to give them time to work and encourage
collaboration. In other words: instead of asking if everyone understood, how about asking: Did anyone do the problem a different way? Does this solution remind anyone of another problem we have done?
What made that student think of using the Power theorem?
Asking questions should be a means to stimulate further thought. If we want to know what students know, we can ask them to do an interesting problem, and then we can walk around and see if they can
do it. We can observe their errors and let them sort out a solution among themselves. If a question does not take us further along in the investigation at hand, we don’t need to ask it. We can ask
students: what relevant questions arise after this problem has been solved? What is the next question we might ask? Would this solution have worked if the coefficients had been irrational numbers?
What if the point had not been on the circle? Did you need everything that was given to solve the problem?
And often, there should be no question—just a situation; the students’ first task is to determine what questions can be asked and answered.
Teaching my classes this week after reading your post, more than once I found myself staring into the cruel mirror of recognition: as much as I "know" the questions I shouldn't ask, I catch myself
asking them, or almost asking them, more than I care to admit.
(And, for the record, they are terrible questions--so bad, in fact, they're not even worthy of the name, because as you point out, they don't even ask anything. So from now on, I'll call utterances
like "Everybody get that?" or "Are there any questions?" nonquestions, as opposed to genuine questions like "So if that claim is true, what about ... ?" )
So why is that? Why is it so hard to ask genuine questions, and so easy to fall into the trap of asking nonquestions that don't accomplish anything?
Thinking about that this week, I've come up with two basic answers.
First, maintaining good questioning habits is hard. You have to:
• Plan ahead of time what genuine questions you will ask and when you will ask them;
• Either remember those questions or read them from a script;
• As Kathleen suggested, get frequent feedback (from videotape or peer observations) that alerts you to poor questions when they happen, just like you would do to keep from backsliding on any bad
Second, planning good questioning requires acknowledging the scarcity of two essential resources.
The first resource is time: as our friend Tom McDougal says, it's the teacher's most precious resource. If nothing else, the simple asking of a nonquestion and waiting for a nonresponse takes time.
It sucks time out of genuine questions, in part because it creates ambiguity in every question, by forcing students to figure out whether the question is one to which the teacher really expects an
answer, and thereby delaying or dampening students' responses.
The second resource is information: about what students know, think, understand, and can do. A genuine question allows a teacher to garner some information: about one student or, depending on
questioning technique and the response mode (individual whiteboards or clickers, small group discussions, etc.) multiple students. A nonquestion wastes the opportunity to find out what students know,
and as noted above, actually reduces the effectiveness of subsequent genuine questions by introducing ambiguity into the questioning framework.
For me, just reminding myself of the waste when I hear myself asking a nonquestion is good negative reinforcement. But obviously it's not enough, because I keep asking them, not often, but more than
never. Your post and Kathleen's comment remind me that I need to get into others' classrooms more often, and I need to get them into my classroom more often--if only for this one thing.
Everybody see that?
You want me to go over that again?
Did I go too fast for you?
Even without context, we recognize these as teacher phrases. These are things well-meaning teachers routinely say to students in an effort to be encouraging and positive about the lesson at hand.
There are many more phrases such as these; I am sure you can think of some.
One of my mentors, David R. Johnson from Nicolet High School in suburban Milwaukee, wrote an article, “Every Minute Counts,” and a sequel, “Making Minutes Count Even More.” The articles deal with the
nitty-gritty of teaching mathematics. Even the titles embrace an important idea: that good teachers make use of every minute. There is no time to be wasted.
I bring up these articles now because David had considerable insight about these teacher phrases, and I would like to share some of his thoughts with you.
“Everybody see that?” This kind of question is not answerable by a student and teaches them to ignore my questions.
“You want me to go over that again?” They really didn’t want me to go over it the first time.
“Did I go too fast for you? “ No. Faster, faster. Let’s get this done.
“Here’s an easy one.” This comment could be one of the worst things we say. As a student: if I get it right, so what, it was easy; but what if I don’t? Then I know I can’t even do the easy ones.
When I first heard these comments from David, they hit me hard. I recognized the accuracy of his observations. I also recognized these remarks as things I said virtually every day. I tried to change
my habits, but it was hard. Gradually, I realized that these bad habits were symptomatic of a larger problem with my teaching. I was still thinking of myself as the person who was explaining math so
well that it would be clear to everyone. My classes were still teacher-centered.
It took me a long time, and a lot of trial and error to change what was happening in my class so that students were working on authentic problems that taught them important ideas in a coherent way,
while I observed and learned from them how they were thinking and what progress they had made.
The questions that David discusses are all about how well I am doing, not how well my students are doing. I look over these remarks, and it strikes me that all of them assume that it is my job as a
teacher to explain, and it is the student’s job to listen and therefore learn. Those job descriptions highlight what is really wrong with these teacher comments. The answers to these questions, these
comments, are really meant to reassure me that I am doing well in explaining, which is not the point at all. The real question, every minute of every class, is how well are each and every one of my
students doing as they struggle to comprehend the new ideas I have confronted them with today. And the best way for me to measure comprehension is to walk around and listen to what they have to say
to each other and look at what they write. Then it is still not my job to explain to them how math works. It is my job to ask interesting questions and then to direct the discussion students are
having as they try to figure things out.
A coda: I never would have figured all of this out by myself. We need each other: teachers need students, and teachers need other teachers so that we can all contemplate best models from every
angle. I am a pretty good teacher, but only because of the wisdom that has been passed on to me by people like David Johnson. | {"url":"http://anglesofreflection.blogspot.com/2011_02_01_archive.html","timestamp":"2014-04-17T00:47:52Z","content_type":null,"content_length":"125945","record_id":"<urn:uuid:0dc2a04f-1828-4732-a08d-f6f141953b48>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00596-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Solving Matrix Systems with Real, Interval, or Uncertain Elements
This Demonstration solves the system of linear equations given in the pane. Its size can be 2, 3, or 4. Set the elements (the real and uncertain parts) of the matrix and the right-hand side (rhs)
vector for all the rows and columns by using the "value" slider.
The slider intervals for the real and uncertain parts of the uncertain number are set separately; these values are multiplied by $10^n$, where $n$ ranges from -5 to 5. The slider jump is $10^{-m}$, where $m$ ranges from 1 to 6.
Clicking the "nothing" alternative lets you position the row and column and adjust $n$ and $m$ without changing the real part or uncertain part of an entry.
The solution vector is dynamically updated by one of four built-in functions; one of them is preferable while setting the matrix and vector elements. Once the desired matrix and rhs vector are fixed, you can compare the solutions given by all four built-in functions for real, interval, and uncertain numbers.
There is international consensus on the evaluation of standard uncertainty and combined standard uncertainty in measurement and computations [1, 2]. Just as the International System of Units (SI)
has brought coherence to all scientific and technological measurements, the worldwide consensus on uncertainty permits a vast spectrum of results to be readily understood and properly interpreted.
The need of numerous testing laboratories to calculate measurement uncertainty on a large scale is supported by numerous software tools; Rasmussen [3] refers to 10 different software packages. To facilitate the calculation of the combined standard uncertainty, by analogy with complex and interval numbers, Aibe and Mikhailov introduced a new object called an uncertain number [4] and developed an Uncertain Calculus package transforming any functional relationship of uncertain numbers into an uncertain number, assuming that all arguments of the function are independent. The rules for manipulating uncertain numbers in this Demonstration are extracted from the Uncertain Calculus package [4].
This Demonstration solves a system of linear equations $A x = b$, where $x$ is a vector of variables and $A$ and $b$ are a matrix and a vector with real, interval, and uncertain elements. The real and interval numbers are obtained from the uncertain numbers.
The elements of the initialization diagonal matrix and vector are uncertain numbers 1±0. The "size" of the system is limited to four equations in order to show the entire matrix dot multiplied by
the solution vector and the rhs vector. The uncertainty of elements changes only the uncertainty of the solution.
When the matrix has a nonzero determinant, the built-in functions find the unique solution. However, if the determinant is zero (the matrix is singular), there may be either no vector or an infinite number of vectors satisfying the system. Nevertheless, for a singular matrix the built-in functions give solutions that minimize the sum of the squares of all entries in $A X - I$, where $I$ is the identity matrix. The Demonstration lets you compare the uncertainty introduced by all four of these built-in functions.
In the authors' experiments, one of the four functions surprisingly introduces less uncertainty than another, while the remaining functions introduce the highest uncertainty.
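For readers who want to experiment outside Mathematica, here is a rough analogue of the four-way comparison: a minimal sketch in R, for real-valued entries only (it does not propagate the uncertain parts; MASS::ginv stands in for the pseudoinverse):

    # Four ways to "solve" the same real-valued system A x = b.
    library(MASS)  # ginv(): Moore-Penrose pseudoinverse

    A <- matrix(c(2, 1,
                  1, 3), nrow = 2, byrow = TRUE)
    b <- c(1, 2)

    solve(A) %*% b   # explicit inverse, then multiply
    solve(A, b)      # direct solve of A x = b
    qr.solve(A, b)   # least-squares solve via the QR decomposition
    ginv(A) %*% b    # pseudoinverse solution; also defined when A is singular

For a nonsingular matrix all four routes agree; the differences reported above concern how much uncertainty each route introduces, which a real-valued sketch cannot show.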
[1] European Cooperation for Accreditation of Laboratories, Expression of the Uncertainty of Measurement in Calibration, EAL-R2 and EAL-R2-S1, 1997.
[2] International Organization for Standardization, Guide to the Expression of Uncertainty in Measurement, Geneva, Switzerland: ISO, 1995.
[3] S. N. Rasmussen, "Software Tools for the Expression of Uncertainty in Measurement," MetroTrade Workshop on Traceability and Measurement Uncertainty in Testing, Berlin, January 30–31, 2003.
[4] V. Y. Aibe and M. D. Mikhailov, "Uncertainty Calculus in Metrology," Proceedings of ENCIT 2008, 12th Brazilian Congress of Thermal Engineering and Sciences, Belo Horizonte, MG, Brazil, November 10–14, 2008.
Why It’s Impossible to Solve Basketball
A very popular form of analysis is to look at lineups. Basically, can we determine how well players do together and estimate how well they’ll do against an opponent?
Another popular theme, when it comes to basketball, is that the game is very complex and hard to understand. I disagree with this theme. However, if you use the lens of lineup data it turns out this
is true. In fact, I’m here to say if you use lineup data to try and “solve” basketball, you may as well give up.
Counting That High Is Difficult
Lineups are combinations of players. It turns out that combinations grow rapidly. The math I’ll use is the Binomial Coefficient. The Wikipedia article is a fun read but in case you don’t feel like
skimming it, I’ll break it down quickly. If I have a group of players, I can choose a lineup of them. If I don’t care about order (e.g. Kidd, Terry, Marion, Nowitzki, Chandler is the same as Terry,
Kidd, Nowitzki, Marion, Chandler) then we actually have the math to count how many options we have (aren't nerds great?)
Let’s start with the most complicated case. If I had a team of 12 players that could play every position (all LeBrons for instance) then I could choose a lineup of five players from a roster of 12.
This gives us our "worst case" scenario, which is a team with 792 viable lineups (12 choose 5).
“That sounds like a lot”, you say. Luckily, players usually only play a position or two, so it gets easier. Let’s use the 2012 Oklahoma City Thunder as an example. I’ll even simplify it further by
limiting us to three positions – Point Guard, Guard-Forward and Forward-Center. A team has to put out a lineup of a Point Guard, two Guard-Forwards and two Forward-Centers. Using Yahoo Sports and
Popcorn-Machine, here’s how the Oklahoma City Thunder’s finals roster looked last year.
• R. Westbrook (PG)
• D. Fisher (PG)
• R. Ivey (GF)
• L. Hayward (GF)
• D. Cook (GF)
• J. Harden (GF)
• T. Sefolosha (GF)
• K. Durant (GF)
• C. Aldrich (FC)
• N. Collison (FC)
• S. Ibaka (FC)
• K. Perkins (FC)
I have two option at PG, six options at GF and four options at FC. If we do the math on this – For those of you that read the article, that would be: (2 c 1) * (6 c 2) * (4 c 2) – we find we have 180
possible lineups! That’s a huge amount. This grows even further if we start to say things like: Durant can play at the power forward or that Westbrook is really a shooting guard.
What's more, the other team has a huge number of lineups. In the absolute "simplest case" (I got this with 1 PG, 2 GF and 9 FC; understand this means the backcourt would play the whole game) we could get a team with "only" 36 lineups. And each of those lineups could in theory match up with each of the opponent's lineups. This means the number of possible lineup matchups two teams could in theory have is somewhere between roughly 1,300 (36 squared) and 630,000 (792 squared), with around 32,400 (180 squared) being the most likely!
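To make the counting concrete, here is a minimal sketch in R (choose() is R's binomial coefficient; the numbers are the toy rosters above, not real data):

    choose(12, 5)                               # 792: any 5 of 12 interchangeable players
    choose(2, 1) * choose(6, 2) * choose(4, 2)  # 180: OKC with 1 PG, 2 GF and 2 FC
    choose(1, 1) * choose(2, 2) * choose(9, 2)  # 36: the "simplest case" roster
    c(36^2, 180^2, choose(12, 5)^2)             # 1,296 / 32,400 / 627,264 possible matchups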
What can lineup data tell us?
The basic problem we face with lineup data is the sheer number of possibilities. There's another big question: how much information is actually contained in the data we do have? 82 Games kindly tells us the top 20 lineups for each team. If we use 2011 (I picked 2011 over 2012 to avoid lockout arguments) we find most teams still have a lot of their minutes unaccounted for. Here's a quick rundown
of the percentage of team minutes held in the top 20 lineups of each team.
2010-2011 Minutes in top 20 lineups by team. Data via 82games.com, Playoff teams in Bold
Team % Minutes in Top 20 Lineups
L.A. Lakers 80.5%
Chicago 67.0%
OKC 65.8%
Portland 60.6%
Houston 57.7%
Memphis 57.5%
Boston 56.4%
Philadelphia 55.9%
Golden State 54.7%
San Antonio 54.5%
Dallas 54.0%
Phoenix 53.5%
Indiana 52.7%
New Orleans 52.1%
Orlando 51.1%
Miami Heat 50.8%
Utah 49.5%
New York 49.0%
Charlotte 47.6%
L.A. Clippers 47.1%
Atlanta 46.2%
Detroit 43.2%
Minnesota 42.0%
Cleveland 41.5%
Denver 41.4%
Milwaukee 39.5%
Sacramento 39.2%
New Jersey 38.0%
Toronto 37.7%
Washington 33.0%
Barring the Los Angeles Lakers, virtually every team has at least 1/3 of their minutes left unexplained by their top rotations. What's more, most teams are actually closer to 50% when it comes to their minute allocations. Here's a simpler breakdown:
• Average % of team minutes used by top 20 lineups: 50.7%
• Std. Dev % of team minutes used by top 20 lineups: 9.9%
For playoff teams:
• Average % of team minutes used by top 20 lineups: 53.3%
• Std. Dev % of team minutes used by top 20 lineups: 8.8%
It's also worth noting that lineup minutes drop off quickly. Detroit's 20th lineup had the most minutes of any team's 20th lineup, at 50.5 (or around one game's worth). So when looking at team lineups, you only get a small subset of all possible combinations. What's more, you get very limited data on most of these.
Summing Up
We have a large set of possibilities and a very small amount of data to try and explain it. If we go this route when explaining basketball, it’s easy to see why it looks impossible. And in fact, if
this was the only route worth going, I would agree with you.
The good news is the data supports a different notion. Players, regardless of lineup, tend to be pretty consistent. What’s more, good players have a significant edge in the NBA. It turns out that the
NBA’s box score (courtesy of Lee Meade) is a tremendously valuable tool for any team.
Lineup data may seem valuable. As with all data, the real question is what it can tell you. When it comes to lineups and how much we can learn from them, the answer is that they can't tell you that much.
Discussion: geometry.college.independent
A discussion of topics covered in college geometry, problems appropriate for that level, and geometry education.
To subscribe, send email to majordomo@mathforum.org with only the phrase subscribe geometry-college in the body of the message.
To unsubscribe, send email to majordomo@mathforum.org with only the phrase unsubscribe geometry-college in the body of the message.
Posts to this group from the Math Forum do not disseminate to usenet's geometry.college newsgroup. | {"url":"http://mathforum.org/kb/forum.jspa?forumID=125&start=1935","timestamp":"2014-04-20T00:42:53Z","content_type":null,"content_length":"38208","record_id":"<urn:uuid:d4adaee0-a8de-4b36-9684-2c64e33d2fe6>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00537-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fourier and Bessel
Oliver Heaviside, on page 387 of Electrical Papers, Vol. I, Macmillan and Co., 1892, available here, writes
$$v = 1 - \frac{n^2r^2}{2^2} + \frac{n^4r^4}{2^2 4^2} - \frac{n^6r^6}{2^24^26^2} + \ldots = J_0(nr)$$
This function is usually denoted by $J_0(nr)$, and was first employed by Fourier. Whether he invented it or discovered it is a doubtful point; the question is raised whether mathematical truths lie
within the human mind alone, or whether the infinite body of known and unknown mathematics could exist in a dead universe. But this is metaphysics, which is all vanity and vexation of spirit.
Heaviside gives no reference.
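(As a quick numerical check of the series, here is a minimal sketch in R; base R's besselJ serves as the reference, and note that 2^2 * 4^2 * ... * (2k)^2 = 4^k * (k!)^2:)

    n <- 2; r <- 1.3; k <- 0:20
    sum((-1)^k * (n * r)^(2 * k) / (4^k * factorial(k)^2))  # partial sum of the series above
    besselJ(n * r, nu = 0)                                  # agrees to machine precision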
I have two questions:
1. Are there any references in the Fourier work about the symbol $J_0()$ ?
2. What does the letter $J$ stand for?
Any references would be appreciated.
ho.history-overview special-functions reference-request fourier-analysis
According to Cajori's history of mathematical notation, Bessel used the letter I and Hansen changed it to J. If this is true, then it almost certainly has nothing to do with the J in Joseph
Fourier. – Henry Cohn May 18 '12 at 11:44
Incidentally, Dutka's article "On the early history of Bessel functions" (dx.doi.org/10.1007/BF00376544) does not seem to discuss the naming issue, but goes back even further than Fourier (to the
Bernoullis and Euler). – Henry Cohn May 18 '12 at 11:46
1 Answer
There is a fundamental reference using Bessel functions in Fourier's works. This is "Théorie analytique de la chaleur", first published in 1822. You will find this series first given in chapter VI, page 370. This chapter is about the propagation of heat in a cylinder ("of course", let me add).
The modern nomenclature was invented by Bessel himself in 1824, just two years after Fourier's work. This is proved in F. Bessel, "Untersuchung des Theils der planetarischen Störungen", Berlin Abhandlungen (1824). Here the functions I and J get their names.
Of course, Jon! Thanks! Unfortunately, the symbol $J_0()$ is not used in "Théorie" ... – Papiro May 19 '12 at 21:58
You are right but I will fix it. – Jon May 20 '12 at 9:20
+1 on the "of course" (of course). – Emilio Pisanty May 20 '12 at 12:29
Actually, the 1824 Bessel paper does not quite use the modern notation. Instead, it uses $I$ where we use $J$. For example, the table on page 46 shows $J_0(k)$ and $J_1(k)$ in
modern notation. – Henry Cohn May 20 '12 at 14:53
@Henry: Nice comment. But the first use is surely there. – Jon May 20 '12 at 15:52
Find a Hingham, MA Algebra 2 Tutor
...After that, I'll show you how to find out and apply the rules for yourself. I was a modern European History major at Harvard, graduating magna cum laude in that field. I wrote an undergraduate
thesis involving original research that was much admired.
55 Subjects: including algebra 2, English, reading, algebra 1
...I usually tutor students at their homes, but I am willing to work at another location (public library, coffee shop, etc.), if preferred. Please contact me for more details, past results, and
references. Looking forward to working with you!
41 Subjects: including algebra 2, reading, Spanish, chemistry
I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor
elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math.I am a chemistry major at Boston College.
13 Subjects: including algebra 2, chemistry, calculus, geometry
...I have a strong background in Math, Science, and Computer Science. I currently work as software developer at IBM. When it comes to tutoring, I prefer to help students with homework problems or
review sheets that they have been assigned.
17 Subjects: including algebra 2, statistics, geometry, economics
...During our sessions and the attentiveness of the student, I also believe in engaging in a certain amount of conversation with the student that can make our sessions feel more like getting help
from a friend rather than just learning math. Regardless of a student's current situation, there is a b...
13 Subjects: including algebra 2, calculus, geometry, GRE | {"url":"http://www.purplemath.com/Hingham_MA_Algebra_2_tutors.php","timestamp":"2014-04-17T22:11:39Z","content_type":null,"content_length":"23946","record_id":"<urn:uuid:b4f5a513-e51d-4a32-a5ff-5d18b5818849>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00225-ip-10-147-4-33.ec2.internal.warc.gz"} |
Estimates of the Future Behavior of Asset Prices
We can observe the current prices of assets (commodities like gold or wheat, or financial assets like stocks) from transactions that are taking place every instant in markets all over the world. But
the public and policymakers often need to make decisions that depend on what’s going to happen to asset prices in the future. For a large number of assets, we use prices from option markets to
estimate the chance or probability of future changes in that asset’s price. Thus, on this web page we provide estimates of the probability of a 20% increase in the S&P 500 over the coming year, or
the probability of a 20% fall in the dollar value of the euro over the next six months.
To be more precise, we provide estimates of what economists call risk-neutral probabilities. We do so because the Federal Reserve Bank of Minneapolis has concluded that the economic policymakers will
typically find risk-neutral probabilities useful in their decision-making. Most importantly, the risk-neutral probability accounts for how valuable resources will be in the future relative to today.
For example, suppose we find that the risk-neutral probability of a 20% fall in real estate prices is larger than the risk-neutral probability of a 20% increase in real estate prices. We can conclude
that market participants’ current valuation of resources in the former “large decline” case is higher than their current valuation of resources in the latter “large increase” case. Policymakers can
best compare current economic costs against future economic benefits (or vice versa) if they make use of this kind of information about the current valuation of future resources.
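To illustrate how such estimates can be extracted from option prices, here is a minimal sketch in R of the Breeden-Litzenberger idea (one standard approach, not necessarily the exact method used for the estimates on this page; the toy Black-Scholes prices stand in for observed market quotes):

    # Risk-neutral probability of a 20% rise, from the slope of the call-price curve.
    # Since C(K) = exp(-r*T) * E[(S_T - K)^+], we have dC/dK = -exp(-r*T) * P(S_T > K).
    bs_call <- function(S, K, r, tt, sigma) {
      d1 <- (log(S / K) + (r + sigma^2 / 2) * tt) / (sigma * sqrt(tt))
      d2 <- d1 - sigma * sqrt(tt)
      S * pnorm(d1) - K * exp(-r * tt) * pnorm(d2)
    }

    S0 <- 100; r <- 0.02; tt <- 1; sigma <- 0.25   # hypothetical market inputs
    K  <- 1.2 * S0                                 # a 20% rise over the coming year
    h  <- 0.01                                     # finite-difference step in the strike

    dCdK <- (bs_call(S0, K + h, r, tt, sigma) - bs_call(S0, K - h, r, tt, sigma)) / (2 * h)
    -exp(r * tt) * dCdK                            # risk-neutral P(S_T > 1.2 * S0)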
To automatically receive updates on future asset value analysis from The Federal Reserve Bank of Minneapolis, please contact us at option-report-feedback@mpls.frb.org. | {"url":"http://www.minneapolisfed.org/banking/rnpd/","timestamp":"2014-04-24T00:48:36Z","content_type":null,"content_length":"71199","record_id":"<urn:uuid:07127f36-b986-4609-b9af-399b8ea55288>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00187-ip-10-147-4-33.ec2.internal.warc.gz"} |
[R] How to calculate the robust standard error of the dependent variable
Joris Meys jorismeys at gmail.com
Sat Jun 19 03:01:02 CEST 2010
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
On Fri, Jun 18, 2010 at 11:19 PM, YI LIU <liuyi.feier at gmail.com> wrote:
> Hi, folks
> linmod=y~x+z
> summary(linmod)
Which package? R is not matlab...
> The summary of linmod shows the standard error of the coefficients. How can
> we get the sd of y and the robust standard errors in R?
But I guess that's not your question.
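For the record, here is a minimal sketch of one common approach, assuming the
model was fitted with lm() and that the 'sandwich' and 'lmtest' packages are
installed (the toy data stand in for your y, x and z):

    library(sandwich)
    library(lmtest)

    set.seed(1)
    x <- rnorm(100); z <- rnorm(100)
    y <- 1 + 2 * x - z + rnorm(100, sd = 0.5 + abs(x))   # heteroskedastic noise

    linmod <- lm(y ~ x + z)

    sd(y)                                                  # sample sd of the response
    coeftest(linmod, vcov = vcovHC(linmod, type = "HC1"))  # White/HC1 robust SEs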
Joris Meys
Statistical consultant
Ghent University
Faculty of Bioscience Engineering
Department of Applied mathematics, biometrics and process control
tel : +32 9 264 59 87
Joris.Meys at Ugent.be
Disclaimer : http://helpdesk.ugent.be/e-maildisclaimer.php
Arbitrarily small positive lower semicontinuous functions
This question is a generalization of the question posed on this page to lower semicontinuous functions, so let me describe the question in the following way.
Def: Let $(X,\tau)$ be a Tychonoff topological space. We say that this space has arbitrarily small lower semicontinuous functions if the following statement is true for it:
Statement: For each $x\in X$ consider an arbitrary positive real number $\epsilon_x>0$. Then there exists a lower semicontinuous real-valued function $f:X\rightarrow \mathbb{R}$ with the following property: $$\forall x \in X: \quad 0< f(x) < \epsilon_x$$
Question: Can we characterize the spaces which have the above property?
real-analysis gn.general-topology set-theory
1 Answer
Since we are dealing with upper and lower semicontinuous functions instead of continuous functions, it is fruitful to consider all topological spaces instead of just completely regular
spaces. The following theorem characterizes all such spaces, and $2\rightarrow 1$ incorporates François Dorais's idea in his answer to the previous question. As before, the answer involves
the first measurable cardinal, and all such $T_{1}$-spaces are discrete if there does not exist a measurable cardinal. Since a space has arbitrarily small lower semicontinuous functions if
and only if it has arbitrarily large upper semicontinuous functions, it suffices to characterize the spaces with arbitrarily large upper semicontinuous functions.
Theorem: Let $X$ be a topological space. Then the following are equivalent.
1. For every $x\in X$, the neighborhood filter $\mathcal{N}(x)$ of $x$ is the intersection of finitely many $\sigma$-complete ultrafilters.
2. For every countable partition $P$ of $X$ there is an open cover $\mathcal{U}$ of $X$ such that for each $U\in\mathcal{U}$ there are $A_{1},...,A_{n}\in P$ with $U\subseteq A_{1}\cup...\
cup A_{n}$.
3. For every countable partition $P$ of $X$ there is a countable open cover $\mathcal{U}$ such that for each $U\in\mathcal{U}$ there are $A_{1},...,A_{n}\in P$ with $U\subseteq A_{1}\
cup...\cup A_{n}$.
4. If $\epsilon_{x}\in\mathbb{R}$ for all $x\in X$, then there is an upper semicontinuous function $f:X\rightarrow\mathbb{R}$ with $f(x)\geq\epsilon_{x}$ for $x\in X$.
5. If $n_{x}\in\mathbb{N}$ for all $x\in X$, then there is an upper semicontinuous function $f:X\rightarrow\mathbb{N}$ with $f(x)\geq n_{x}$ for $x\in X$.
$1\rightarrow 2$. Assume that every neighborhood filter $\mathcal{N}(x)$ is the intersection of finitely many $\sigma$-complete ultrafilters. Let $x\in X$ and assume that $\mathcal{N}(x)=\
mathcal{M}_{1}\cap...\cap\mathcal{M}_{n}$ where $\mathcal{M}_{1},...,\mathcal{M}_{n}$ are $\sigma$-complete ultrafilters on $X$. Let $P$ be a countable partition of $X$. Then there are $R_
{1},...,R_{n}\in P$ where $R_{i}\in\mathcal{M}_{i}$ for $1\leq i\leq n$, so $R_{1}\cup...\cup R_{n}\in\mathcal{M}_{1}\cap...\cap\mathcal{M}_{n} =\mathcal{N}(x)$, so if $U_{x}=(R_{1}\cup...\
cup R_{n})^{\circ}$, then $\{U_{x}|x\in X\}$ is the required open cover of $X$.
$2\rightarrow 5$ Assume $n_{x}\in\mathbb{N}$ for $x\in X$, and let $A_{n}=\{x\in X|n_{x}=n\}$. Then $\{A_{n}|n\in\mathbb{N}\}$ is a countable partition of $X$. Define a function $f:X\
rightarrow\mathbb{N}$ where if $x\in X$, then $f(x)$ is the smallest natural number such that $x\in(A_{1}\cup...\cup A_{f(x)})^{\circ}$. Then one can see that $f(y)\leq f(x)$ whenever $y\in
(A_{1}\cup...\cup A_{f(x)})^{\circ}$. Therefore the function $f$ is upper semicontinuous. Furthermore, we must have $f(x)\geq n_{x}$. Therefore, $f$ is the required function.
$5\rightarrow 4$. This is trivial.
$4\rightarrow 3$. Assume that $P=\{A_{n}|n\in\mathbb{N}\}$ is a countable partition of $X$. Let $n_{x}=n$ whenever $x\in A_{n}$. Then there is an upper semicontinuous function $f:X\
rightarrow\mathbb{R}$ with $f(x)\geq n_{x}$ for all $x\in X$. Then $\{f^{-1}(-\infty,n)|n\in\mathbb{N}\}$ is a countable open cover of $X$. However, if $n\in\mathbb{N}$ and $x\in f^{-1}(-\
infty,n)$, then $n_{x}\leq f(x)<n$, so $x\in A_{n_{x}}\subseteq A_{0}\cup...\cup A_{n-1}$. Therefore $f^{-1}(-\infty,n)\subseteq A_{0}\cup...\cup A_{n-1}$.
$3\rightarrow 2$. This is trivial.
$2\rightarrow 1$. Let $x\in X$. We claim that the neighborhood filter $\mathcal{N}(x)$ is $\sigma$-complete. Assume that $\{A_{n}|n\in\mathbb{N}\}$ is a descending sequence of neighborhoods of $x$. Let $P=\{A_{0}^{c},A_{0}\setminus A_{1},...,A_{n}\setminus A_{n+1},...,\bigcap_{n}A_{n}\}$. Then since $P$ is a partition of $X$, there is some $N$ where $A_{0}^{c}\cup(A_{0}\setminus A_{1})\cup...\cup(A_{N-1}\setminus A_{N})\cup \bigcap_{n}A_{n}\in\mathcal{N}(x)$. Therefore $A_{0}^{c}\cup(A_{0}\setminus A_{1})\cup...\cup(A_{N-1}\setminus A_{N})\cup \bigcap_{n}A_{n}=A_{N}^{c}\cup\bigcap_{n}A_{n}\in\mathcal{N}(x)$, so $A_{N}\cap(A_{N}^{c}\cup\bigcap_{n}A_{n})=\bigcap_{n}A_{n}\in\mathcal{N}(x)$ as well. Therefore $\mathcal{N}(x)$ is $\sigma$-complete. We
now claim that the Boolean algebra $P(X)/\mathcal{N}(x)$ is finite. Otherwise, there is a countable partition $\{a_{n}|n\in\mathbb{N}\}$ of $P(X)/\mathcal{N}(x)$. However, one can easily
show that this implies there is a countable partition $\{A_{n}|n\in\mathbb{N}\}$ of $P(X)$ where $a_{n}=A_{n}/\mathcal{N}(x)$ for all $n$. However, for all natural numbers $n$, we have $(A_
{1}\cup...\cup A_{n})/\mathcal{N}(x)=a_{1}\vee...\vee a_{n}\neq 1$, so $A_{1}\cup...\cup A_{n}\not\in\mathcal{N}(x)$. This is a contradiction. Therefore, $P(X)/\mathcal{N}(x)$ is finite, so
$\mathcal{N}(x)$ is the intersection of finitely many $\sigma$-complete ultrafilters.
If there does not exist a measurable cardinal, then these spaces are fairly trivial. In fact, these spaces are essentially the pre-ordered sets $X$ where if $x\in X$, then $\{y\in X|y\leq x
\}$ is finite. The $T_{1}$ such spaces are the partially ordered sets $X$ where if $x\in X$, then $\{y\in X|y\leq x\}$ is finite. If $X$ is a preordered set, then the collection of lower
sets forms a topology.
On the other hand, if we assume that there is a measurable cardinal, then these spaces are much more interesting. Therefore in the remainder of this discussion, assume that there is a least
measurable cardinal $\mu$. If a space has arbitrarily small lower semicontinuous functions, then the intersection of less than $\mu$ open sets is open. In particular, every regular space
with arbitrarily large upper semicontinuous functions is a $P$-space. Thus, every regular space with arbitrarily large upper semicontinuous functions is zero-dimensional.
If $\mathcal{U}_{x}$ is a $\sigma$-complete ultrafilter on $X$ for each $x\in X$ and $\mathcal{U}$ is also a $\sigma$-complete ultrafilter on $X$, then let $\sum_{x\in X}^{\mathcal{U}}\mathcal{U}_{x}$ be the subset of $P(X)$ where $R\in\sum_{x\in X}^{\mathcal{U}}\mathcal{U}_{x}$ if and only if $\{x\in X|R\in\mathcal{U}_{x}\}\in\mathcal{U}$. Clearly $\sum_{x\in X}^{\mathcal{U}}\mathcal{U}_{x}$ is a $\sigma$-complete ultrafilter on $X$.
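For intuition, here is a quick sanity check with principal ultrafilters: if each $\mathcal{U}_{x}$ is the principal ultrafilter at $f(x)$ for some function $f:X\rightarrow X$, and $\mathcal{U}$ is the principal ultrafilter at a point $x_{0}$, then $R\in\sum_{x\in X}^{\mathcal{U}}\mathcal{U}_{x}$ if and only if $f(x_{0})\in R$, so the sum is simply the principal ultrafilter at $f(x_{0})$.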
We can even describe spaces with arbitrarily large upper semicontinuous functions combinatorially in terms of the convergent ultrafilters as follows.
Now assume that $X$ is a set and $M_{x}$ is a collection of finitely many $\sigma$-complete ultrafilters on the set $X$ for each $x\in X$. Then call the system $(M_{x})_{x\in X}$ additive
1. each $M_{x}$ contains the principal ultrafilter $\{R\subseteq X|x\in R\}$ and
2. If $\mathcal{U}\in M_{x}$ and $\mathcal{U}_{y}\in M_{y}$ for $y\in Y$, then $\sum_{y\in X}^{\mathcal{U}}\mathcal{U}_{y}\in M_{x}$ as well.
It turns out that if $X$ is a set and $M_{x}$ is a collection of finitely many $\sigma$-complete ultrafilters on $X$ for each $x\in X$, then the system $(M_{x})_{x\in X}$ is additive if and
only if there is a topology on the set $X$ such that $\mathcal{N}(x)=\bigcap M_{x}$ for each $x\in X$.
We conclude that one may consider topological spaces with arbitrarily small lower semicontinuous functions as additive systems $(M_{x})_{x\in X}$ where each $M_{x}$ is a finite set of $\
sigma$-complete ultrafilters on $X$. Furthermore, the Hausdorff spaces with arbitrarily small lower semicontinuous functions correspond to the additive systems $(M_{x})_{x\in X}$ where $M_
{x}\cap M_{y}=\emptyset$ whenever $x,y$ are distinct points in $X$.
Another way of restating the OP property is "any countable point-finite covering of $X$ is locally finite" (more or less 2) – Pietro Majer Oct 8 '12 at 9:35
Thanksgiving Special: D-Wave at MIT
Some people think I have a vendetta against D-Wave Systems and its questionable quantum computer claims (see here, here, here, here, here, here, here, here, here for context). But actually, nothing
could be further from the truth. I keep trying and trying to change the subject! Wouldn’t you all rather hear about Wolfram, I say? Or unparadoxes? Or my #1 topic du jour, nothing whatsoever?
Apparently you wouldn’t. From my inbox to my comments section to the hallway, the masses have spoken, and what they want to know is: did I attend D-Wave’s presentation at MIT on Monday, and if so
what did I think?
Yes, I attended, in body though not in mind. You see, Monday was also the day of the STOC deadline, so if our guests from D-Wave (Mohammad Amin and Andrew Berkley) were expecting a ferocious skeptic,
they instead got a bleary-eyed zombie with visions of MA[EXP], P/poly, and 7:59PM EST cavorting in his head.
This meant that Ed Farhi, Isaac Chuang, Peter Shor, and Lorenza Viola had to do most of the questioning. As it turned out, they did a vastly better job than I could have.
As others have pointed out in stronger terms, I’m not a physicist. (On the other hand, the gentleman linked to in the previous sentence is not correct about my being paid by the NSA to discredit
Canadian quantum computing efforts: it’s actually the GCHQ and the Mossad.) As such, I can’t directly evaluate D-Wave’s central claim to have built an adiabatic quantum computer, nor have I ever
tried to do so. All I can do is point out the many things D-Wave has said to the press (about NP-complete problems, for example) that I know are false, its history of making dramatic announcements
without evidence, and its contemptuous attitude toward scientists who have asked for such evidence. For me, that’s more than enough to destroy D-Wave’s credibility on the claims I can’t directly
evaluate. After all, the burden of proof is not on me; it’s on them.
However, other people have not been satisfied with this line of argument. “We don’t care who the burden the proof is on,” they say. “We just care whether D-Wave built an adiabatic quantum computer.”
But my physicist colleagues don’t suffer from the same argumentative limitations that I do. At the group meeting preceding the talk, Farhi announced that he didn’t care what the press releases said,
nor did he want to discuss what problems quantum computers can solve (since we academics can figure that out ourselves). Instead he wanted to focus on a single question: is D-Wave’s device a quantum
computer or not?
What followed was probably the most intense grilling of an invited speaker I’ve ever seen.
It quickly emerged that D-Wave wants to run a coherent quantum computation for microseconds, even though each of their superconducting qubits will have completely decohered within nanoseconds. Farhi
had to ask Amin to repeat this several times, to make sure he’d gotten it right.
Amin’s claim was that what looks like total decoherence in the computational basis is irrelevant — since for adiabatic quantum computation, all that matters is what happens in the basis of energy
eigenstates. In particular, Amin claimed to have numerical simulations showing that, if the temperature is smaller than the spectral gap, then one can do adiabatic quantum computation even if the
conventional coherence times (the t[1] and t[2]) would manifestly seem to prohibit it.
The physicists questioned Amin relentlessly on this one claim. I think it’s fair to say that they emerged curious but severely skeptical, not at all convinced by the calculations Amin provided, and
determined to study the issue for themselves.
In other words, this was science as it should be. In contrast to their bosses, Amin and Berkley made a genuine effort to answer questions. They basically admitted that D-Wave’s press releases were
litanies of hype and exaggeration, but nevertheless thought they had a promising path to a quantum computer. On several occasions, they seemed to be struggling to give an honest answer that would
still uphold the company line.
Two other highlights:
• I asked Amin and Berkley whether they could give any evidence for any sort of speedup over classical simulated annealing. They laughed at this. "It's sixteen qubits!" they said. "Of course you're not going to see a scaling effect with sixteen qubits." I said I understood perfectly well (though I wondered silently whether the dozens of journalists covering D-Wave's demo understood the same). But, I continued, surely you should be able to see a scaling effect by the end of 2008, when your business plan calls for 1024 qubits? "Well, that's what it says in the press release," they said.
Forget about the press release, Farhi interjected. How many qubits are you actually going to make?
Amin and Berkley shrugged; they said they’d just try to make as many qubits as they could.
• Even though it hadn't exhibited any sort of speedup, Amin and Berkley steadfastly maintained that their 16-qubit device was indeed a quantum computer. Their evidence was that simulations of its behavior that took quantum mechanics into account gave, they said, a better fit to the data than simulations that didn't. On the other hand, they said they were not able to test directly for the presence of any quantum effect such as entanglement. (They agreed that entanglement was a non-negotiable requirement for quantum computing.) There was a Feynmanesque moment, when Ike Chuang asked Amin and Berkley an experimental question so simple even I understood it. Ike said: if you're indeed seeing quantum effects, then by running your computer at higher and higher temperatures, at some point you should see a transition to classical behavior. Have you tried this simple control experiment? Amin and Berkley said that they hadn't, but that it sounded like a good idea.
For a theorist like me — accustomed to talks ending with “if there are no questions, then let’s thank the speaker again” — this was exciting, heady stuff. And when it was over, I still had almost
three hours until the STOC deadline.
Greg Kuperberg Says:
Comment #1 November 22nd, 2007 at 12:31 pm
(though I wondered silently whether the dozens of journalists covering D-Wave’s demo understood the same)
The theme of this post has been to ignore the dubious hype from Geordie Rose et al and concentrate on the physics. That is a reasonable perspective, if not the only reasonable perspective in my
opinion. But then this parenthetical contradicts it. There is absolutely no need to wonder on this point. The journalists at that “demo” thought that they saw a demonstration of that 16-qubit device solving a Sudoku, among other feats. We have seen no evidence that that actually happened, or that it’s even possible; that is, no evidence that it’s possible for any 16-qubit device to solve, or help solve, a Sudoku in any meaningful way. So at that point, why bother wondering what the journalists understood? I really thought that you wanted to take such questions off of the table.
Their evidence was that simulations of its behavior that took quantum mechanics into account gave, they said, a better fit to the data than simulations that didn’t.
So therefore a hydrogen atom is a quantum computer for the same reason.
Scott Says:
Comment #2 November 22nd, 2007 at 2:11 pm
Greg, the reason that parenthetical remark is parenthetical is precisely that it contradicts Ed Farhi’s rule.
Incidentally, there’s one incident I forgot to relate. Just for your sake, I asked Amin and Berkley how D-Wave encoded a Sudoku puzzle into 16 qubits. They laughed and said they had no idea, that
they weren’t involved with that. There was mention of a parody newspaper Caltech students distributed around MIT, which contained Sudoku puzzles with all but one of the squares filled in. Then Ed got
angry at the digression into something so “irrelevant,” and we moved on.
What I’m finding, more and more, is that the arguments you and I find persuasive are simply not persuasive to most people (even though they should be). It would be as if a company claimed it had an algorithm that could compress a random string, you and I pointed out that 2^n > 2^n − 1 (there are more n-bit strings than there are strings shorter than n bits, so no compressor can shrink them all), and everyone dismissed that as an irrelevant piece of theory. All they want to know is: how well does the
compression algorithm work on my data? And you try to explain to them that if a company could be so egregiously, howlingly wrong about such a fundamental point, then there’s little reason to expect
their algorithm will do anything useful for anyone. And they don’t buy it. “Of course that’s what an academic would say!”
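For anyone who wants the counting argument spelled out, here’s a throwaway Python check (nothing deep, just the pigeonhole principle):

    n = 12
    inputs  = 2 ** n                             # binary strings of length n
    outputs = sum(2 ** k for k in range(n))      # binary strings shorter than n
    print(inputs, outputs)                       # 4096 4095
    # 2^n > 2^n - 1: there is no injective map from length-n strings to
    # strictly shorter strings, so any "compressor" that shortens every
    # input must send two different inputs to the same output.

And the inequality stays true no matter whose data you run the algorithm on.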
Such exchanges are, of course, incredibly frustrating to us — because they reveal that most people don’t inhabit a world where the truth or falsehood of abstract propositions actually matters,
actually has any sort of teeth. They don’t understand that you can refute a complicated theory by attacking at its weakest point; they think you have to understand every detail first.
Now you might ask: if people don’t understand such basic things, then why will they accept an argument based on decoherence and spectral gaps, rather than on the number of bits needed to encode a
Sudoku? I don’t know. I just don’t know. But empirically, the former seems to have more effect.
Matt Says:
Comment #3 November 22nd, 2007 at 2:18 pm
Hi Scott,
I’m an undergrad studying CS and pure math, and an avid reader of your blog. Could you make a post, or a brief reply to this comment, addressing the question: “What is the best way for undergraduates
to become involved with quantum computation/information theory?”
Is the best way to focus our time on complexity theory and wait for graduate school to learn about the quantum side of things? Should we parallel our computer science studies with physics? What books
should we be reading (I’ve started Griffith’s book)? What areas should we be focusing on? Any suggestions would be much appreciated.
Scott Says:
Comment #4 November 22nd, 2007 at 2:41 pm
Matt: It all depends on what you’re interested in! To do original research in quantum algorithms and complexity, you don’t need to know anything whatsoever about physics. On the other hand, knowledge
of physics is sometimes helpful in suggesting new ideas (as we saw with the quantum adiabatic algorithm, and the recent Farhi-Goldstone-Gutmann NAND-tree algorithm). It’s also extremely helpful for
understanding what your physicist colleagues are talking about.
Personally, I came entirely from the math/CS side. Quantum computing for me was just another model of computation — one with complex numbers called “amplitudes” instead of probabilities — that had
the additional attractions of (1) being relatively new and unexplored, and (2) having something to do with the deepest workings of the universe. It was only after I’d spent a couple years working on
quantum query complexity, lower bounds, etc. that I started learning anything about the physics. What I found, then, was that my knowledge of quantum information was like a secret decoder ring for
understanding what the physicists were telling me (Hamiltonian = instantaneous unitary, Feynman path integral = the BQP⊆PP simulation, etc.), and that without that computer science frame on which to
hang things I’d be completely lost.
So: if you like CS, take CS courses! If you like physics, take physics courses! With the obvious exception of your graduation requirements, don’t take anything just because someone tells you you should.
As for books, I like Nielsen & Chuang and Preskill’s lecture notes. There might be conventional QM textbooks from which one can learn a great deal, but I don’t know, as I haven’t read them.
Job Says:
Comment #5 November 22nd, 2007 at 2:45 pm
In D-Wave’s defense, in theory you could encode any given sudoku puzzle into a single bit, one state for the unsolved version and one for the solved version. As long as a peripheral such as a
graphics card can decode this signal…
Given that they have 16 qubits, their QC is able to solve 32768 sudoku instances. Imagine how many it’ll be able to solve with 1024 qubits!
Greg Kuperberg Says:
Comment #6 November 22nd, 2007 at 4:21 pm
They don’t understand that you can refute a complicated theory by attacking at its weakest point; they think you have to understand every detail first.
In all fairness, I think that Farhi understands that principle perfectly well. The problem is that his notion of the weakest point depends on his expertise rather than ours. So it would be fine if he said that your objections, or mine, could well be important, but that he wants to see the physics instead. He knows a lot and he is perfectly entitled to his own reasons to be upset or happy with
D-wave. But I still think that an explanation of the D-wave Sudoku demo is important, because I want to know if D-wave is honest. If Farhi feels that that is irrelevant, not only for him but in
general, then I think that that position is too impatient.
But I also thought that you had the same sentiment as Farhi about the Sudoku demo, namely that their science is more important than their honesty. Or at least that you care far more about the
former than the latter. Even so, thanks for asking them — it’s interesting that they know nothing about the Sudoku demo. You would think that someone at the company does!
Stas Says:
Comment #7 November 22nd, 2007 at 4:35 pm
Scott, the reason that people ask you and not other academics for an opinion on D-Wave is very simple: you are just a much better writer/communicator. And you would do a great service to humanity if
you help expose the truth about D-Wave to general public, including their investors. If D-Wave turns out to be a scam, it may have a long-term negative effect on funding of legitimate projects in
quantum computing.
Ben Toner Says:
Comment #8 November 22nd, 2007 at 5:46 pm
Hi Scott,
What are your results about MA_EXP?
Graham Hedgerow Says:
Comment #9 November 22nd, 2007 at 6:14 pm
Tales from the Crypt:
Beware. I lost my job at Dirac Q-Systems Ltd. in Tunbridge Wells when e-mail to my bonny girl became quantumly entangled with e-mail to my wife. My Mrs. wouldn’t hear my explanation involving
parallel universes.
Anon Says:
Comment #10 November 22nd, 2007 at 7:00 pm
It seems perfectly reasonable to me that the physicists at Dwave don’t know anything about how the algorithms are implemented. Personally if I were them I’d be curious about how it all works, but I
can also believe that they’re not that interested in how the algorithms people cook up a demo that the public can understand (in some superficial sense of the word).
The person who heads Dwave’s algorithms effort is Bill Macready (see: http://www.zoominfo.com/Search/PersonDetail.aspx?PersonID=3072102&QueryID=5b8f7d5a-cf09-4942-a231-21a62da6ff0a). He will be able
to answer any questions on how Sudoku/image matching/etc. was “encoded” into the 16/28-qubit machines.
The one detail I have on the image matching is this: the 28-qubit machine allegedly solves the maximum common subgraph problem for graphs that can be encoded using 28 bits. How then does Dwave
“solve” the image feature matching problem using just 28 bits, for images that are large and have many features (such as those that Dwave used in their SC demo)? Apparently they “cheat” and break the
overall problem into many small maximum common subgraph problems (of a size that can be encoded in 28 bits). Each small MCS problem is “solved” on the QC, and then the solutions are somehow combined
classically. I suspect there is a lot more classical work going on than quantum work!
This is entirely believable, although it is misleading to the public to tell them that the quantum computer has solved the image matching problem on some large image, when in fact the QC has only
been used to solve a really small part of it.
However, I find this to be a small issue in comparison with the question: does the 28-qubit Dwave quantum computer actually quantum compute the solution to MCS, or is it just classical annealing?
John Sidles Says:
Comment #11 November 22nd, 2007 at 7:34 pm
Matt asks: “What is the best way for undergraduates to become involved with quantum computation/information theory?”
Scott suggests: “As for books, I like Nielsen&Chuang and Preskill’s lecture notes.”
That is a fine recommendation IMHO.
To my knowledge, these two textbooks are unique in treating measurement before dynamics. Which is natural from a QIT point of view (because dynamics is a special case of measurement, while
measurement is not a special case of dynamics).
A further suggestion would be to look at Joe Harris’ textbook Algebraic Geometry along with a physics-oriented textbook on differential geometry like Martin’s Manifold Theory.
The rationale being (1) pretty much any technical field has a natural formulation in terms of geometry, and therefore (2) hypotheses like P!=NP will someday (soon?) be understood as geometric statements.
So you might as well learn some modern geometric lingo! :)
Robin Blume-Kohout Says:
Comment #12 November 22nd, 2007 at 7:46 pm
IQC got the 30-minute version (+15 minutes of grilling) today, so I’m very glad that you posted this digest of your longer experience. In the interest of continued grilling, some questions for you:
1. You said Farhi wanted to address “Is it a QC” but not “What can a QC do?” How the %&*! do you do that? Especially in contexts like AQC, is there some way to define a QC that isn’t operational?
2a. Your speakers agreed that entanglement is non-negotiable — but isn’t the necessity of entanglement for speedup still a conjecture? especially in AQC?
2b. Are the statements “Total decoherence in the computational basis is irrelevant” and “Entanglement is a non-negotiable requirement” compatible in some way that I’m missing?
Robin Blume-Kohout Says:
Comment #13 November 22nd, 2007 at 7:59 pm
Greg & Scott,
Attacking the weakest point is fine if you just want to prove that the final conclusion is wrong. It doesn’t help much if you want to figure out whether they’re right about something.
Yes, the statements about solving NP-complete problems (and almost certainly the demos of Sudoku) are crap, and that drum needs to be beaten at appropriate intervals. But D-Wave has made several
other intriguing and controversial statements that aren’t obviously wrong… and which could be very useful/interesting even if the whole chain of logic has a flaw somewhere.
Scott Says:
Comment #14 November 22nd, 2007 at 8:35 pm
In all fairness, I think that Farhi understands that principle perfectly well.
Greg: I never meant to suggest he didn’t. I was talking about the “average person,” whose standards of proof physicists somehow seem much more in tune with than computer scientists and
Scott Says:
Comment #15 November 22nd, 2007 at 9:17 pm
Robin: Good questions! A few quick responses:
1. Indeed, as soon as Eddie said the issue was whether or not they’d built a QC, I predicted that the discussion would become about what’s meant by a QC, and that’s essentially what happened.
2a. Well, there’s still a remote possibility that one could go beyond BPP by using a QC that’s in a separable mixed state at every time step. On the other hand, we have no real evidence that there’s
anything interesting one can do in this model. Furthermore, the model is extremely artificial — since the same unitaries that let us leap from one specific separable mixed state to a completely
different one, should also let us easily make entangled states!
2b. I have no idea! This seems like the perfect question to ask Eddie and Ike a few weeks from now.
Scott Says:
Comment #16 November 22nd, 2007 at 9:22 pm
What are your results about MA_EXP?
Ben: It’s my paper with Avi Wigderson on “Algebrization: A New Barrier in Complexity Theory,” which actually involves 39 complexity classes at last count (MA_EXP is one of them). We should have a
preprint on the web “any day now.”
Geordie Says:
Comment #17 November 22nd, 2007 at 9:25 pm
FYI the slides from the MIT presentation are here http://dwave.wordpress.com/2007/11/20/d-wave-talk-at-mit/ .
Robin: these are good questions. An answer to #1 might be that it is still very much an open question as to how “AQC” performs when the sweep time is less than what is required to satisfy
adiabaticity (when the procedure becomes explicitly heuristic), which is the regime in which any real large-scale AQC will be operated. So even if the system supports AQC, in practice it isn’t clear
how to quantify the benefit from operating it in the regime that matters.
For 2b, yes, these are compatible. The argument is in the slides but it boils down to simple textbook quantum mechanics. The density matrix of the system quickly becomes diagonal (T_2) IN THE ENERGY BASIS. To express it in the readout basis you need to rotate this density matrix. The rotation can give non-zero off-diagonal elements, which are signatures of coherence in the readout basis. These are robust and do not “go away”. They are an equilibrium property of the system and are protected by an energy gap. The way Mohammad characterizes this is as follows:
“In the gate model, there is no Hamiltonian constantly present except when you want to apply some gates. So if there is no energy spacing induced by some Hamiltonian, you are always in classical
limit (i.e. large T limit). That is why after the decoherence time your qubits become classical bits. But such an argument does not hold for the case where the wavefunction is an eigenstate, or even
mixture of a few eigenstates of a strong Hamiltonian that can create energy spacings larger than T. This is really the source of confusion for a lot of quantum information theorists. For them the
Hamiltonian is just a generator for unitary operations and does not mean anything more. So after T_2, the qubits become classical and there is no wavefunction to apply a unitary operation to.”
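To make this concrete, here’s a two-level toy calculation (illustrative only; not a model of our actual device): a density matrix that is perfectly diagonal in the energy basis has non-zero off-diagonal elements once you rotate it into the readout basis.

    import numpy as np

    # toy model: energy eigenbasis rotated by angle theta from the readout basis
    theta = np.pi / 6
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    rho_energy = np.diag([0.9, 0.1])      # fully dephased in the ENERGY basis
    rho_readout = R @ rho_energy @ R.T    # the same state, in the readout basis

    print(np.round(rho_readout, 3))       # off-diagonal elements ~ 0.346

The off-diagonal elements in the readout basis are set by the Hamiltonian, not by T_2, which is why they don’t “go away”.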
Finally, the variety of demos we’ve run (including sudoku, image matching, etc.) are not “crap”. They use a novel hybrid approach to integrating QCs into classical solvers. In hindsight it is pretty
obvious that to make any QC useful it needs to be integrated with the best known classical techniques regardless of what quantum algorithm it’s embodying. And while I’ve said this 10^87 times I’ll
say it again: what we’re doing is explicitly heuristic and has no global optimality guarantees. While you can use the system we’re building on decision problems, it is natively an optimization solver
for quadratic unconstrained binary optimization problems (I’ll post my SC07 talks shortly on rose.blog).
Noam Chompwicz Says:
Comment #18 November 22nd, 2007 at 10:05 pm
Shouldn’t the spelling be “Algebraization”?
Scott Says:
Comment #19 November 22nd, 2007 at 10:10 pm
Avi and I had a long debate about what to call it, and we almost reached consensus but not quite: he still spells it “algebraization.” My main complaint is that it’s not clear how to pronounce that:
as “algebra-ization”? “algebrae-ization”? “algebryzation”?
Greg Kuperberg Says:
Comment #20 November 23rd, 2007 at 2:10 am
How about “algebraic relativization”?
Greg Kuperberg Says:
Comment #21 November 23rd, 2007 at 2:23 am
Finally, the variety of demos we’ve run (including sudoku, image matching, etc.) are not “crap”. They use a novel hybrid approach to integrating QCs into classical solvers.
I didn’t use the word “crap”, and I can only speculate as to what the Sudoku solver really is. But I will say that the demo of the Sudoku solver on YouTube looks dishonest. It does not look
consistent with the quantum computer that you described. It will continue to look dishonest unless you (1) name the people who did the Sudoku solver, and (2) explain the actual approach used, whether
or not it is “novel” or “hybrid”. Only then, conceivably, could it look okay. Even if neither you nor I ever said another word about it, it would still look dishonest.
Greg Kuperberg Says:
Comment #22 November 23rd, 2007 at 2:27 am
Yes, the statements about solving NP-complete problems (and almost certainly the demos of Sudoku) are crap, and that drum needs to be beaten at appropriate intervals. But D-Wave has made several
other intriguing and controversial statements that aren’t obviously wrong… and which could be very useful/interesting even if the whole chain of logic has a flaw somewhere.
But looking for diamonds in the garbage is a very different quest from appraising a stone that lies among diamonds.
Anon Says:
Comment #23 November 23rd, 2007 at 2:30 am
Greg: Robin made the Sudoku comment that Geordie is complaining about. (See Robin’s comment above: “the statements about solving NP-complete problems (and almost certainly the demos of Sudoku) are crap.”)
I don’t see why it’s important to name the people who built the Sudoku demo, but I do agree with you on (2.) – until I see how the Sudoku problem was implemented (encoded as an optimization problem that a 16-bit AQC could solve), it doesn’t demonstrate anything about the Dwave machine.
Geordie: You have said now and before that for Sudoku you used the AQC to run a “subroutine” that helps solve the problem, but that doesn’t help us understand anything – what I think everyone here would like to know is what this so-called subroutine is that your AQC is computing.
qv Says:
Comment #24 November 23rd, 2007 at 8:06 am
I think D-Wave should get a Nobel prize if they even prove that their adiabatic quantum computer is 2 times faster than the same classical computer without entanglement and superposition. If the running time is a few microseconds, then I think it must be possible to measure the difference between, say, one and two microseconds for the quantum and classical computers respectively. But 2^16 or 2^14 is about 10000, so if their quantum computer is quantum then it probably must show at least a 100x speedup over the classical computer, so 1 us versus 10 ns would be a reasonable difference. And if this difference doesn’t exist, then how can they hope that with more qubits the QC wouldn’t be more noisy than with 16 or 28 qubits?
milkshake Says:
Comment #25 November 23rd, 2007 at 9:04 am
Science journalists are not representative of the lay public – any normal guy who does a technically-oriented job (fixing cars, for example) or any curious kid understands that one should try to figure out the principles of how things work, not just exploit a phenomenon for practical purposes.
But science journalists never have to do research themselves – they may think that they are very good at explaining it – but how do they know that they are not dumbing stuff down, and producing vague but bombastic gorp?
All PR stuff (not just that produced by D-Wave) plays on the laziness of journalists. Journalists have short deadlines, and PR men make their work “easier” by feeding them pre-digested stuff and ready-made conclusions that journalists can just lift verbatim and glue together with a weak filler of their own making – it is a quick, easy way of delivering an article on time, on a technical subject that one is not very strong in.
And I really hate the condescending tone that some journalists resort to whenever reporting about science – just because these folks are dumb like a paddle it does not follow that everybody else is
Scott Says:
Comment #26 November 23rd, 2007 at 11:05 am
How about “algebraic relativization”?
Too long. Try saying “non-algebraically-relativizing techniques will be needed” (or is it “algebraically non-relativizing techniques”?). Or another phrase of which we need some variant:
“non-commutatively non-algebraically-relativizing techniques.”
Luca Says:
Comment #27 November 23rd, 2007 at 12:19 pm
Maybe you can use a TLA instead
Greg Kuperberg Says:
Comment #28 November 23rd, 2007 at 12:56 pm
Robin made the Sudoku comment that Geordie is complaining about.
Yes, I realize that.
I don’t see why it’s important to name the people who built the Sudoku demo, but I do agree with you on (2.) – until I see how the Sudoku problem was implemented (encoded as an optimization problem that a 16-bit AQC could solve), it doesn’t demonstrate anything about the Dwave machine.
It’s important to name names because it’s a question of honesty, not just technology. Reporters came out of that room thinking that a quantum computer solved a Sudoku. No one has proposed any
meaningful way for a 16-qubit quantum computer to help solve a Sudoku, but somehow the Sudoku was solved. Conclusion: The demonstration looks dishonest.
In response to that, no one should be interested in humdingers from Rose about “novel hybrid approaches”. Until and unless the PIs of the Sudoku solver come forward, and they explain their work, the
cloud of dishonesty hangs over him personally. In fact, unless witnesses come forward to explain otherwise, we don’t know that anyone at D-wave wrote a Sudoku solver at all; the demo could have been
a fabrication.
Stas Says:
Comment #29 November 23rd, 2007 at 1:05 pm
I’d like to add to Greg’s comments that the standard 9×9 Sudoku is extremely easy to solve with 0-1 linear programming packages, such as CPLEX, or with constraint propagation techniques. I can
elaborate on details if anyone is interested. So, all that PR based on Sudoku is extremely fishy…
Robin Blume-Kohout Says:
Comment #30 November 23rd, 2007 at 1:12 pm
Thanks for the answers!
Far as I can tell, 2a is the question worth $64M, because it’s closely related to two really interesting questions: “What gives a QC its power?” and “What does it mean to say something is ‘quantum’?”
A necessary and sufficient condition for a device to exceed BPP — or to achieve BQP — would be awfully nice. Since that’s kind of a pipe dream, I’d settle for a necessary condition, and the best shot
seems to be entanglement.
I’m a bit skeptical though, because I don’t know of a really good reason for entanglement [between various qubits] to be necessary — just a general conviction that entanglement is the most reliable
signature of “nonclassicality”. DQC1 proves that you don’t need a lot of it. My real worry is that entanglement is fundamentally about locality, and emphasizing locality within a quantum computer
seems a little procrustean. The more I think about it, the more I prefer contextuality, but that’s hard to study.
Incidentally, when you say “the model is extremely artificial — since the same unitaries that let us leap from one specific separable mixed state to a completely different one, should also let us
easily make entangled states,” the immediate naive counterargument is “Well, yeah… if you can prepare arbitrary input states.” How robust is your statement w/r.t. computational models like NMR and
DQC1 that have strong preparation restrictions?
sw Says:
Comment #31 November 23rd, 2007 at 1:52 pm
“Ike said: if you’re indeed seeing quantum effects, then by running your computer at higher and higher temperatures, at some point you should see a transition to classical behavior. Have you tried
this simple control experiment?”
If I understand correctly, D-Wave implements its supposedly-quantum computer using superconducting flux qubits. So isn’t it the case that as you raise the temperature of the setup, you’ll pass the transition temperature of the superconductor, the qubits will stop superconducting, and thus will no longer be quantum? Then you can’t tell whether this transition to classical behavior happens simply because the qubits don’t superconduct anymore, or because of some more fundamental quantum -> classical transition.
Scott Says:
Comment #32 November 23rd, 2007 at 2:13 pm
Robin, I disagree with you about 2a being a $64M question — maybe a $64 question.
(Incidentally, I can tell you my conjecture: yes, entanglement is necessary. It’s just that we haven’t managed to prove it yet.)
DQC1 is a different animal — that really does seem to provide more power than BPP, but then again, it also lets you prepare entangled states (and probably even more to the point, has no restrictions
on the allowed operations).
Robin Blume-Kohout Says:
Comment #33 November 23rd, 2007 at 2:29 pm
I apologize for (and retract) the word “crap”, which wasn’t very appropriate for a distinguished and civilized salon like Shtetl-Optimized. I’m generally content to hitch my rhetorical wagon to Greg
on this topic (though he should not be tarred with my poor choice of language).
However, in the interest of clarifying what I meant, I should have used two distinct words: I think suggestions that an AQC device will provide efficient solutions to NP-complete problems are at this
point in time incredible, and that demonstrations of Sudoku solving are insubstantial. That is, I don’t believe the former, and I don’t think that the latter provide convincing evidence for anything.
Moving on (because science is more fun). In the context of Q#1 (“A QC is what a QC does”), I agree with most of what you say — but not necessarily that any real large-scale AQC will be operated in
the non-adiabatic regime. The whole point of demanding a polynomial gap is to make operation within the adiabatic regime at least possible within reasonable time. Almost everybody else who thinks
about AQC is worried, not about the timescale regime, but about the noise regime (more on this in a minute).
Moreover, a device operating outside of the adiabatic regime is not an AQC, at least in this specific context. I think this is what you’re alluding to by putting AQC in quotes, but I also think it’s
sufficiently important to justify making the point explicitly — if a QC is what a QC does, then a device operating outside of the adiabatic regime is only an AQC if it is computationally equivalent
to an AQC.
This is a conjecture that I think you’ve backed away from by focusing on the device’s potential to act like a new and powerful heuristic. This conjecture — that D-Wave’s device could provide a
speedup comparable to a new heuristic for NP-complete problems — is quite credible to me… though I still don’t know of any convincing evidence for it. At the risk of stating the obvious, I’ll point
out that the heuristic conjecture is unlikely to enrage most QC theorists, since AFAIK it doesn’t move complexity class boundaries, nor contradict BBBV.
However, it’s also incompatible with calling the device a “quantum computer” (e.g. here), which (again, at the risk of stating the obvious) is what irritates assorted computer scientists.
Greg Kuperberg Says:
Comment #34 November 23rd, 2007 at 2:47 pm
I’d like to add to Greg’s comments that the standard 9×9 Sudoku is extremely easy to solve with 0-1 linear programming packages, such as CPLEX, or with constraint propagation techniques.
This is an interesting remark about the inherent complexity of Sudoku. It’s low by design, so that it can be fun to solve Sudokus with pencil and paper, and so that Sudokus can be generated by
software with some ad hoc difficulty rating. Moreover, the “constraint propagation” software technique is not very different from the way that most people solve Sudokus without software. You make a
9x9x9 table of which numbers can appear in which cells, and you eliminate contradictory choices until the table stabilizes. Then you find the most economical place to guess, and extend the method to
a branched search. (Also called a depth-first search.)
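Indeed, the whole method fits on a page. Here is a bare-bones Python sketch of it (mine, written just for illustration — certainly not anyone’s production solver):

    def solve(grid):
        # grid: dict {(row, col): digit}, with 0 marking an empty cell
        # cands[(r, c)] = digits still allowed in that cell (the 9x9x9 table)
        cands = {(r, c): ({grid[r, c]} if grid[r, c] else set(range(1, 10)))
                 for r in range(9) for c in range(9)}

        def peers(r, c):
            # cells sharing a row, column, or 3x3 box with (r, c)
            for i in range(9):
                if i != c: yield (r, i)
                if i != r: yield (i, c)
            br, bc = 3 * (r // 3), 3 * (c // 3)
            for i in range(br, br + 3):
                for j in range(bc, bc + 3):
                    if (i, j) != (r, c): yield (i, j)

        def propagate(cd):
            # eliminate settled digits from peers until the table stabilizes
            changed = True
            while changed:
                changed = False
                for (r, c), s in cd.items():
                    if len(s) == 1:
                        d = next(iter(s))
                        for p in peers(r, c):
                            if d in cd[p]:
                                cd[p].discard(d)
                                changed = True
            return all(cd.values())        # False if some cell has no options

        def search(cd):
            if not propagate(cd):
                return None                # contradiction: backtrack
            if all(len(s) == 1 for s in cd.values()):
                return {k: next(iter(s)) for k, s in cd.items()}
            # guess at the most economical (most constrained) undecided cell
            cell = min((k for k in cd if len(cd[k]) > 1), key=lambda k: len(cd[k]))
            for d in sorted(cd[cell]):
                result = search({k: (set(s) if k != cell else {d})
                                 for k, s in cd.items()})
                if result:
                    return result
            return None

        return search(cands)

Constraint propagation alone settles many easy puzzles; the branching kicks in for the harder ones.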
But in one respect, even this remark is overly fancy. Off-the-shelf Sudoku solvers exist. Libraries of solved Sudokus exist. If you want to demonstrate that your quantum computer can solve Sudokus,
but you are willing to cheat, then you can grab these off-the-shelf solutions. You don’t need to consider how they work. You could even just solve a few yourself and type that in.
Again, what is much harder is to find any meaningful way for a 16-qubit device to help solve the Sudoku. Just as it’s much easier to make an omelette yourself than with the help of a 4-year-old.
qv Says:
Comment #35 November 23rd, 2007 at 3:01 pm
At higher temperature it becomes thermal noise and all the entanglement goes to the moon…
But I don’t think this temperature experiment can settle anything if the quantum effects are very weak, because at higher temperature the resistivity etc. also increases, and the result can be misleading rather than a clean test of the theory.
Sudoku puzzles are very easy for conventional computers (I downloaded a solver…), so probably there is some clever classical algorithm at work. And if the D-Wave quantum computer is not just a probabilistic random-bit generator, but some combination of classical annealing and “quantum effects”, then it’s possible that even if the quantum effects aren’t working, the classical annealing is doing its job well.
I don’t yet know of any quantum computer that gives even a 2x speedup, so why is D-Wave so optimistic?
Robin Blume-Kohout Says:
Comment #36 November 23rd, 2007 at 3:07 pm
Re: noise and decoherence, your answer doesn’t actually address my question. I asked about the statement “Total decoherence in the computational basis is irrelevant”, and your point is that
decoherence in the energy basis is irrelevant.
I’ll take the risk of answering my own question here: no, those statements are not compatible, because decoherence in the computational basis necessarily kills (1) entanglement between qubits, or (2)
entanglement with a reference system. Anyway, the question is moot because D-Wave is not claiming that there is strong decoherence in the computational basis; they’re claiming that there is strong
decoherence in the energy basis, which is clearly not incompatible with massive entanglement.
One of the reasons for confusion is that Mohammad’s talk only covered the dynamics of a single qubit — despite repeated pressure (at IQC) to present or discuss some kind of data for multi-qubit
systems. For a single qubit, the computational basis can always be chosen to be (or not to be) the energy basis. As Ray Laflamme pointed out at the talk, there are a lot of people with 1 qubit — and
the physics of 3 (or, god forbid, 512) qubits is generally quite different.
The problem with textbook quantum mechanics is that it’s often oversimplified. In particular, decoherence is (a) not necessarily in a particular basis, and (b) not generally in the energy basis (
“Deconstructing Decoherence”, by Anglin/Paz/Zurek, discusses related things). If your system’s Hamiltonian dominates, then the best pointer basis will usually be close to the energy eigenbasis.
However, you’ll notice there are several caveats in there, most importantly the fact that there may be no pointer basis that is indefinitely stable.
Even if we grant all the necessary assumptions, there’s still one very worrying thing about extending the analysis presented in Mohammad’s talk to large devices. You don’t just need a gap; you need a
low density of levels just above the gap. This is true for random Hamiltonians, but the same arguments (e.g. central limit theorem) don’t work for the measure-zero subset of computationally
interesting ones.
(full disclosure: I only play a quantum information theorist on TV; my background is in decoherence theory).
Turkey Says:
Comment #37 November 23rd, 2007 at 3:08 pm
My question is whether the Sudoku demo was provided to intentionally generate hype about possibilities and capabilities of QCs which are at best unlikely, and thereby mislead the general population in order to gain funding and attention.
Greg Kuperberg Says:
Comment #38 November 23rd, 2007 at 3:10 pm
demonstrations of Sudoku solving are insubstantial
On the contrary, if that Sudoku demo truly is as dishonest as it looks right now, then that would be very substantial, for negative reasons. People who lie to reporters have taken a very serious
step, and should be treated differently afterwards.
Again, I want to say this carefully. I have no specific evidence of how exactly that Sudoku was solved. I saw a virgin Sudoku, called a “SudoQ”, I heard Rose say “the quantum computer spits out the
answer”, and then I saw a solved Sudoku. I can’t think of an honest explanation. Conceivably there is one, but what is it?
Robin Blume-Kohout Says:
Comment #39 November 23rd, 2007 at 3:13 pm
Just for yet another gemstone analogy, it might even be that we’ve got a diamond necklace with a half dozen paste gems scattered throughout. So, we could:
(1) chuck it,
(2) see if there are any diamonds, salvage them, then chuck it,
(3) replace the glass chips with diamonds, to get a nice shiny diamond necklace.
Option 3 is only sensible if there’s a preponderance of gems in the chain, which is open to [extensive] argument.
Robin Blume-Kohout Says:
Comment #40 November 23rd, 2007 at 3:30 pm
$64! Aaaagh! I feel so… cheapened! Although… are those Canadian or US dollars? *grin*
Back to science: yes, there are lots of interesting questions about the boundary between BPP and BQP, but this one is special, and I wouldn’t sell it off for O($10^2). If the entanglement conjecture
is true, then we have a physics characterization of a quantum computer — rather than the inherently CS-based characterization “It’s a QC if it solves problems out of BPP”. The first question most
people ask about putative quantum hardware is “Does it entangle?” even though the conjecture is still unproven! This is powerful evidence that either (a) the conjecture is important or (b) people are
dumb. I tend (uncharacteristically) toward the former, in part because the conjecture has spawned useful advances in classical algorithms, e.g. matrix product states.
We don’t know why quantum computers are more powerful, and if we could find a necessary condition, that would be very enlightening — at least for me. And, yeah, I know it’s not sufficient (see
Gottesman-Knill), which means my precious physics-based characterization of QCs isn’t complete. Baby steps… baby steps…
St. John Rucastle Says:
Comment #41 November 23rd, 2007 at 4:13 pm
If there is no legitimate suffix “ization” for the noun “algebra,” then we are at liberty to decide how to spell it. We can create any spelling that would make the word easy for the average and
below-average person to speak. I believe that this is not the case here. If you do not spell the word as “algebraization,” then you will appear illiterate to older readers. Younger readers, raised on
television, action movies, and video games, will not know the difference.
Greg Kuperberg Says:
Comment #42 November 23rd, 2007 at 4:22 pm
see if there are any diamonds, salvage them, then chuck it,
I can’t say that it’s completely unreasonable. But I would like scientists in general, and us in particular, to devote more attention to questions of integrity. We owe it to science journalists.
There has been an attitude that it just doesn’t matter whether “SudoQ” was a bogus demonstration, it’s “irrelevant”, because only journalists should care. In my view, that is deeply unfair to
outsiders. Even for insiders, it’s a dubious kind of benefit of the doubt.
Coin Says:
Comment #43 November 23rd, 2007 at 4:25 pm
I’ve been following this for awhile and enjoying Scott’s concise deconstructions, but there is one thing that I still don’t understand about all of this.
There is frequent reference made, during discussion of D-Wave, to the quantum computer which D-Wave claims to have built being “adiabatic”. What does this mean? My only intuition of what it means for
a process to be “adiabatic” is in the context of atmospheric air parcels (i.e. packets of hot air) so when I first heard this I assumed it was marketing buzzword gibberish. However it seems to have
real meaning to the people here. What is the difference between a quantum computer and an adiabatic quantum computer?
(Wikipedia still doesn’t actually have a page on “adiabatic quantum computation”, although their quantum computation article links to the nonexistent page. Wikipedia also references some paper which
if you follow the citation rabbit hole, leads to this paper, which seems to be defining adiabatic quantum computation, but I’m not sure I understand it. From that paper, “adiabatic quantum computation” doesn’t sound so much like a general-purpose computer (which is how I understand quantum computers), and sounds more like a physical process for solving a custom problem, like the soap-bubbles-to-solve-Steiner-trees thing Scott wrote about once. But maybe I’m missing the point…)
Scott Says:
Comment #44 November 23rd, 2007 at 4:43 pm
If you do not spell the word as “algebraization,” then you will appear illiterate to older readers.
I’m willing to take that risk.
Scott Says:
Comment #45 November 23rd, 2007 at 4:45 pm
Coin: A quantum computer is “adiabatic” if at every time t, it’s in the ground state (or close to the ground state) of whatever Hamiltonian H(t) is currently being applied.
(I’m sure the physicists have other ways of putting it, but that’s the definition I know…)
Greg Kuperberg Says:
Comment #46 November 23rd, 2007 at 4:49 pm
What is the difference between a quantum computer and an adiabatic quantum computer?
The more fundamental notion, in my view, is the adiabatic algorithm (or algorithms) in quantum computing in general. Adiabatic algorithms are a quantum variant of simulated annealing. When you solve
an optimization problem with simulated annealing, you interpret the optimization objective as an energy, then impose an artificial temperature, which you then decrease slowly enough that the
simulation converges to the least-energy state with good probability. You can have simulated annealing algorithms either classically or quantumly.
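For concreteness, classical simulated annealing in its plainest form is only a few lines; here is a generic Python sketch, with a made-up toy objective:

    import math, random

    def anneal(energy, neighbor, x, t=5.0, cooling=0.999, steps=20000):
        # Metropolis rule: always accept downhill moves; accept uphill moves
        # with probability exp(-dE/T), while the temperature T slowly decreases
        for _ in range(steps):
            y = neighbor(x)
            dE = energy(y) - energy(x)
            if dE <= 0 or random.random() < math.exp(-dE / t):
                x = y
            t *= cooling
        return x

    # toy objective with several local minima; global minima near x = +/- 2
    energy = lambda x: (x * x - 4) ** 2 + math.sin(7 * x)
    neighbor = lambda x: x + random.uniform(-0.1, 0.1)
    print(anneal(energy, neighbor, x=10.0))   # typically lands near +/- 2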
So anyway, what would an adiabatic variant look like? An adiabatic process is one in which the energy functional changes, but the energy does not. In particular, you can have a zero-temperature
adiabatic process that maintains a system in its ground state. This is not meaningful for a discrete classical system, because there is no way for the ground state to evolve continuously. But you can
do it quantumly, because even in a finite system, the set of pure states is a connected manifold.
So that is an adiabatic quantum algorithm (at zero temperature). The system begins in the ground state of an energy or Hamiltonian whose ground state is known. Then you evolve the Hamiltonian while frequently measuring the energy. (You have to choose a path of Hamiltonians whose measurements are computable.) The act of measuring the energy, if you do it frequently enough, maintains the ground state.
An adiabatic quantum computer is an SPD (special-purpose device) devoted to an adiabatic quantum algorithm. But since there are adiabatic quantum algorithms that are universal for QC, with polynomial overhead, you can call
it a computer rather than an SPD.
An issue with D-wave — setting aside questions of integrity in their interactions with outsiders — is that when they call their device an adiabatic quantum computer, all three words are rather
metaphorical. It’s not clear that it’s really quantum, or a computer at all, and it looks more like annealing than adiabatic evolution.
Robin Blume-Kohout Says:
Comment #47 November 23rd, 2007 at 4:54 pm
In case you were wondering, I completely agree (re: post #42). That’s why I lauded the efforts that you, Scott, and Umesh have made to keep the discussion honest. We also have a duty (IMHO) to perform the
best science and technology possible — which means salvaging diamonds in addition to pointing out cubic zirconia.
Greg Kuperberg Says:
Comment #48 November 23rd, 2007 at 4:56 pm
but the energy does not
I mean, rather, the entropy.
Robin Blume-Kohout Says:
Comment #49 November 23rd, 2007 at 5:07 pm
Great explanation of AQC. But… there are a couple of points where I’m not sure if I agree. Can you help me out?
1. Is an adiabatic algorithm really “a quantum variant of simulated annealing”? SA uses temperature, and therefore stochasticity, at a fundamental level (thermal fluctuations are essential to get you
out of local minima), whereas ideal AQC is a deterministic evolution at zero temperature (until the final readout).
2. Re: “the energy functional changes, but the energy does not,” the energy itself can change, as long as you stay in the ground state, no?
3. [potentially ignorant question] Do you really have to measure the energy as you go along? I thought the adiabatic theorem kept you in the ground state with high probability anyway. And… if you do
induce a Zeno effect by measuring energy at rapid intervals, doesn’t that actually let you run faster than the adiabatic threshold?
Pardon my ignorance, please.
Robin Blume-Kohout Says:
Comment #50 November 23rd, 2007 at 5:08 pm
Oops. Re: my 2nd question, I should have checked for updates before I posted. Please ignore.
Greg Kuperberg Says:
Comment #51 November 23rd, 2007 at 5:25 pm
Is an adiabatic algorithm really “a quantum variant of simulated annealing”?
Yes, I think so. There is first of all a classical variant of simulated annealing in which the energy functional also changes continuously. This is actually important in practice — sometimes you want
to smooth the optimization objective in order to grease the annealing, until the end when you roughen the objective to exactitude. AQC is exactly the zero-temperature version of this generalization.
SA uses temperature, and therefore stochasticity, at a fundamental level (thermal fluctuations are essential to get you out of local minima), whereas ideal AQC is a deterministic evolution at zero
temperature (until the final readout).
It certainly doesn’t use deterministic evolution, it uses quantum evolution. Quantum states in general are the non-commutative generalization of classical probability distributions. See my notes on
this theme (posted on my home page). It is true that the state in AQC is non-stochastic in the sense that it has no entropy, but in every other respect it is just like an adiabatic transition in
classical annealing.
2. Re: “the energy functional changes, but the energy does not,” the energy itself can change, as long as you stay in the ground state, no?
The entropy, rather. To be even more precise, an adiabatic process is one that does not exchange entropy with the environment.
3. [potentially ignorant question] Do you really have to measure the energy as you go along?
Yes, of course. If you evolve the Hamiltonian for which the system is supposed to stay in the ground state, you have to let the system know somehow. In the strict sense of algorithms, “you” (the
classical controller) have to measure the Hamiltonian. In an adiabatic SPD, the evolving Hamiltonian could instead be measured “automatically” — i.e., by physics — but that’s not really different.
Scott Says:
Comment #52 November 23rd, 2007 at 5:25 pm
OK, for those who are interested: the algebrization paper is now available from my papers page. Enjoy!
Greg Kuperberg Says:
Comment #53 November 23rd, 2007 at 5:32 pm
Saving the best for last:
And… if you do induce a Zeno effect by measuring energy at rapid intervals, doesn’t that actually let you run faster than the adiabatic threshold?
Yes. The important point is that measuring the Hamiltonian is the part that carries a computational cost. It is the adiabatic analogue of thermal mixing steps in simulated annealing. The algorithmic
work in either case is proportional to the ratio: rate of measurement or mixing / rate of change of H or T. In the case of AQC, the minimum of this ratio is inversely proportional to the energy gap
(between the ground state and the next state). So the bad news is that the generic AQC scheme has an exponentially small energy gap, just as the generic simulated annealing scheme has an
exponentially long mixing time.
Geordie Says:
Comment #54 November 23rd, 2007 at 5:58 pm
Greg’s description of AQC is fairly accurate. Bill and I wrote a short introduction to AQC last summer which is here http://dwave.files.wordpress.com/2007/08/20070810_d-wave_quantum_annealing.pdf ,
which ties together the variety of ways in which free energy minimization can be used to drive computation.
Greg Kuperberg Says:
Comment #55 November 23rd, 2007 at 6:43 pm
Actually, Scott, I have a question about your paper with Avi. In comparing two complexity classes X and Y, you go out of your way to give X and Y different oracles. What if instead you defined an
oracle to be algebraic if its input is a pair (p,x), where p is a prime and x is a string over Z/p, and if its output is given by a low-degree polynomial that depends only on the length of x? Could
you then say that you consider separations and inclusions relative to the restricted class of algebraic oracles?
Or, to the contrary, do you really need to give the two classes two different oracles?
Scott Says:
Comment #56 November 23rd, 2007 at 7:07 pm
Greg, I’ll just paste the explanation from pages 9-10 of our paper:
When we examine the above definition [of algebrization], two questions arise. First, why can one complexity class access the extension ~A, while the other class can only access the Boolean part
A? And second, why is it the “right-hand class” that can access ~A for inclusions, but the “left-hand class” that can access ~A for separations?
One answer is that we want to define things in such a way that every relativizing result is also algebrizing. This clearly holds for [our definition]: for example, if C^A is contained in D^A,
then C^A is also contained in D^~A, since D^A⊆D^~A. On the other hand, it is not at all clear that C^A⊆D^A implies C^~A⊆D^~A.
A second answer is that, under a more stringent notion of “algebrizing result,” we would not know how to prove that existing interactive proof results algebrize. So for example, while we will
prove that PSPACE^A⊆IP^~A for all oracles A and extensions ~A of A, we do not know how to prove that PSPACE^~A=IP^~A for all ~A.
A third answer is that, for our separation results, this issue seems to make no difference. For example, in Section 5 we will construct oracles A,B and extensions ~A,~B, such that not only P^~A=
NP^~A and P^~B≠NP^~B, but also NP^~A⊆P^A and NP^B⊄P^~B. This implies that, even under our “broader” notion of an algebrizing result, any resolution of the P versus NP problem will require
non-algebrizing techniques.
We list as one of our open problems to prove (for example) that IP^~A = PSPACE^~A for all algebraic oracles ~A.
On the other hand, we also cite a survey article by Fortnow, where he does show that IP^~A = PSPACE^~A for all “algebraic oracles” ~A, but only under a much more subtle definition of “algebraic
oracle” — where you first take a low-degree extension, then reinterpret the low-degree extension as a Boolean function, then take a low-degree extension of that Boolean function, etc. For our
purposes (e.g., proving algebraic oracle separations), Fortnow’s definition seems much too difficult to work with.
Mohammad Amin Says:
Comment #57 November 23rd, 2007 at 7:45 pm
Dear Robin,
I usually try to avoid discussions on blogs. But I am happy to see that you understand the physics behind my talks and raise relevant questions.
The important points that I tried to address in my presentations, and I think you are among those who got it right, is three-fold:
1. The classical limit in physics is the large-temperature limit, not the long-time limit. There are many systems that stay in fully coherent superposition states forever as long as T is small. A good example is a superconducting condensate. The coherence of the condensate is not destroyed at long times nor at large sizes.
2. The decoherence (T_2) time scale in qubits is just the time scale that makes the density matrix diagonal in the energy basis. It is not the time scale that makes the qubits classical. A density
matrix that is diagonal in the energy basis can be non-diagonal in the computational basis, and therefore can have superposition, entanglement, etc. Of course, in the absence of a strong Hamiltonian, since the energy spacings of the system are necessarily smaller than T, the system is in the classical limit. In such a case, as soon as the decoherence time has passed the qubits become classical. This is the case for
gate model quantum computation because there is no Hamiltonian to stabilize the state of the system.
3. In adiabatic quantum computation (AQC), on the other hand, such a Hamiltonian exists. Therefore in AQC it is irrelevant to ask whether the computation time is shorter or longer than T_2. The relevant questions, as you also mentioned, are whether there is a strong Hamiltonian (stronger than noise) and how small the temperature is.
In my presentations I tried to answer these questions. The single-qubit data I showed was not to experimentally justify that our multi-qubit system is, e.g., in an entangled state (which is actually a
very hard experimental question). Instead, I showed that we have a method to extract every term in the Hamiltonian experimentally as well as the noise strength that corresponds to such a term. This
way, you can easily see that the first condition, i.e., having a well defined Hamiltonian, is satisfied. Second I showed that we have a very clear way to determine the temperature of our sample and
it is 20 times smaller than the energy gap between the ground state and excited state of our system.
This line of thought was phrased by Seth Lloyd in Eddie Farhi’s group meeting as “convincing evidence”, which I think was the highlight of the meeting, and I am surprised that Scott did not mention anything about it. When we were asked for further evidence, we asked the audience for suggestions for experiments. Isaac Chuang suggested raising the temperature, but the conclusion was that, since the relaxation time of our system is very short, one would end up measuring the thermal distribution of the final Hamiltonian, which doesn’t give you any information about the evolution. Once again Seth
stated that there could be different levels of evidence, and the evidence that we have could be the lowest level but still an evidence.
Once again, I really prefer to have discussions via email or in person rather than on a blog. If you have any further questions please feel free to send me an email. I will be happy to answer your questions.
Scott – You told me at MIT that you don’t want the public to be misinformed about quantum computation. I think you are misinforming them about what went on in Eddie Farhi’s group meeting. I understand that you are not a physicist, but there were physicists in the audience, and the one among them who understands adiabatic quantum computation better than anyone else in the world is Seth Lloyd. I hereby request that you acknowledge to your audience hearing, at least two times, Seth Lloyd phrasing our evidence as “convincing evidence”.
Greg Kuperberg Says:
Comment #58 November 23rd, 2007 at 8:00 pm
Mohammad Amin: it is good to see your name in this discussion. According to Scott, you said that you were not involved in the Sudoku demonstration. Now, D-Wave is not all that big of a company, maybe
a few dozen employees. Do you know who did the Sudoku demo?
If you would prefer to reply by e-mail, that is fine; my e-mail address is listed on my web page.
Scott Says:
Comment #59 November 23rd, 2007 at 8:02 pm
Yes, Seth Lloyd was at the meeting, and he did indeed talk about different levels of evidence, with the results you presented the lowest level but “still an evidence.” I apologize for omitting that
from my narrative of the meeting; I didn’t know that you considered it the highlight.
Mohammad Amin Says:
Comment #60 November 23rd, 2007 at 8:08 pm
Scott – First of all, Seth Lloyd did not say “still an evidence” but said “convincing evidence”. Second, I don’t understand how this conclusion of Seth Lloyd’s, in a session that was all about the evidence that we presented, could not be the highlight of the meeting.
Robin Blume-Kohout Says:
Comment #61 November 23rd, 2007 at 8:24 pm
Thanks for the clear explanation. However… I’m still skeptical on one point. You say that measurement is necessary because “If you evolve the Hamiltonian for which the system is supposed to stay in
the ground state, you have to let the system know somehow.”
But — the Hamiltonian is exactly the thing that the system “knows”! If you change the Hamiltonian from H to H′, the system is darn well going to know about it; it’s going to start evolving as exp(−iH′t) instead of exp(−iHt).
Now, as I recall the adiabatic theorem, its import is that you stay in an eigenstate of H provided that the transit rate is slower than the inverse gap. Nothing in there about nonunitary dynamics
like measurement. Furthermore, if I let the system evolve via exp(−iHt) for a random and unrecorded time T >> 1/gap, then the result is a dephasing channel in the energy basis — which is precisely the
effect of performing a measurement in the energy basis.
So, I’m having a lot of trouble buying the argument that you have to measure, simply because (1) AFAIK, proofs of the adiabatic theorem don’t require nonunitarity, and (2) I can simulate a
measurement in the energy basis just by closing my eyes and waiting.
Are we possibly having a physics/CS miscommunication here, wherein you’re taking “Hamiltonian” to mean an abstract Hermitian operator rather than the actual operator that is instantaneously
generating the evolution of your quantum computer? I’m aware that you can substitute repeated measurement of an operator H for unitary evolution according to H… but I can’t see where the measurement
is necessary.
Robin Blume-Kohout Says:
Comment #62 November 23rd, 2007 at 8:35 pm
Thanks for your comments! I’ll look forward to the opportunity to chat with you at leisure sometime when our paths cross. For now, I’ll just make two very quick comments and leave it at that.
* I’m a little worried about how “quantum” and “classical” are getting tossed around (in many many places, not just your post!). For instance, there are [many] systems that are definitely
nonclassical, yet do not support various quantum information tasks.
* T1 and T2 are well-defined for single qubits. For multiple qubits, however, it’s not so simple: it’s definitely not safe to say “The whole system will decohere in its energy basis after the
single-qubit T2″. So, there’s another question (in addition to “How big is H?” and “How big is T?”), which is “Does the decoherence model scale?”
Thanks again for the talk at IQC!
Greg Kuperberg Says:
Comment #63 November 23rd, 2007 at 9:36 pm
Are we possibly having a physics/CS miscommunication here
I would say so, although I meant to address both sides of it. First, you can use adiabatic QC as an algorithm with some qubits that don’t have a Hamiltonian. In this case you have to invent one and apply it to the qubits.
Now, as you say, if you have a physical Hamiltonian H, letting it evolve by exp(-iHt) for an uncertain period of time has the same effect as measuring H. I completely agree with this mathematical
point, but I interpret it differently: This evolution for a random period of time is the measurement of H. The environment is learning the value of H! If the environment measures H, then of course
you don’t have to.
In my notes I define a quantum operation M called a hidden measurement. I.e., if you have some quantum state and I measure some operator H without telling you the value, what is the posterior state
for you? As you say, the hidden measurement of H is exactly the time evolution exp(-iHt) rho exp(iHt) averaged over a long period of time. But there is also a somewhat subtle converse, which you can
infer from Stinespring dilation or mutual information or by other means: regardless of how you arrive at the hidden measurement operation M, the environment does know the measured value.
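This is easy to check numerically, by the way (a throwaway sketch; any nondegenerate H does the job):

    import numpy as np

    rng = np.random.default_rng(0)
    E = np.array([0.0, 1.0, 2.5])            # nondegenerate energy levels of H
    v = np.ones(3) / np.sqrt(3)
    rho = np.outer(v, v.conj())              # a state with off-diagonal terms

    def evolve(rho, t):
        U = np.diag(np.exp(-1j * E * t))     # exp(-iHt) in the energy basis
        return U @ rho @ U.conj().T

    # average over many random, unrecorded evolution times
    avg = sum(evolve(rho, t) for t in rng.uniform(0, 1000, 20000)) / 20000
    print(np.round(np.abs(avg), 2))
    # the populations (diagonal) survive; the off-diagonal terms average
    # to ~0: exactly the hidden measurement of H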
Robin Blume-Kohout Says:
Comment #64 November 23rd, 2007 at 9:58 pm
Okay, I think we’re in agreement on all the major points. However, in the cause of scientific discourse, I’m going to press on one rather interesting interpretational detail.
You say “If the environment measures H, then of course you don’t have to.” This implies that somebody has to. What’s interesting in this case, though, is that the system remains in the ground state
at all times — which means that everybody (well, you and the environment) knows exactly what that measurement will reveal before it’s made… and the environment never develops any mutual information
with the system (as implied by the fact that the whole process is isentropic).
What this means is that you can derandomize the process. If you let the system evolve for a certain period of time (T) in between the N adjustments of H (by epsilon = 1/N), then as long as T is
chosen so that the O(epsilon) errors in amplitude for the individual adjustments don’t add up coherently (i.e., don’t choose T to be too short, or really close to 2*Pi), the final probability of
ending up out of the ground state is still O(N*epsilon^2).
This is the content of the adiabatic theorem, IIRC. Basically, the exp(-iHt) evolution takes you in very small spirals around a big slow trajectory for H(t) — which works even if you have perfect
knowledge of the clock (which is what was acting as an environment in the hidden-measurement model).
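(Back-of-envelope bookkeeping for the claim above, with everything O(·) and nothing rigorous: each adjustment leaks amplitude O(epsilon) = O(1/N) out of the ground state, and if the waits are chosen so those amplitudes add incoherently, then the leakage probabilities, rather than the amplitudes, add up:

\[
P_{\mathrm{leak}} \;\lesssim\; N \epsilon^{2} \;=\; N \left(\tfrac{1}{N}\right)^{2} \;=\; \tfrac{1}{N} \;\longrightarrow\; 0 \quad \text{as } N \to \infty.
\]

If the amplitudes instead added coherently, the worst case would be (N epsilon)^2 = O(1), which is why the choice of T matters.)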
St. John Rucastle Says:
Comment #65 November 23rd, 2007 at 11:29 pm
Av Avi knows best.
Greg Kuperberg Says:
Comment #66 November 23rd, 2007 at 11:57 pm
I agree, Robin, that is an interesting point of interpretation. In general, if a quantum operation is a convex sum of unitaries, and if its application does not create mutual information for some
adiabatic reason, then there should be a way to derandomize it, because you don’t learn anything from looking at which unitary term was employed. Neat. If I were looking for gems in the crap, then
your remark certainly counts, in the context of an otherwise dreary conversation about D-Wave. ((1) Admittedly Scott’s new paper is another gem, rather more substantial than a good side remark. (2)
Dreary though it may be, the D-Wave discussion could still be interesting for negative reasons.)
which means that everybody (well, you and the environment)
In these discussions, there are the qubits, there is you, and then the environment is everybody else.
Sam Says:
Comment #67 November 24th, 2007 at 3:45 am
Greg, I think your interpretation of Hamiltonians as measurement is a bit odd, and I don’t understand how you are finding measurements in a closed (unitary) system.
Scott, I can’t believe you used “algebrize”, aack! What about A-relativize for short?
Scott Says:
Comment #68 November 24th, 2007 at 9:28 am
Sam: Yeah, I thought of that. But
(1) I think that computer scientists have been much too eager to coin acronyms, and that this has damaged the public perception of our field relative to physics (which has cool names like “quark”,
“supersymmetry”, “black hole”…),
(2) we already use “irrelativize” for another purpose, and
(3) we didn’t have the luxury of proposing a word in isolation, and phrases like “non-A-relativizing techniques” grated on my ears.
Greg Kuperberg Says:
Comment #69 November 24th, 2007 at 12:12 pm
Greg, I think your interpretation of Hamiltonians as measurement is a bit odd, and I don’t understand how you are finding measurements in a closed (unitary) system.
To answer the second question first, the whole universe is a closed unitary system as far as we know. Certainly the laws of chemistry can be placed in a closed unitary system, and we human beings are
chemical entities. So it isn’t my proposal to find measurements in a closed unitary system, it’s implicit in the postulates of quantum mechanics. It just isn’t explained very well. My notes are meant
to address that concern, and they begin to do that, but sadly they are far from finished.
In a simplified model, if you make a unitary operator on a joint system A tensor B that entangles their state, then you can typically say that A measured B. It really comes to the same thing, because
if A is then subject to decoherence and rendered classical, she can still remember a measured value.
Now, there is a more subtle version of this in which elapsed time is a quantity that can be measured, or an uncertain variable and therefore a source of decoherence. I agree that it’s odd, but it is
indisputably part of the story. If you include this source of decoherence, then the world does indeed measure the Hamiltonians of its objects eventually.
Gil Says:
Comment #70 November 24th, 2007 at 12:18 pm
“What I’m finding, more and more, is that the arguments you and I find persuasive are simply not persuasive to most people (even though they should be).”
Try to improve the arguments! My hunches regarding D-Wave may very well be similar to yours, Scott, but the initial arguments against D-Wave were not persuasive. Worse, some of them were generic enough to apply to any ordinarily behaved company trying to build quantum computing technology, or to other similarly daring endeavors.
“It would be as if a company claimed it had an algorithm that could compress a random string, you and I pointed out that 2**n > 2**(n-1), and everyone dismissed that as an irrelevant piece of …”
If “random strings” refer to uniformly distributed 0-1 strings of length n then you are right. But “random strings” can have other meanings. It can be a random string drawn from the internet.
“All they want to know is: how well does the compression algorithm work on my data? ”
This is quite reasonable.
“And you try to explain to them that if a company could be so egregiously, howlingly wrong about such a fundamental point, then there’s little reason to expect their algorithm will do anything useful
for anyone.”
Maybe D-Wave was egregiously, howlingly wrong on some crucial theoretical points. It can be useful to point them out. But the claim/hope that QC will be able to solve NP-complete problems, which found its way into the media, is not as terrible as 2**(d-1) >= 2**d and is not that relevant to D-Wave’s activities. Moreover, they quickly withdrew from it.
“They don’t understand that you can refute a complicated theory by attacking at its weakest point; they think you have to understand every detail first.”
Well, the reality is that you cannot refute a complicated theory (especially a developing theory) by attacking at its weakest point. But sure enough, attacking at its weakest point is a good strategy.
ScentOfViolets Says:
Comment #71 November 24th, 2007 at 1:12 pm
This is addressed to John Sidles. John, I’m studying algebraic geometry, so I find your speculation that there might be a purely geometric formulation for P!=NP intriguing. That being said, why the
recommendation for Harris’ “Algebraic Geometry”? I happen to have a copy, and I don’t see anything about it that is particularly noteworthy. A good, solid text, of course. Am I missing something? Are
there any sections worthy of special attention in this context? For the beginner, I’d recommend something like Fulton’s “Algebraic Curves” for a general concise overview of the relationship between
algebraic entities like radicals and geometric properties such as genus.
Robin Blume-Kohout Says:
Comment #72 November 24th, 2007 at 1:40 pm
Perhaps I can put a sharper edge on Gil’s argument by pointing out a rather obvious point: words mean different things to different people. D-Wave is in an unfortunate position between Scylla (the
scientific community) and Charybdis (business & venture capital). The Tech Review Q&A with Jason Pontin (see link in post #33) makes this very clear. Terms like “approximate solution” and “solving
NP-complete problems” are being used “as businesspeople would use it, and not as computer scientists use it” (to quote Pontin). Some would say that the same is true for D-Wave’s use of
“fault-tolerant” and “quantum computer”.
The easy way to deal with this is to simply consign them to the deepest pits of scientific wrongness. After all, if the device does end up providing really good, fast, approximate solutions to TSP,
we’re entirely justified in saying “That’s irrelevant; it’s not a QC and it doesn’t solve NP-complete problems.”
Or, we could try to clarify these nomenclatural discrepancies in a way that’s as educational as possible for non-specialists. I think both the Tech Review Q&A and several of Scott’s posts do a decent
job of this. But we could do still better. I think the world is still lacking good, readable answers to “If a computing device isn’t classical, is it a quantum computer?” and “What sorts of useful
problems could this thing help solve, without violating any theorems or widely-believed conjectures?”
In other words, instead of just explaining what these terms (e.g. “random string” or “approximate solution”) mean (rigorously) to a computer scientist, we could also analyze their relationship with
the definition a man-on-the-street would assign.
And… Geordie, I know D-Wave has posted high-level answers to these questions. But I can’t afford to take them as authoritative, only because you have an explicit financial interest in the answers.
John Sidles Says:
Comment #73 November 24th, 2007 at 3:17 pm
ScentOfViolets is a *great* on-line name!
I am definitely not an expert on algebraic geometry … I use Harris’s textbook precisely because it is a “good, solid text.”
For a quantum system engineer, it is quite a bit of fun to read Harris, and recognize old friends under unfamiliar names … our group is preparing a review article that does this pretty thoroughly.
For quantum simulation to be computationally feasible, we need to describe quantum trajectories in some compressed format … the typical approach in engineering is to project the trajectory onto some
lower-dimensional manifold.
So it is natural to ask, what is the geometry of these manifolds? For reasons of algorithmic efficiency, the manifolds used in practice invariably are algebraic varieties … hey, that’s why it’s a good idea to read Joe Harris’ book!
Following up these ideas leads to a review article of (seemingly) infinite length. And there is one loose end left over … a cryptographic loose end … which (like every other carbon-based life form)
we wrote up as a 2008 STOC submission (plus Victoria BC is such a beautiful place to visit).
Since “geometric cryptography” is well outside our QSE Group’s main line of research, I don’t mind posting the abstract here, as an example of how naturally the subjects of algebraic geometry and
quantum information theory can be merged. The basic ideas are reasonably accessible to undergraduates.
This article describes algorithms for cryptographic key exchange that can be implemented by numerical quantum simulation. We consider a quantum system that is described by a total Hamiltonian of
which only Alice knows half, and only Bob knows the other half. We further stipulate that the system is subject to Markovian noise processes, of which (again) only Alice knows half, and only Bob
knows half. And finally, we stipulate that Alice’s Hamiltonian and noise processes have a sparse representation in a basis that is known only to Alice, and Bob’s Hamiltonian and noise processes have
a sparse representation in a basis that is known only to Bob. Starting with a random quantum state, Bob and Alice collaborate to evolve a quantum trajectory, in alternating infinitesimal steps, that
is governed by the net Hamiltonian and noise model, exactly as though they were collaborating in simulating the trajectory for purely engineering purposes. This quantum trajectory is assumed to be
public knowledge. To extract a shared secret key from the public trajectory, Alice projects the trajectory onto a Kähler submanifold that is an algebraic variety of product-sum form in her basis set; Bob similarly projects the trajectory onto a Kähler submanifold that is an algebraic variety of product-sum form in his basis set. Alice therefore knows the difference vectors between the public trajectory and her private Kähler projection; these difference vectors are her key. Bob similarly knows the difference vectors between the public trajectory and his Kähler projection, and these
difference vectors are his key. Provided that both Alice and Bob choose to model their noise processes as covert measurement processes, it can be shown that these two difference vector trajectories
are anti-correlated, and these anti-correlated trajectories constitute Alice and Bob’s shared information. This method is of interest because it establishes the geometric equivalence of a set of
problems in physics, engineering, and information theory.
There’s not a lot more to tell … the body of the article is mainly references, and will pose no mysteries to anyone who has read Nielsen and Chuang on the one hand, and (say) Joe Howard’s book on the
other hand.
Just to remark, the resulting scheme bears a family resemblance to Ajtai-Dwork lattice cryptosystems.
More generally, from a geometric point of view, can’t pretty much *any* algorithm be regarded as a projection, from a large-dimension space of random strings, to a low-dimension space of proofs?
So perhaps, ScentOfViolets, with P!=NP having been naturalized, relativized, and now algebrized (which would be the best Bob Dylan song ever), it will fall to your generation of students to geometrize it. Good luck!
Joe Says:
Comment #74 November 24th, 2007 at 3:45 pm
Mohammad: While I have great respect for Seth, the fact that he found an argument compelling isn’t in and of itself a reason why we should. Science has no authorities, and all that.
As a result, I can’t really see the point in arguing over the exact phrase.
ScentOfViolets Says:
Comment #75 November 24th, 2007 at 4:16 pm
Many thanks for your kind reply and your pointers. I take my screen name from the nineteenth century diffusion problem, which I am told by my old Thermo professor was how it was colloquially known at
the time.
Scott Says:
Comment #76 November 24th, 2007 at 4:55 pm
Joe: Thank you! Seth was one of ten or more people who expressed their opinions at the meeting, and in my zombified state I failed to take notes of everything that was said.
Incidentally, “still an evidence” was actually a quote from Mohammad’s own comment. I don’t remember Seth’s exact words, but am perfectly willing to believe he used the phrase “convincing evidence.”
Stas Says:
Comment #77 November 24th, 2007 at 7:33 pm
Robin -
Terms like “approximate solution” and “solving NP-complete problems” are being used “as businesspeople would use it, and not as computer scientists use it” (to quote Pontin). Some would say that the
same is true for D-Wave’s use of “fault-tolerant” and “quantum computer”.
I doubt business people would talk in such terms at all; they would rather talk in terms of specific applications, like protein folding, for example. But the question is not anymore about the words used, it’s about probable blatant dishonesty in their quantum computing “demonstrations”, such as Sudoku. And it has recently come to my mind: why would any high-tech start-up run such intensive and costly PR years before the technology is to be ready for customers? They could have been quietly filing patents and doing demos for their investors, to announce their breakthrough when everything is already developed and tested. But no, they are getting loud now, when any real question about performance can be answered by mentioning that it’s “only 16 qubits yet”, so wait for the miracle a little bit more… And I have one possible explanation for that: somebody wants to get acquired (or even publicly traded) long before the technology can be really tested… Somebody wants to wash their hands…
sanktnelson Says:
Comment #78 November 24th, 2007 at 8:01 pm
Why does everybody keep going on about the sudoku? Just because the guys at the talk (who are physicists, not computer scientists) couldn’t write down the formula that was used to map the thing onto their hardware off the top of their heads?
To me this seems the most boring and simplest of their demos. Their system solves optimization problems subject to constraints. Sudoku: optimization = find the right answer; constraints = the existing numbers on the grid combined with basic algebra. Except that in this case an approximate answer isn’t good enough. But with a sufficiently small sudoku, it might even fit into their 16 qubits without being broken into smaller pieces by classical hardware first.
All those demos were just supposed to show how easy it is to port applications to their hardware, not how much faster they become (which they didn’t anyway, and in the case of sudoku likely never
will, because it’s very efficiently solvable on classical computers).
Job (UG) Says:
Comment #79 November 24th, 2007 at 8:23 pm
sanktnelson, I think because a variable grid-sized Sudoku is known to be NP-complete, and NP-complete problems aren’t known (and are thought unlikely) to be efficiently solvable by quantum computers (BQP vs. NP is an open question).
If quantum computers were able to solve NP-complete problems efficiently they would be worth that much more, so the demo at present is misleading (intentionally or not) about the capabilities of quantum computers, even though there’s nothing amazing about solving a few instances of a 9×9 Sudoku (my kitchen sink can do that).
That’s my interpretation of all this.
sanktnelson Says:
Comment #80 November 24th, 2007 at 9:08 pm
hmm, for some reason I was under the impression that Sudoku is not NP-complete, but was just thrown in because it’s a mathematical problem that people actually can relate to. Well, it seems I was wrong.
That doesn’t change my original point, though: none of their demos were meant to show efficient solving of NP-complete problems, or efficient solving of anything, for that matter. They just show how easily different problems (with some underlying mathematical similarity) can be adapted to use their hardware. The latest demo at SC07 is completely in line with that. And even Geordie Rose has repeatedly stated that the final test of their technology will be when they are actually faster at solving anything. Which is not at 16 and not at 28 qubits. Maybe at 512, but who knows?
Greg Kuperberg Says:
Comment #81 November 24th, 2007 at 9:36 pm
Why does everybody keep going on about the sudoku? Just because the guys at the talk (who are physicists, not computer scientists) couldn’t write down the formula that was used to map the thing onto their hardware off the top of their heads?
No, it’s because no one has proposed any meaningful way to map a full-sized Sudoku onto their 16-bit hardware. 16 bits is simply too few to provide any real help. And it’s because the company CTO,
who did the demo, refuses to explain it. It’s as if he pulled a barge pole out of a top hat and insisted — without explaining how — that it’s new technology rather than old-fashioned cheating.
The only relevance of Amin and Berkley is that they said that they know nothing about it. Someone at D-wave must know about it, but apparently not these two guys.
But with a sufficiently small sudoku, it might even fit into their 16 qubits without being broken into smaller pieces by classical hardware first.
It was a full-sized Sudoku, and no one has even proposed a way to break it into smaller pieces so that 16 bits provide any real help. These are not even 16 fully programmable bits; they are in a
fixed Hamiltonian that can only be used with a penalty factor. It is difficult to use them for more than 6 bits. Again, Rose says in the video, “the quantum computer spits out the answer,” but no one
has proposed an honest meaning of that statement.
Job (UG) Says:
Comment #82 November 24th, 2007 at 9:37 pm
Scott, you passed up a good chance to introduce “dramatization” and “dramatize” into TCS lingo.
“It can be shown that a solution to NP ⊄ P cannot dramatize, since NP^Ã ⊆ P^A, where A is Merlin and Ã is his drama-queen of a daughter Mary, obtained by taking his low-degree extension and… The prover (Merlin) then wants to convince Arthur that f(x) = 0 for any x in {0, 1}^n but, and here’s the catch, he’s allowed to make any use of Mary he wants, though at the risk of having her talk his ear off at no one’s benefit…”
Varun Says:
Comment #83 November 24th, 2007 at 10:26 pm
The SudoQ mystery… solved:
I have never posted on Scott’s blog before and I am not even a regular reader, but I feel obligated to post this time because of all the confusion surrounding the Sudoku demo. I am one of the software engineers working in the applications group at D-Wave and yes… I did the sudoku demo!
Before going further, although irrelevant, I cannot stop myself from pointing out that out of all the amazingly smart people posting on this blog, just one (sanktnelson, Comment #78) actually comprehended the purpose of the sudoku demo.
Anyway… I just wanted to say (although I know that you guys are not just going to take my word for it) that the demo was genuine and is not all that hard if you think about it. If you remember from Geordie’s February demo, the 16-qubit QC is able to solve small maximum independent set (MIS) problems. So all I did was convert the sudoku into an MIS problem and devise a strategy to break large MIS problems into chunks that the 16-qubit hardware can accept. There are numerous (not necessarily smart) ways of doing that. This combination of algorithmic breaking-down and use of hardware to solve small problems is perhaps what Geordie was referring to as the “hybrid” approach.
Note that this also explains why Amin and Andrew may not know about the sudoku demo. We in the applications group view the hardware as a black box (although it is getting more and more gray) that can solve MIS problems. We don’t know how the hardware guys solve it… and the hardware guys don’t know what kind of problems we are using the hardware for. Displaying this ease of use was the primary intent of the demo (the table planner and sudoku): showing how real-world problems can be solved on the hardware even though they are different from the native language of the hardware, which is MIS.
Greg Kuperberg Says:
Comment #84 November 24th, 2007 at 10:38 pm
I did the sudoku demo!
All right, this is a start. I am going to guess that you are Varun Jain?
So all I did was convert the sudoku into an MIS problem and devise a strategy to break large MIS problems into chunks that the 16-qubit hardware can accept.
This too is a start, but there is a lot left to explain. You converted the Sudoku into an MIS problem. So the Sudoku became a graph, with how many vertices? A straight conversion might use 729
vertices, to represent the 9 possible values in each cell, minus perhaps the cells that are set in advance. Was that your conversion? What was the graph and how many vertices did it have?
Now, an MIS problem in the 16-qubit computer would have to be a subgraph of the 4×4 grid, right? How many vertices did each of these chunks have? And, the central question: How do you convert the
Sudoku graph into chunks?
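For concreteness, here is the straight conversion I have in mind (my sketch of the textbook reduction, emphatically not a claim about what your code does): one vertex per (row, column, value) triple, and an edge between any two triples that cannot both hold, so that a maximum independent set of size 81 is exactly a completed grid.

import itertools

def sudoku_to_mis(givens):
    """givens: dict mapping (row, col) -> value for the pre-filled cells."""
    # One vertex per consistent (row, col, value) triple: 9*9*9 = 729,
    # minus the vertices ruled out by the givens.
    verts = [(r, c, v)
             for r in range(9) for c in range(9) for v in range(1, 10)
             if (r, c) not in givens or givens[(r, c)] == v]
    # An edge whenever two triples conflict under the Sudoku rules.
    edges = set()
    for (r1, c1, v1), (r2, c2, v2) in itertools.combinations(verts, 2):
        same_cell = (r1, c1) == (r2, c2)
        same_row = r1 == r2 and v1 == v2
        same_col = c1 == c2 and v1 == v2
        same_box = (r1 // 3, c1 // 3) == (r2 // 3, c2 // 3) and v1 == v2
        if same_cell or same_row or same_col or same_box:
            edges.add(((r1, c1, v1), (r2, c2, v2)))
    return verts, edges

Even after the givens prune it, this graph has hundreds of vertices, which is the point: the interesting question is how it gets chopped down to fit 16 qubits.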
Greg Kuperberg Says:
Comment #85 November 24th, 2007 at 10:41 pm
Let me add also that if as you say, the conversion was “not all that hard”, then the simplest way to clear the cloud of suspicion entirely would be to post the software to the Internet. If it’s not
hard, then it’s unlikely to contain any trade secrets.
Varun Says:
Comment #86 November 24th, 2007 at 10:51 pm
Greg, I don’t know whether you have worked in any commercial industry, but if you have, then I am sure you know about NDAs and why I, or anyone at D-Wave, cannot post the software or details about the software online. What I can tell you is that there are numerous ways based on branch-and-bound to reduce problems like SAT and MIS into small, manageable chunks. In fact, branch-and-bound is the basis of the most successful SAT solvers.
Job (UG) Says:
Comment #87 November 24th, 2007 at 10:53 pm
The term PseudoKu just occurred to me; I wish I had been able to use it earlier.
Greg Kuperberg Says:
Comment #88 November 24th, 2007 at 11:03 pm
It’s not all that hard, but there is an NDA so you can’t explain it? Gee, now it is back to sounding bad. Your employer could easily release you from that NDA for a question as simple as basic MIS
conversions. Besides, it is a particularly aggressive use of NDA if the software is just a demo intended for journalists.
What I can tell you is that there are numerous ways based on branch-and-bound to reduce problems like SAT and MIS into small, manageable chunks.
I know that. But I also estimated that these “chunks” are so small that they are trivial, and that your software front end would then be doing all of the real work. That would not count as an honest
demonstration. As I said, it’s like making an omelette with the help of a four-year-old. It’s not honest to say that he made the omelette, if all that he actually did is fetch the eggs and onions.
I also said that I was not there and I do not know what really happened. Either you can explain it without violating NDA or you can’t.
I grant that it is a good first step that you claim authorship.
Tyler DiPietro Says:
Comment #89 November 24th, 2007 at 11:51 pm
I’m going against my better judgement in interjecting here, but…
“What I can tell you is that there are numerous ways based on branch-and-bound to reduce problems like SAT and MIS into small, manageable chunks. In fact, branch-and-bound is the basis of the most successful SAT solvers.”
That seems a bit condescending. I’m pretty sure most people reading this blog are aware that classical heuristics exist for approximating solutions to NP-hard problems. I don’t see how mentioning these comparatively elementary facts adds anything to the discussion of how D-Wave has achieved quantum speedups over the extant methods.
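To underline how elementary the generic idea is, here is a bare-bones branching recursion for MIS that hands any small-enough residual graph to a black-box solver. This is my own illustrative sketch (it has no bounding step, and it bears no known relation to D-Wave’s actual front end); the brute-force routine stands in for the 16-qubit hardware.

def mis(graph, Y=6, small_solver=None):
    """graph: dict vertex -> set of neighbours. Returns a maximum independent set."""
    if small_solver is None:
        small_solver = brute_force_mis
    if len(graph) <= Y:
        return small_solver(graph)                # the "hardware" call
    v = max(graph, key=lambda u: len(graph[u]))   # branch on a max-degree vertex
    # Branch 1: exclude v from the independent set.
    without_v = {u: nbrs - {v} for u, nbrs in graph.items() if u != v}
    best = mis(without_v, Y, small_solver)
    # Branch 2: include v, so delete v and all of its neighbours.
    banned = graph[v] | {v}
    with_v = {u: nbrs - banned for u, nbrs in graph.items() if u not in banned}
    cand = mis(with_v, Y, small_solver) | {v}
    return cand if len(cand) > len(best) else best

def brute_force_mis(graph):
    """Stand-in small solver: try subsets from largest to smallest."""
    from itertools import combinations
    verts = list(graph)
    for k in range(len(verts), -1, -1):
        for subset in combinations(verts, k):
            s = set(subset)
            if all(graph[u].isdisjoint(s - {u}) for u in s):
                return s
    return set()

The open question is not whether such a decomposition exists but how much of the work the classical front end ends up doing.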
Tyler DiPietro Says:
Comment #90 November 25th, 2007 at 12:04 am
Concatenating one last bit to my post:
“…or whether the quantum hardware had any non-trivial role in finding the solution.”
Varun Says:
Comment #91 November 25th, 2007 at 12:47 am
Greg, the software at D-Wave is a combination of state-of-the-art and numerous novel ideas. I would be very surprised if anyone would be authorized to publish it online or talk about its details. But your omelette analogy is a little off the mark. Here, the four-year-old is not just fetching the eggs and onions, but making a miniature omelette, so to speak… hence giving evidence that, when grown up, he/she will be able to make a full-fledged omelette (sorry… couldn’t think of anything better).
Tyler, I never meant to even suggest anything “quantum”. I was just explaining how the sudoku was processed before it was sent to the hardware. To repeat, I am a member of the applications group and I will not even pretend to know how D-Wave’s hardware works. In fact, my knowledge of quantum processing, AQC and physics is no more than that of your average computing science grad. And I don’t know where you even got the idea that D-Wave is claiming to get quantum speedups over existing methods. As far as I know, they are NOT claiming any such thing. In fact, from what I remember, in the February demo Geordie mentioned that currently the hardware is about 100 times slower than conventional machines. Anyway, it’s not my intent nor my responsibility to comment on the hardware aspect. I just wanted to try to clear up some confusion about the sudoku demo from the software perspective.
Tyler DiPietro Says:
Comment #92 November 25th, 2007 at 1:02 am
First, I agree that I should have been more careful with my objection. D-Wave isn’t claiming quantum speedups, at least not yet. But they did hype the Sudoku demo as a demonstration of “the first commercially viable quantum computer.” It would seem that if the device in question didn’t do anything that differed appreciably from already-developed classical methods, then doing such a thing is a bit disingenuous. I think this is somewhat close to Greg’s objection, but he can correct me if I’m wrong.
Greg Kuperberg Says:
Comment #93 November 25th, 2007 at 1:18 am
Greg, the software at D-Wave is a combination of state-of-the-art and numerous novel ideas.
No, you said it was “not all that hard if you think about it”; now you say that it is state-of-the-art and novel. From the beginning, the demonstration looked dishonest, while this looks like a contradiction.
How sophisticated can these prep methods really be, and yet allow the conclusion that a quantum computer solved the Sudoku? Because, the outside wisdom is that solving a Sudoku by computer is only
moderately complicated, on the level of a student project in an algorithms course. The more sophisticated it is, the less work is left for those 16 qubits. Even with off-the-shelf optimization
methods, no one has proposed a way in which the contribution from the qubits is more than trivial.
If you wrote the code, then you would know whether the 16 qubits contributed trivially or non-trivially to the solution of that Sudoku. But you keep coming back with evasive answers. It was supposed
to be an important demonstration, worthy of newspaper reports. But now the central question of how the qubits were actually used is walled off by NDA. We’re supposed to guess from hints because it’s
“not all that hard”, but on the other hand it’s hard enough that it’s proprietary.
It occurs to me that, if you really are Varun Jain, then you have almost your whole career ahead of you, and you don’t deserve the task of dishing up evasions. It’s good that you came forward, but
Rose shouldn’t make you cover for him. As I keep saying, since I wasn’t there, maybe there is an innocent explanation. But innocent or not, you deeply deserve permission to tell us what it is.
Greg Kuperberg Says:
Comment #94 November 25th, 2007 at 1:24 am
It would seem that if the device in question didn’t do anything that differed appreciably from the already developed classical methods
The objection is as follows: Clearly, and as Rose and Jain have both admitted, they need a classical computer to convert the Sudoku into subproblems that fit into 16 or fewer bits. In what way did
this non-quantum preprocessor do less than all of the work to solve the Sudoku? Never mind what the qubits did. Even if the qubits stood up and tap danced, it would not count as a quantum solution if
the preprocessor’s work was tantamount to a full solution.
Varun Says:
Comment #95 November 25th, 2007 at 2:16 am
If you wrote the code, then you would know whether the 16 qubits contributed trivially or non-trivially to the solution of that Sudoku. But you keep coming back with evasive answers.
I thought I was clear in my answer, but if that’s the exact sentence you are looking for, then yes, the 16 qubits did contribute non-trivially.
It occurs to me that, if you really are Varun Jain, then you have almost your whole career ahead of you, and you don’t deserve the task of dishing up evasions. It’s good that you came forward, but
Rose shouldn’t make you cover for him.
I am sorry… but to suggest that Geordie might be impersonating me or asking me to cover for him is, simply put, outrageous.
It’s Saturday night and I didn’t have much to do, so I was reading the blog and decided to contribute because of the accusations made against the sudoku demo specifically, as I had contributed to it. I have shared whatever I could, and it’s unfortunate that I couldn’t help answer your questions.
Sam Says:
Comment #96 November 25th, 2007 at 2:19 am
Obviously, they knew the solution to the Sudoku in advance, since it was a trivial problem. So it is not going to be easy for them to argue that the “quantum computer” solved it. Perhaps one can make a similar objection to factoring 15 = 3 * 5 — without looking very closely at the NMR pulse sequence and techniques, you’d have no idea whether the quantum computer had really solved the problem. Here
all those kinds of details are private, plus there is an additional layer of hype, plus the “quantum computer” model seems itself to be a bit vague. Myself, I’d never be convinced until I saw all
those details, and the chance of that is apparently zero.
On the other hand, D-Wave claims that they have a very modular system in which the “quantum computer” is a black box. So it should be possible to reveal the necessary details in the reduction without
compromising the main secrets.
Certainly, tap-dancing qubits would convince me (of anything).
Greg Kuperberg Says:
Comment #97 November 25th, 2007 at 2:38 am
I thought I was clear in my answer, but if that’s the exact sentence you are looking for, then yes, the 16 qubits did contribute non-trivially.
So you say, but no one has plausibly described such a non-trivial contribution. We all understand that 16 bits can in principle do an MIS on a very small graph, but that does not look like a
non-trivial contribution to solving a Sudoku.
But if you want to just leave it at that, that you as the programmer can vouch that the 16 qubits did no more than solve a sequence of MIS problems that is vastly smaller than the Sudoku, that is at least a partial (and negative) answer.
to suggest that Geordie might be impersonating me or asking me to cover for him is, simply put, outrageous.
I’m not suggesting either. I’m saying that the Sudoku demo looks like a shabby exaggeration, even more so if it is locked up in non-disclosure agreements. I’m saying that it’s really Rose’s
responsibility and you shouldn’t imitate his behavior, whether or not he asked you to.
Greg Kuperberg Says:
Comment #98 November 25th, 2007 at 2:48 am
Perhaps one can make a similar objection to factoring 15 = 3 * 5 — without looking very closely at the NMR pulse sequence and techniques, you’d have no idea whether the quantum computer had really solved the problem.
The difference is that 15 = 3*5 is an honest representation of that quantum computer’s capabilities. They didn’t claim that their computer can factor a 50-digit number with “help” from a classical
preprocessor. They also certainly shouldn’t lock up their NMR sequences with non-disclosure agreements; and as far as I know, they didn’t.
Varun Says:
Comment #100 November 25th, 2007 at 3:00 am
But if you want to just leave it at that, that you as the programmer can vouch that the 16 qubits did no more than solve a sequence of MIS problems that is vastly smaller than the Sudoku, that is at least a partial (and negative) answer.
umm… if you have to solve an MIS problem with X nodes and you have a solver that can solve MIS problems with Y (Y is much less than X) nodes, then I don’t see any other way but to use the solver to solve a sequence of Y-node MIS problems and reconstruct the answer to the original X-node problem. If you have a way to solve the X-node problem with the Y-node solver in one shot, then you may want to talk to the Clay Math people about your million dollars :/
Greg Kuperberg Says:
Comment #101 November 25th, 2007 at 3:07 am
I don’t see any other way but to use the solver to solve a sequence of Y-node MIS problems and reconstruct the answer to the original X-node problem.
The question is not what ways you can see; the question is what you actually did. You almost, but not quite, say that this is what you did. And another very important question is: what were X and Y in your Sudoku solver? Also, how many times, N, did your classical solver hand a subproblem to the qubits? I hope that you won’t tell me that these three integers are a trade secret, even just to solve a Sudoku.
Tyler DiPietro Says:
Comment #102 November 25th, 2007 at 3:17 am
“if you have to solve an MIS problem with X nodes and you have a solver that can solve MIS problems with Y (Y is much less than X) nodes, then I don’t see any other way but to use the solver to solve a sequence of Y-node MIS problems and reconstruct the answer to the original X-node problem.”
Presumably the solver in this instance is the quantum processor. From what I can gather it is essentially being fed “baked” subgraphs by a classical device to operate on, unless the reconstruction procedure is also carried out by the quantum processor. Barring the latter being true, it is difficult to see what kind of non-trivial contribution qubits could make here, especially given that the entire thing can be done classically.
Sam Says:
Comment #103 November 25th, 2007 at 3:19 am
I agree, knowing X, Y and N would be very helpful, especially Y.
“They didn’t claim that their computer can factor a 50-digit number with “help” from a classical preprocessor.”
And neither is D-Wave. D-Wave is claiming that their quantum computer can solve subproblems of an unspecified but certainly quite small size, in order to *help* a classical computer solve a 9×9 sudoku puzzle that the classical computer could solve in a microsecond and that I could probably solve in my head. (I don’t think exaggerating your argument helps any more than it does theirs. Maybe factoring 225 = 15^2 with classical assistance is a better analogy for D-Wave’s claim than factoring 50-digit numbers.)
Varun Says:
Comment #104 November 25th, 2007 at 3:21 am
The question is not ways that you see, the question is what you actually did. You almost, but not quite, say that this is what you did.
I thought I made it clear in my very first post. Here’s an excerpt:
So all I did was convert the sudoku into an MIS problem and devise a strategy to break large MIS problems into chunks that the 16-qubit hardware can accept.
Your second question:
And another very important question is: What were X and Y in your Sudoku solver?
X varies with every sudoku problem… but it is almost always more than 50 and typically is in the hundreds. Sorry again, but I am not sure if I am allowed to tell you what Y is. N also varies with the sudoku problem and how quickly the answer can be found. I can’t say what N is typically, but if I had to guess, I would say in the hundreds or even thousands. Basically, larger X in most cases means larger N.
Sam Says:
Comment #105 November 25th, 2007 at 3:24 am
“Barring the latter being true, it is difficult to see what kind of non-trivial contribution qubits could make here, especially given that the entire thing can be done classically.”
Tyler, “nontrivial” can have different meanings. I can certainly understand claiming that in factoring 15=3*5, the quantum computer did something nontrivial, even though it can obviously be done
classically. Complexity theorists study “parameterized nontriviality,” and in this case the unknown (secret?) number Y is that parameter.
Sam Says:
Comment #106 November 25th, 2007 at 3:30 am
Ah, so Y is a secret. That’s too bad, since it is so important. Can we play twenty questions?
1. Is Y = 5?
Sam Says:
Comment #107 November 25th, 2007 at 3:31 am
Sorry, the question should have read,
1. Is Y < 5 or > 5?
< signs are eaten by the blog, it seems.
Sam Says:
Comment #108 November 25th, 2007 at 3:32 am
As are equal signs.
Job (UG) Says:
Comment #109 November 25th, 2007 at 3:37 am
Will Y scale with an increasing number of qubits?
Tyler DiPietro Says:
Comment #110 November 25th, 2007 at 3:40 am
“Complexity theorists study “parameterized nontriviality,” and in this case the unknown (secret?) number Y is that parameter.”
Hopefully the complexity theorists can carve out plausible answers to whatever quantum procedure (is that even the correct term to use?) was used to solve the problem. The method I see above looks to be indiscernible from a classical heuristic with co-processor offloading, and of course the details of what went on are trade secrets. Oh well.
Greg Kuperberg Says:
Comment #111 November 25th, 2007 at 5:37 am
And neither is D-Wave. D-Wave is claiming that their quantum computer can solve subroutines of an unspecified but certainly quite small size, in order to *help* a classical computer solve a 9×9
sudoku puzzle that the classical computer could solve in a microsecond and that I could probably solve in my head.
No, that is what Rose and Jain are implying now, in this thread. It isn’t what Rose implied in the demo. The screen had a button that said, “quantum solver”; after pressing it, Rose said “the quantum
computer spits out the answer”.
I agree that Jain describes something different. Instead of a quantum computer spitting out any answer, he describes a classical computer that does 99% of the real work, and offloads trivial
sub-problems to a special purpose device with quantum features. The SPD finds maximum independent sets in very small graphs.
Now, this story has not yet been confirmed, but I agree that Jain’s answers sound honest, if incomplete. If they are really the truth, then the demo certainly wasn’t.
Also I owe this comment to Varun Jain: Thank you for your information.
Coin Says:
Comment #112 November 25th, 2007 at 5:49 am
Hi Scott/Greg,
Thanks for your help in response to my earlier post, there’s still a couple things I’m confused on so I’d like to return to this for a moment if it’s okay.
So I think I more or less understand what you’re describing about what an Adiabatic Quantum Computer would be doing if you just sat down and built one. However I’m a little bit confused because this
seems very different from “Quantum Computers” as they’ve been presented to me elsewhere. In QCs as I’ve understood them up to now, in terms of the actual algorithms that run on that QC, everything
that happens (I thought) is basically just operations on the probability distributions of the |0> and |1> states for the qubits– operations which can be decomposed into a series of Hadamard and
Toffoli gates or whatever. This is very different from the paradigm of the AQCs you’re describing– there’s no way I know to describe a Hamiltonian or an “energy state” for the execution of such a
normal-QC algorithm, or in general any way to discuss the “energy” of a bunch of qubits in such a QC at all.
This disconnect between what QCs are doing as I’ve heard it described in the past and what AQCs are doing as you describe them here makes perfect sense to me if this is just because AQCs and normal QCs
are different machines doing different things; however, the thing that’s confusing me a little is that I keep seeing references (here and elsewhere) to “the quantum adiabatic algorithm”, and the
wording makes it sound as if adiabatic quantum computation is not just something you get from designing a computing device governed by a hamiltonian– but is also a behavior which you can inspire a
“normal” quantum computer to perform if you just feed it the right algorithm. Does this “quantum adiabatic algorithm” terminology imply that there is actually some way to implement quantum adiabatic
computation as an algorithm on a “general” quantum computer? Or is the terminology simply meant to imply that there is a class of algorithms, called “quantum adiabatic algorithms”, which can be run
on an adiabatic quantum computer?
If the answer to “can you implement a quantum adiabatic algorithm as a ‘program’ on a normal quantum computer” is “yes”, then how do such things as “the Hamiltonian governing the system” and “the
ground state” gain meaning in the context of the simpler operations one normally sees quantum algorithms composed of?
And if the answer is “no”, then does this mean it is possible that since they’re not technically doing the same thing, then AQCs might not have the same complexity-class properties as QCs– for
example, is it possible that the set of problems computable by an AQC in polynomial time is not strictly equal to BQP?
I hope these questions aren’t too convoluted, thanks again…
Scott Says:
Comment #113 November 25th, 2007 at 6:13 am
Yes, you can implement the adiabatic algorithm on a standard quantum computer (e.g. with Hadamard and Toffoli gates).
But on top of that, the adiabatic algorithm also suggests new architectures for quantum computers, which might have better fault-tolerance properties.
It’s a theorem that QC and AQC are polynomially equivalent — i.e. they both lead to the same complexity class BQP. See this paper by Aharonov et al.
I actually agree with the D-Wave folks that, if we ever see practical quantum computers, there’s a good chance their “native” computational model will be very far from the gate model. But whether
that means adiabatic QC, cluster-state QC, anyonic/topological QC, or something else entirely I couldn’t venture a guess.
(Let me stress that all the architectures mentioned above are proven to be universal — meaning that, again, they all lead to the same complexity class BQP.)
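If it helps, here’s a toy illustration of the adiabatic algorithm by direct simulation (mine; an arbitrary 3-qubit instance with no gate decomposition, everything done with small dense matrices):

import numpy as np
from scipy.linalg import expm

# Sweep H(s) = (1-s)*H0 + s*H1 slowly from the easy Hamiltonian H0 to a
# "cost" Hamiltonian H1, and check that the state tracks the ground state.
n = 3                                              # 3 qubits, dimension 8
rng = np.random.default_rng(1)

X = np.array([[0.0, 1.0], [1.0, 0.0]])
def on_qubit(op, k):                               # embed a 1-qubit op at site k
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == k else np.eye(2))
    return out

H0 = -sum(on_qubit(X, k) for k in range(n))        # ground state: uniform superposition
H1 = np.diag(rng.permutation(2**n).astype(float))  # a random classical cost function

psi = np.ones(2**n) / np.sqrt(2**n)                # start in the ground state of H0
steps, dt = 2000, 0.05                             # total time T = 100
for j in range(steps):
    s = (j + 0.5) / steps
    psi = expm(-1j * ((1 - s) * H0 + s * H1) * dt) @ psi

target = np.argmin(np.diag(H1))                    # index of the H1 ground state
print("P(found the minimum):", abs(psi[target])**2)

For a tiny instance like this a modest total time should already suffice; the whole debate is about how T has to grow with the problem size.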
qv Says:
Comment #114 November 25th, 2007 at 7:06 am
I think there must be some formula for Sudoku complexity; maybe it’s like certain hypercube problems, which can be solved faster than (2^n)^0.5. Since a Sudoku has about 81 numbers, that’s about 10^81 variants, but about 26 or 27 numbers were already filled in, so there are about 10^56 possible variants to try. Even with a square-root speedup that would be 10^28 steps. But if the Sudoku puzzle is like some hypercube, as I saw in some papers, then it’s possible that solving it needs not (2^n)^0.5 = 10^28 steps but, say, (2^n)^{1/9} = (10^56)^0.1 = 398107 steps. I would really like to know the complexity of Sudoku when about 25 numbers are filled in. Also, I doubt that the 16-qubit quantum computer is only about 100 times slower than a Pentium; it’s probably thousands or maybe even millions of times slower. But for Sudoku solving it’s still possible that very little computational power is needed, if Sudoku complexity scales exponentially with size but is very easy for small instances. Maybe it could even be solved probabilistically, with the quantum computer always giving noisy probabilistic answers and the software producing a good answer after a few hundred or a thousand guesses…
Stas Says:
Comment #115 November 25th, 2007 at 7:21 am
I would also like to thank Varun for clarification. Varun, I’m especially greatful that you said this:
In fact, from what I remember, in the February demo Geordie mentioned that currently the hardware is about 100 times slower than conventional machines.
It would be so nice to have this mentioned in all those popular articles on D-Wave…
John Sidles Says:
Comment #116 November 25th, 2007 at 9:16 am
It is very nice to see a discussion with some light in it! So let me express thanks to all who have posted.
For me, the most interesting question is one that has not been asked on this thread, much less answered, and I would be interested in people’s comments on it.
Q: Can the workings of D-Wave’s computer be efficiently simulated with classical resources?
This question is interesting because (AFAICT) it is perfectly feasible for the following to be the case: “The D-Wave computer’s dynamics are not simulatable with classical resources, even though its
noise levels are high enough to quench all high-order quantum correlations.”
If that assertion were both physically true, and backed-up by innovative QIT analysis, that would imply that we are all living in a much more interesting world of quantum information processing, than
we previously imagined.
In this world (hopefully) people would lose interest in the too-blunt question “Is the D-Wave device a quantum computer?” The trivial answer being “Of course it is, because every physical computer is
a quantum computer.”
In particular, this would be a world in which D-Wave’s business plan conceivably made engineering sense! That is why engineers are so focused on issues of quantum simulation feasibility.
John Sidles Says:
Comment #117 November 25th, 2007 at 9:42 am
As a follow-up to the above, upon parsing Coin’s post, I want to acknowledge that he/she appears to be asking pretty much the same question: whether D-Wave’s noisy device can be efficiently simulated with classical resources.
Pointers to articles that discuss this question would be very welcome!
Sumwan Says:
Comment #118 November 25th, 2007 at 9:48 am
“I would also like to thank Varun for clarification. Varun, I’m especially greatful”
Mmm, that’s the first typo I’ve ever personally noticed in Scott’s writing.
Sumwan Says:
Comment #119 November 25th, 2007 at 9:51 am
Oops, sorry, my mistake. It is I who must be undergoing the transformation. The message was Stas’s, not Scott’s!
qv Says:
Comment #120 November 25th, 2007 at 10:23 am
Here is a more precise Sudoku formula: 9! * 9^9 = 3.6288*10^14. The 9! is because within one block the numbers can’t repeat, and the 9^9 is because there are 9 blocks, though this number can be smaller… And if about 25 numbers are already filled in, the number can be smaller still. 27 numbers can fill 3 blocks, so then you need 9! * 6^9 = 3.6569943*10^12 variants or steps. (2^16)^0.5 = 256, and 10^12/256 ~ 10^10, so if the processor runs at 10 GHz the whole calculation can be done in one second. But the number could also be another variant: 9! * 9! = 1.3*10^11, or more precisely 9! * 6! = 261273600 steps for a classical computer and about 2612736 steps for a 16-qubit quantum computer. If one step takes 10 ns, that’s enough time for both the quantum and the classical computer to solve such a Sudoku puzzle in about one second at 1 GHz.
Robin Blume-Kohout Says:
Comment #121 November 25th, 2007 at 12:06 pm
Your question has been brought up, though not quite so explicitly — see, e.g., Posts #12 and #15. Assuming that you mean “D-Wave’s future computers” rather than current technology, this is the
fundamental question, because it equates precisely to “Is this device’s computational power within BPP?”
Unfortunately, it’s also incredibly hard to answer — we don’t even know whether P=NP, right? And, yes, we have a good guess… but the point is that we don’t really know what can be classically
simulated. This is a point that came up in the discussion after Mohammad’s talk at IQC:
(1) It’s not really fair to say something is “quantum” just because one particular classical model doesn’t simulate it.
(2) On the other hand, we can’t say “Prove that there’s no classical simulation,” because nobody can do that.
So we end up back at “Is it a QC?” <==> “Does it solve problems thought to be outside BPP?” If you’re not happy with that (I’m not), then you can try to show some physical criterion for
non-simulatability. The best guess for that right now is [entanglement over O(poly(n)) qubits] + [arbitrary control] (see #30,32,40).
If that conjecture (“O(n)-qubit entanglement is necessary for nonsimulatability”) is true, then your conjecture would be false — but AFAIK nobody knows how to prove/disprove it.
Scott Says:
Comment #122 November 25th, 2007 at 12:31 pm
Robin, arbitrary entanglement across m qubits would let us factor O(m)-digit numbers, which we already don’t know how to do in BPP for any m >> log^3 n.
Robin Blume-Kohout Says:
Comment #123 November 25th, 2007 at 12:52 pm
Thanks again for chipping in!
Like Greg, I’m frustrated by the limitations on what you can say. I understand why you have restrictions — but I hope you also understand our frustration (us = academic community). We have no
interest in reverse-engineering your device, or figuring out inner workings — but we want to figure out whether your approach scales. Knowing how X, Y, and N scale is important to do that. There are
protocols that fit your description that are grossly nonscalable — for instance, I could just pass 2-vertex instances of MIS to the 16 qubits (“Are these vertices connected, or not?”). That’s Greg’s
“4-year old + omelette example”.
Greg Kuperberg Says:
Comment #124 November 25th, 2007 at 2:05 pm
In QCs as I’ve understood them up to now, in terms of the actual algorithms that run on that QC, everything that happens (I thought) is basically just operations on the probability distributions of
the |0> and |1> states for the qubits– operations which can be decomposed into a series of Hadamard and Toffoli gates or whatever.
That is a very good description, except that you left out a fundamental new feature: The distribution on the bit strings is not an ordinary distribution, but rather a quantum distribution. Quantum
mechanics rests on a generalization of probability theory that you can either call non-commutative probability or quantum probability. This generalization is the source of extra power for quantum
algorithms for certain problems. It isn’t because we are after faster gates or more bits. On the contrary, if I live to see a useful quantum computer, I might expect it to have, say, 100K good qubits
and a 1MHz clock. Such a computer would be no better than an Apple II for problems that don’t have a quantum algorithm. E.g. sorting or matrix multiplication. But for problems that do have a quantum
algorithm, like factoring, it could beat the entire Google data center.
Does this “quantum adiabatic algorithm” terminology imply that there is actually some way to implement quantum adiabatic computation as an algorithm on a “general” quantum computer?
Yes there is. It’s not at all obvious in the context of this discussion, but once you learn quantum probability, it’s actually the easier half of the equivalence, provided that the Hamiltonian is local.
how do such things as “the Hamiltonian governing the system” and “the ground state” gain meaning in the context of the simpler operations one normally sees quantum algorithms composed of?
Basically, by simulation. A classical computer can simulate a classical physical system, e.g., planetary orbits. Likewise a quantum computer can simulate a quantum physical system, e.g., molecular
orbitals. The task of simulating a quantum system was Feynman’s original motivation for proposing quantum computation.
does this mean it is possible that since they’re not technically doing the same thing, then AQCs might not have the same complexity-class properties as QCs
People were worried that the set of feasible problems for an AQC is strictly smaller than BQP, but they found a reverse simulation that shows that it isn’t. It looks a lot like the theorem that
simulated annealing is a universal algorithm in classical computation. (And sometimes both algorithms deserve the more derogatory name, “the universal solvent”.)
Greg Kuperberg Says:
Comment #125 November 25th, 2007 at 2:33 pm
I understand why you have restrictions — but I hope you also understand our frustration (us = academic community). We have no interest in reverse-engineering your device, or figuring out inner
workings — but we want to figure out whether your approach scales.
This is one question, but not the main one for me. I want the truth behind the fiasco press conference in February. My motivation is personal integrity, not scalability of hardware. There is very
little reason to believe that the plan that they describe — energy minimization without worrying about coherence — scales. The truth would have to be better than they describe in order to scale. My
interest is in finding out how much less is true than was said in the press conference.
If Varun Jain is describing what really happened, then it means that there wasn’t quite zero truth in the act of pressing the “quantum solver” button and saying that “the quantum computer spits out
the answer”. It was maybe 20% true. And I don’t mean to say that I am highly suspicious of what Varun says. Not at all; he sounds honest.
The reason that I am hedging and basically being a jerk about it is a conversation I had on David Bacon’s blog. Part of the conversation went exactly the same way as the one here: Rose said, well you
could convert Sudoku to MIS and you could chop that into smaller MIS. When I asked him whether I had his word that that is the way that the Sudoku was solved, he said no, it was all hypothetical;
actually they do something “considerably more sophisticated”.
Given this behavior from the CTO, we still have to take Varun Jain’s answer as provisional, for several reasons. For instance, we don’t know what happened to the little subgraphs that he says were
passed to the 16 qubits, since he says that the hardware isn’t his department.
There are protocols that fit your description that are grossly nonscalable — for instance, I could just pass 2-vertex instances of MIS to the 16 qubits (“Are these vertices connected, or not?”).
That’s Greg’s “4-year old + omelette example”.
If the protocol described is correct, then it couldn’t have been much different than that. The 16 qubits only have near-neighbor interactions, and (according to Rose’s description), there exist
graphs with as few as 7 vertices that can’t be embedded. MIS with 6 vertices is trivial even compared to Sudoku, and is much less work than figuring out how the graph fits into the 4×4 grid. (Of
course, graph embedding is also NP-hard, but complexity theory is an overly fancy description of any part of this story.)
Dave Bacon Says:
Comment #126 November 25th, 2007 at 2:37 pm
Even simpler than raising the temperature (which might be more difficult than it sounds), you need to check whether you are actually gaining anything by doing adiabatic evolution: http://
John Sidles Says:
Comment #127 November 25th, 2007 at 4:43 pm
Robin Blume-Kohout Says: … The point is that we don’t really know what can be classically simulated.
That is an excellent point, and one which accounts for much of my own interest in complexity theory, and quantum information theory, and this D-Wave thread in particular.
To phrase this point in engineering terms, “Humanity’s ability to simulate quantum systems with classical resources has been improving (roughly) exponentially. How much longer can this improvement
continue? What are the fundamental limits, both mathematical and physical?”
This 21st Century “Moore’s Law of quantum system simulation” has striking similarities to the Moore’s Law of computer performance (which has a fine Wikipedia page): the metrics are vague, the reasons
for the law’s empirical truth are unclear, the economic and resource implications are large, no one knows how much longer the law will continue to hold, and if the law does continue to hold for
another generation, the consequences for humanity—already profound—will be incalculable.
John Sidles Says:
Comment #128 November 25th, 2007 at 5:44 pm
And as a practical postscript to the above, isn’t the continued exponential increase in quantum simulation capabilities (arguably) the most realistic hope of young QIT researchers for more QIT jobs
to be created?
Seriously, is there any other comparably realistic hope?
Coin Says:
Comment #129 November 25th, 2007 at 6:24 pm
Greg/Scott, thanks for the clarifications!
Tyler DiPietro Says:
Comment #130 November 25th, 2007 at 6:34 pm
“Seriously, is there any other comparably realistic hope?”
I would think that the job market in QIT and QC all depends on whether quantum computing capabilities can scale up to the point of providing advantages in more general problems. If it stays in the
current narrow crypto/sim application space, then you’re probably right. If it can expand into more areas then job opportunities for researchers will be ubiquitous. I’ve heard some noise around about
research into using quantum algorithms for image processing…
John Sidles Says:
Comment #131 November 25th, 2007 at 7:53 pm
Tyler DiPietro says: If QIT can expand into more areas then job opportunities for researchers will be ubiquitous.
Tyler, when it comes to job creation, I encourage you not to define QIT and QC too narrowly, or to underestimate the opportunities for cross-disciplinary fertilization. Because if QIT is defined
broadly, then the opportunities for job-creation are indeed remarkably great.
Here is a mathematical assertion that is constructed to illustrate this point: Slater determinants are Grassmannians presented via the Plücker embedding as algebraic varieties that are Kähler
manifolds having an Einstein metric.
By construction, the starting link “Slater determinants” points toward the vast literature on computational chemistry … which at the present time is an enormously vigorous engine of job and resource creation.
Every link following “Slater determinant” points to a Wikipedia page that provides an undergraduate-level entrée to a vast mathematical literature … the literature on Kähler manifolds alone is about
as large as the entire literature on quantum measurement theory, for example.
Amazingly, there is apparently almost no cross-referencing between these two communities (and I would be *very* grateful for counter-examples).
QIT is among the disciplines best-poised to bridge this disciplinary divide, and challenges like “Can the D-Wave computer be efficiently simulated by classical resources?” are IMHO important and
well-posed challenge problems that advance this bridging objective admirably.
All of the above is true, provided that the QIT community does not define itself narrowly.
There is admittedly some risk of “interdisciplinary overload” but … well … welcome to the 21st Century.
By the way, the above mathematical assertion derives largely from help and advice to our QSE Group from Joshua Cantor, who deserves most of the credit for the parts that are right. Any parts that are
mathematically wrong or misleading are mine, for sure.
Jack in Danville Says:
Comment #132 November 25th, 2007 at 8:59 pm
Can you (or somebody) clarify the common usage of the notation “log”? The last formal math education I had was in the days when calculators were replacing slide-rules (a very brief window in time).
Back then “log” (un-subscripted) was supposed to be reserved for base-10 (owing to its importance in practical calculation, I imagine). Since base “e” had “ln”, any other log base was supposed to be
specified by the subscript.
Yet much of the literature in CS, information theory, and even other areas, appears to be using “log” to be base-2. If I recall correctly, when I read Shannon’s famous paper, I was convinced he was
using “log” to mean base-2, and that was written in the heyday of the slide-rule. I know some authors attempted to introduce “lg” to specify base-2, but it apparently never caught on.
dave tweed Says:
Comment #133 November 25th, 2007 at 9:02 pm
John Sidles appeared to be talking about exponential increases in the ability to simulate quantum systems, and Tyler DiPietro expanded this to solving general problems using quantum strategies such
as image processing.
For a more general application of quantum-augmented computing strategies there’s the question not only of whether quantum computing scales but whether, as a general rule of thumb, the exact answers
and nothing else that (to my knowledge) quantum algorithms tend to give are useful in these information processing tasks. E.g., in factorising L=M*N you need to get M and/or N, whereas I recall
reading that proteins generally don’t attain the true global minimum energy folded configuration but a sufficiently stable local minimum partially based on intermediate folding steps (sorry no cite).
Certainly the limited image processing stuff I’ve seen is more “2-D exact database recognition/retrieval” rather than dealing with the complexities of real image modelling (maybe I’m out of date here).
This is not to disparage quantum computing as a worthwhile subject of study and development, merely that IF it turns out to be good precisely at solving, incredibly quickly, problems no-one deeply
cares about, it won’t become widespread.
Tyler DiPietro Says:
Comment #134 November 25th, 2007 at 9:30 pm
Thanks for the insightful comment and links. My only response is to say that I don’t intend to narrowly define QC or QIT, I was only noting that such a thing happening was a possibility. I would like
nothing more than to see QC scale-up and expand into more general purpose computation.
From what I know, the most common logarithm base in computer science is indeed base 2, but in quantum computing you also encounter Euler’s number quite frequently (complex vector spaces and all that).
Scott Says:
Comment #135 November 25th, 2007 at 11:49 pm
Jack: Normally we don’t even bother to specify a base, since all logs are equivalent up to constant factors (e.g. to get from log[10](n) to log[2](n), just multiply by 3.32). If we do specify a base
it’s usually base 2.
Alex Says:
Comment #136 November 26th, 2007 at 1:13 am
A somewhat different subject which came to mind from Scott’s comment #113:
I’ve seen a lot of papers saying that the cluster state model of QC is equivalent (in that it can efficiently simulate standard QC). Yet cluster QC uses gates from the stabilizer group meaning that
it can be efficiently simulated classically, and by implication cannot efficiently simulate standard QC (for the time being)? What point am I missing about cluster state QC?
Scott Says:
Comment #137 November 26th, 2007 at 1:39 am
Alex, the point you’re missing is that cluster-state QC does not only use gates from the stabilizer group: the original Raussendorf-Briegel paper used arbitrary 1-qubit gates. I think you can do it
with only 1-qubit stabilizer gates, but in that case you’ll need to start with a cluster state that’s not a stabilizer state.
(It’s been known for a long time that stabilizer gates actually are universal, given certain non-stabilizer initial states as ancillas.)
Alex Says:
Comment #138 November 26th, 2007 at 2:08 am
Thanks Scott
I didn’t know about the arbitrary rotations, should paid closer attention. More questions (please excuse my ignorance):
1- Isn’t a cluster a graph state (almost by definition), and isn’t a graph state always a stabilizer state (again almost by definition)?
2- have there been any results on the complexity of preparing those ancilla states?
Sam Says:
Comment #139 November 26th, 2007 at 3:30 am
“Alex, the point you’re missing is that cluster-state QC does not only use gates from the stabilizer group: the original Raussendorf-Briegel paper used arbitrary 1-qubit gates. I think you can do it
with only 1-qubit stabilizer gates, but in that case you’ll need to start with a cluster state that’s not a stabilizer state.”
A minor correction: Cluster-state QC uses *measurements* that are not Pauli measurements. It does not use any gates at all, aside perhaps from the stabilizer gates to prepare the cluster/graph state.
1. Yes, a cluster state is a graph state, and a graph state is a stabilizer state (with a special form).
2. Graph states can be prepared in time = # of edges, assuming perfect stabilizer gates (controlled-phase gates). (The depth of the circuit used to prepare the graph state can be = max degree of a
vertex. Note: “time” = “circuit size” ~ depth * n.) General stabilizer states on n qubits can be prepared certainly in time n^2, and small factors can be shaved off this (Scott mentions this in one
of his papers).
Scott Says:
Comment #140 November 26th, 2007 at 4:57 am
Alex, don’t get hung up on definitions. If you want to do universal QC using 1-qubit stabilizer measurements only, then you’ll need an initial state that’s like a cluster state but is outside the
stabilizer set.
The states used in cluster-state QC can be prepared in linear time. (Dan Gottesman and I showed that arbitrary stabilizer states can be prepared in n^2/log(n) time, and that this is tight.)
Sam, I think of a measurement as a gate followed by a standard-basis measurement.
Joe Says:
Comment #141 November 26th, 2007 at 8:26 am
2- have there been any results on the complexity of preparing those ancilla states
Actually you just need single qubits prepared in the state 1/sqrt(2)|0>+(1+i)/2 |1> to be added to the graph, one for each pi/8 gate you want to produce, so the time is constant. The corrections for
these gates will be a Clifford group operator, which can be applied later (by changing one Pauli basis measurement to another). So the complexity of creating these states is constant.
For the teleportation based non-Clifford group gates used in fault-tolerant computation the ancilla state must grow in size as the encoding does, and so there the answer is less trivial (I point you
back to Scott’s comment above).
Matt Says:
Comment #142 November 26th, 2007 at 11:25 am
Just a comment from left field (a solid-state physicist), to note that the ‘classical limit is high temperature, not long times’ claim is somewhat dicey. A classic Ph.D. orals question in
solid-state is to note first that the particles in a superconducting electron pair are physically traveling at high velocities in opposite directions (i.e., k2 = -k1) and that a superconductor has a
finite coherence length. Therefore, an electron pair that is contributing to the condensate at time t will have broken apart and flown away at time t + delta– and, therefore, no longer contributes to
the condensate. The question is, in what sense, now, do the electron pairs form a condensate?
The answer is that, at each instant in time, one can find enough electron pairs to form a condensate– since electrons are all identical, it doesn’t matter exactly which ones you use. However, this
argument relies on having a lot of electrons in the ensemble– if you had only 16, it wouldn’t work.
John Sidles Says:
Comment #143 November 26th, 2007 at 12:51 pm
Matt, an article that establishes links between QIT, solid-state physics, and quantum simulation efficiency, is Classical spin models and the quantum stabilizer formalism.
Joe Says:
Comment #144 November 26th, 2007 at 1:15 pm
John: Presumably the entire PEPs and matrix product state literature provide the kind of link you are talking about.
John Sidles Says:
Comment #145 November 26th, 2007 at 3:27 pm
Joe says: Presumably the entire PEPs and matrix product state literature provide the kind of link you are talking about.
PEP is a new acronym to me, but as for matrix product states, these state-spaces too are simply algebraic joins.
From both a quantum information theory point of view, and a geometric point of view, it is mysterious why so many different algebraic quantum state-spaces work so well in practical simulations—this
is the flip side of the earlier statement on this thread, to the effect that “we don’t know very much about what quantum systems can be simulated.”
The growth in the quantum simulation literature reflects this empirical success: it obeys Moore’s law remarkably well. For example, here is the growth in references to density functional theory (DFT,
which is a quantum chemistry technique):
DFT articles in Inspec:
((“density functional” or “quantum density”) WN KY)
9 for 1965-1969
33 for 1970-1974
262 for 1975-1979
719 for 1980-1984
1747 for 1985-1989
3557 for 1990-1994
7819 for 1995-1999
14695 for 2000-2004
9035 for the last two years
The above growth rate is about 23% per year, sustained over forty years … which is a pretty remarkable track record.
If it continues for another 40 years … well … it’s about one article published per second on DFT (and its successors).
This neatly solves the QIT job problem, but at a price: humanity won’t have time for any activities other than writing and reviewing articles on quantum simulation and quantum information theory.
More seriously, these numbers reinforce that if capabilities for simulating quantum system with classical resources continue to improve exponentially, the consequences for humanity will be profound.
So it is a very important question: How long will it be, before simulation techniques begin to hit fundamental QIT limits? The possibility of learning these limits is one reason why QIT is so
interesting to me personally, and so important to everyone practically.
qv Says:
Comment #146 November 26th, 2007 at 4:00 pm
If quantum computer is imposible then nothing to do with quantum simulation also becouse then it’s means that all tryings to simulate quantum systems are wrong and need search another method to
simulate quantum systems and which then probably don’t scals exponentionaly with number of particles.
Job (UG) Says:
Comment #147 November 26th, 2007 at 6:00 pm
Really qv you ought to try to phrase things in a more readable manner, the topic is complex enough as it is.
John Sidles Says:
Comment #148 November 27th, 2007 at 7:10 am
QV says: “If quantum computer is imposible then nothing to do with quantum simulation …”
Suppose we take “do” to be an active verb, and parse QV’s assertion as “If quantum computing is either technically infeasible, or feasible but only marginally more powerful than classical
computation, then quantum simulation has no useful or interesting applications.”
We can then observe that QV’s assertion neglects the already-huge, and exponentially growing, impact of quantum simulations in everyday life.
This enormous impact represents the confluence in quantum simulation of three inexorable trends.
The first is the trend toward technologies that are smaller, colder, faster, more efficient, and more sensitive. As more technologies press against quantum and thermodynamic limits, the need for
quantum analysis and simulation has become more widespread.
The second is the technical necessity of simulation: as technologies become more complex, “simulate it before you build it” becomes a practical necessity (this is true for both classical and quantum
system engineering).
The third is the social utility of simulation: as communities become globalized, simulation emerges as a powerful social tool for binding them together, in a way that is simultaneously creative,
disciplined, and productive.
Companies like Boeing and Intel find that simulation technologies serve the optimization, confidence-building, and social binding purposes very admirably—teams around the globe commit to simulated
787s and Penryn processors, preliminary to committing to fabricate them, and everyone has confidence that the airplanes will fly and the processors will compute.
Advances in both classical and quantum simulation therefore are beginning to play a unifying role in scientific, engineering, and team-building, that has become absolutely central to pretty much all
modern enterprises.
This is a point that enterprises like Boeing, Intel, and IBM have grasped far more thoroughly than the universities, IMHO.
John Sidles Says:
Comment #149 November 27th, 2007 at 7:42 am
As a follow-up to the above, and as a contrast to D-Wave’s rather opaque press releases, it is fun to listen to Intel Chip-Chat (not least because the retro-70s musical theme is hilarious IMHO).
A good example is the recent Chip-Chat Episode 11c: Introducing 32nm Logic Technology. Note Intel’s explicit emphasis on simulation-based design rules that are then experimentally validated in
prototype devices. The social role of simulation in binding Intel’s global enterprise is stated only implicitly … but it is at the beating heart of Intel’s enterprise culture, and of many similar
technology-driven companies.
Perhaps soon, D-Wave (or some similar company) will give detailed press conferences like Intel’s … to understand D-Wave’s design rules even in outline, would be a major advance for the whole community.
As for quantum system engineering considered more broadly, well, those enterprises are already launched, and they are emerging as strategically central to the global economy.
Math, science, engineering, technology, and jobs … this to me is what a vigorous science and technology ecosystem looks like.
Michael J. Biercuk Says:
Comment #150 November 27th, 2007 at 9:22 am
Hello All,
I wanted to add one observation regarding the omelette analogy. I think I can revise it – I dare not say refine – such that it also includes some criticisms of the hardware. My apologies in advance
if the following sounds a bit insensitive, but it should get the point across.
Imagine you wanted to make an omelette, and for some reason that was a reasonably complex task – it takes a while for ingredient prep, the eggs tend to stick, and the like. Now imagine that some
group of scientists postulated that 4-year-olds, under certain conditions, might be able to make perfect omelettes dramatically faster than adults. So you call on your 4-year old cousin, make an
omelette and announce to the world that you have accomplished the tremendous technical feat of realizing the scientists’ vision.
But then the questions start. Well, the omelette was really rather crude – just one egg, no filling, bits of shell – but it was similar qualitatively to the more complex western omelettes that
everyone wishes they could make more efficiently. Ok, ok, the 4-year old didn’t really make the omelette so much as help with one part of the process, and you pulled it together enough to do the rest
(it was a crude omelette after all). What was the cousin’s contribution? He handed you the eggs.
But that’s not the end of the story. Your cousin is in a persistent vegitative state, and while scientists believe it is possible that such a 4-year old could contribute meaningfully to omelette
production, no one really knows how. Such a child has to be brought back to consciousness, and made to interact with his/her surroundings in a particular way to help. But all anyone has ever achieved
with a comatose four-year old is getting them to blink their eyes. Proof enough that they might be useful one day, but not quite the same as being ready to make your omelette for you.
So you have admitted that the child only passed you the eggs, but you come under withering criticism for the vegitative state part. You ensure everyone that you really know that your cousin handed
you the eggs and you just can’t share the proof because you’re both under NDA.
But there is one more issue. Your cousin has a full-time nanny who is very helpful, and can do lots of menial tasks, so long as they aren’t too complicated. Under pressure you admit that you haven’t
really seen “proof” that your 4-year-old cousin passed you the eggs. Indeed, you turned your back on your cousin and his/her nanny, waited, and then the eggs were right next to you. You’ve done a lot
of theoretical work to show that the four-year-old in a persistent vegetative state could have gotten up and handed you the eggs, but you don’t really know for sure, and you also can’t rule out
assistance from the nanny.
But you go on telling everyone that you have realized the dream, and are ready to sell restaurants access to your cousin for their omelette making tasks. So long as they never see what he/she is
actually doing and just send out for the omelettes to be prepared in Vancouver…
Greg Kuperberg Says:
Comment #151 November 27th, 2007 at 12:15 pm
So, Michael, what are you saying about the 16 qubits? That they didn’t show much quantum behavior? Or are you saying that their classical controller — presumably this “nanny” that you describe — did
MIS for 6-vertex graphs instead of having the 16 qubits do it?
Michael J. Biercuk Says:
Comment #152 November 27th, 2007 at 1:13 pm
Hi Greg,
I’m saying two things – the classical controller did the majority of the work, and what was left for the “quantum” part may very well have been achieved in a purely classical way. The nanny is the
classical process which may masquerade in place of the quantum part. “You”, as the true omelette chef are the classical controller performing the bulk of the work, while the true contribution from
the four-year-old is very difficult to discern.
John Sidles Says:
Comment #153 November 27th, 2007 at 2:15 pm
Michael, I think all the posters on this thread would agree that even a sketchy description of D-Wave’s design rules (both hardware and algorithm-related) would be very welcome.
Here “sketchy” means, “the minimum sufficient to suggest concrete avenues for research.” Most technology companies find ways to do this … mightn’t D-Wave manage it too?
Michael J. Biercuk Says:
Comment #154 November 27th, 2007 at 5:33 pm
I think the design rules would indeed be useful for this simple algorithmic implementation. However, on the hardware side, we know roughly how they make their devices based on published papers in
PRL, and the JPL fab process. However, we do not have a clear understanding of the “quantumness” of operations on these devices. DWave has produced an interesting, but thus-far unconvincing theory to
suggest why operations on their system are quantum. However, a demonstration of algorithmic performance plus a theoretical explanation of why the behavior could be quantum is startlingly
insufficient. Instead, definitive experimental evidence that the devices are performing quantum rather than classical analog operations is required (e.g. Chuang or Bacon’s ideas on varying
temperature). Otherwise, aside from being disingenuous at best, it is impossible to say if such devices have any chance of working in more complex forms.
Greg Kuperberg Says:
Comment #155 November 28th, 2007 at 1:52 am
Michael: I get what you are saying now. Maybe you could be more artful with your analogy, especially since it came from my analogy.
Let us say, more simply, that D-Wave boasted that they have a 2-year-old child prodigy who can make an omelette. (That is, solve a Sudoku.) But the omelette was actually made by a house cook who only
asked the “prodigy” to fetch the eggs and onions. They insist that the toddler has special mental powers even though he needs a lot of help in the kitchen, but actually he seems entirely ordinary. (That is,
Now another question is whether there actually was a nanny (meaning other supporting hardware) who stepped in for the toddler to fetch the eggs and onions. Or if on the day of the demo, the house
cook didn’t bother talking to the toddler at all because it was too much trouble. Varun Jain did what he could to make the story sound more honest, but his side of it only goes so far.
Tyler DiPietro Says:
Comment #156 November 28th, 2007 at 2:10 am
I’m surprised that with the omelette analogy being fully solidified in this thread, no one has made any jokes about how D-Wave might be thinking that you can’t make an omelette without breaking a few
eggs. Fill in the details.
Michael J. Biercuk Says:
Comment #157 November 28th, 2007 at 9:17 am
Oh Tyler…
Greg – sorry for not being more artful. I don’t really have a significant ability in that area. Note that I didn’t say I was refining your analogy – just revising it. But also note that the nanny in
my version isn’t just supporting classical hardware/software. It’s a completely parallel classical computational path using the superconducting hardware – thermal annealing. And it may have actually
done what little work was attributed to the “quantum computer.”
Herb Ivorous Says:
Comment #158 November 28th, 2007 at 9:24 am
Analogies and metaphors can add to confusion. For example, physicists’ analogy of a rubber sheet for spacetime is misleading. They show a ball depressing a rubber sheet and say that gravitation is
the result of the sheet’s shape. This is not conducive to understanding. The omelette analogy is also a mistake.
Michael J. Biercuk Says:
Comment #159 November 28th, 2007 at 10:53 am
I disagree. While a detailed technical discussion among an audience entirely composed of experts should not use analogies, the breadth of topics in question suggests that analogies may have a useful
role in highlighting how the pieces fit together in this discussion. Analogies serve to assist in understanding general concepts, and their use should be limited to such purposes, as is done here.
Jonathan Vos Post Says:
Comment #160 November 28th, 2007 at 12:06 pm
Analogies are very important in Science and Mathematics, just as a paper without a “narrative” is harder to get emotionally involved with.
In the very old days, I think that I could have convinced Euclid (if I spoke Greek) and much later, Newton, that in Euclidean space:
“An analogy is a parallelogram in semantic space. To say A is to B as C is to D, where A, B, C, D are statements about the physical universe, social universe, or an abstraction, can be translated:
The line segment joining A to B is parallel to and the same length as the line segment joining C to D.”
This generalizes to vector spaces, or bimodules for noncommutative cases, and further to metric spaces. One actually sees this done, sort of, in Web 2.0 pages that analyze web pages and make 2-D or
3-D maps of the keywords or sub-domains based on a metric of distance.
In that Euclidean sense of analogies, what is an analogy between analogies? Is it a 3-D parallelepiped? How does that generalize?
HOW DO YOU USE OPEN STUDY?
close this msg me ur questions plz
Or ask an ambassador
Hey :) What are you trying to get help with?
This may or may not be a very complicated question o.o
Btw, this is the math group and posting off-topic questions is not allowed on OS. So you would have to close this up. Please check the CoC: http://openstudy.com/code-of-conduct
hey @Traphik427 FIRST OF ALL \[\huge \color{green}{\text{WELCOME TO OPENSTUDY!}}\]
can you first close this question by clicking the close button? We have group chats where we can talk about general matters. When we ask a question it needs to be on the respective group topic. In this case we are now in the mathematics group.
thanks :)
Look at the figure. Based on the figure, which pair of triangles is congruent by the Side Angle Side Postulate? Now how do I apply the drawing?!
@Traphik427 I think it would be a better idea if you post that as a new question by clicking Ask in the top left corner
that way more users would help you get to the answer
Omg thank you & I know how to insert pics now!!
good :) see ya around.. happy openstudying.. and don't forget to click Best Response when someone helps you with your question.. that would be the best way to thank them.
Okay. thank you all for welcoming me!! ♥
yw :)
have a good time here!
Getting Around
exit([code])
Quit (or control-D at the prompt). The default exit code is zero, indicating that the processes completed successfully.
quit()
Calls exit(0).
atexit(f)
Register a zero-argument function to be called at exit.
isinteractive()
Determine whether Julia is running an interactive session.
whos([Module,] [pattern::Regex])
Print information about global variables in a module, optionally restricted to those matching pattern.
edit(file::String[, line])
Edit a file optionally providing a line number to edit at. Returns to the julia prompt when you quit the editor. If the file name ends in ”.jl” it is reloaded when the editor closes the file.
edit(function[, types])
Edit the definition of a function, optionally specifying a tuple of types to indicate which method to edit. When the editor exits, the source file containing the definition is reloaded.
require(file::String...)
Load source files once, in the context of the Main module, on every active node, searching the system-wide LOAD_PATH for files. require is considered a top-level operation, so it sets the current
include path but does not use it to search for files (see help for include). This function is typically used to load library code, and is implicitly called by using to load packages.
reload(file::String)
Like require, except forces loading of files regardless of whether they have been loaded before. Typically used when interactively developing libraries.
include(path::String)
Evaluate the contents of a source file in the current context. During including, a task-local include path is set to the directory containing the file. Nested calls to include will search
relative to that path. All paths refer to files on node 1 when running in parallel, and files will be fetched from node 1. This function is typically used to load source interactively, or to
combine files in packages that are broken into multiple source files.
include_string(code::String)
Like include, except reads code from the given string rather than from a file. Since there is no file path involved, no path processing or fetching from node 1 is done.
evalfile(path::String)
Evaluate all expressions in the given file, and return the value of the last one. No other processing (path searching, fetching from node 1, etc.) is performed.
Supports conditional inclusion of a package or module. Equivalent to using name in a file, except it can be inside an if statement.
help(name)
Get help for a function. name can be an object or a string.
apropos(string)
Search documentation for functions related to string.
which(f, args...)
Show which method of f will be called for the given arguments.
Show all methods of f with their argument types.
methodswith(typ[, showparents])
Show all methods with an argument of type typ. If optional showparents is true, also show arguments with a parent type of typ, excluding type Any.
Types
super(T)
Return the supertype of DataType T
subtype(type1, type2)
True if and only if all values of type1 are also of type2. Can also be written using the <: infix operator as type1 <: type2.
<:(T1, T2)
Subtype operator, equivalent to subtype(T1,T2).
subtypes(T)
Return a list of immediate subtypes of DataType T. Note that all currently loaded subtypes are included, including those not visible in the current module.
subtypetree(T)
Return a nested list of all subtypes of DataType T. Note that all currently loaded subtypes are included, including those not visible in the current module.
typemin(type)
The lowest value representable by the given (real) numeric type.
typemax(type)
The highest value representable by the given (real) numeric type.
realmin(type)
The smallest in absolute value non-subnormal value representable by the given floating-point type
realmax(type)
The highest finite value representable by the given floating-point type
maxintfloat(type)
The largest integer losslessly representable by the given floating-point type
sizeof(type)
Size, in bytes, of the canonical binary representation of the given type, if any.
eps([type])
The distance between 1.0 and the next larger representable floating-point value of type. The only types that are sensible arguments are Float32 and Float64. If type is omitted, then eps(Float64) is returned.
eps(x)
The distance between x and the next larger representable floating-point value of the same type as x.
promote_type(type1, type2)
Determine a type big enough to hold values of each argument type without loss, whenever possible. In some cases, where no type exists which to which both types can be promoted losslessly, some
loss is tolerated; for example, promote_type(Int64,Float64) returns Float64 even though strictly, not all Int64 values can be represented exactly as Float64 values.
promote_rule(type1, type2)
Specifies what type should be used by promote when given values of types type1 and type2. This function should not be called directly, but should have definitions added to it for new types as
getfield(value, name::Symbol)
Extract a named field from a value of composite type. The syntax a.b calls getfield(a, :b), and the syntax a.(b) calls getfield(a, b).
setfield(value, name::Symbol, x)
Assign x to a named field in value of composite type. The syntax a.b = c calls setfield(a, :b, c), and the syntax a.(b) = c calls setfield(a, b, c).
fieldoffsets(type)
The byte offset of each field of a type relative to the data start. For example, we could use it in the following manner to summarize information about a struct type:
structinfo(T) = [zip(fieldoffsets(T),names(T),T.types)...]
fieldtype(value, name::Symbol)
Determine the declared type of a named field in a value of composite type.
isimmutable(v)
True if value v is immutable. See Immutable Composite Types for a discussion of immutability.
isbits(T)
True if T is a “plain data” type, meaning it is immutable and contains no references to other values. Typical examples are numeric types such as Uint8, Float64, and Complex{Float64}.
isleaftype(T)
Determine whether T is a concrete type that can have instances, meaning its only subtypes are itself and None (but T itself is not None).
typejoin(T, S)
Compute a type that contains both T and S.
typeintersect(T, S)
Compute a type that contains the intersection of T and S. Usually this will be the smallest such type or one close to it.
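A quick sketch of several of the type queries above together (values shown assume a standard 64-bit build of the 0.2-era Julia documented here):
typemin(Int8)                  # => -128
typemax(Int8)                  # => 127
eps(Float32)                   # => 1.1920929f-7
sizeof(Float64)                # => 8
super(Int64)                   # => Signed
Int64 <: Integer               # => true
promote_type(Int64, Float64)   # => Float64
typejoin(Int64, Float64)       # => Real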
Iteration
Sequential iteration is implemented by the methods start, done, and next. The general for loop:
for i = I
    # body
end
is translated to:
state = start(I)
while !done(I, state)
    (i, state) = next(I, state)
    # body
end
The state object may be anything, and should be chosen appropriately for each iterable type.
Fully implemented by: Range, Range1, NDRange, Tuple, Real, AbstractArray, IntSet, ObjectIdDict, Dict, WeakKeyDict, EachLine, String, Set, Task.
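To sketch how a user-defined type can hook into this protocol, here is a minimal hypothetical iterable (the Squares type and its field are invented for illustration; syntax follows the 0.2-era language documented here):
import Base.start
import Base.done
import Base.next

type Squares
    n::Int                            # yield the squares 1, 4, 9, ..., n^2
end

start(s::Squares) = 1                 # initial state: the first index
done(s::Squares, i) = i > s.n         # finished once the index passes n
next(s::Squares, i) = (i*i, i + 1)    # returns (current item, next state)

for x in Squares(4)
    println(x)                        # prints 1, 4, 9, 16
end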
General Collections
Fully implemented by: Range, Range1, Tuple, Number, AbstractArray, IntSet, Dict, WeakKeyDict, String, Set.
Iterable Collections
contains(itr, x) → Bool
Determine whether a collection contains the given value, x.
indexin(a, b)
Returns a vector containing the highest index in b for each value in a that is a member of b. The output vector contains 0 wherever a is not a member of b.
findin(a, b)
Returns the indices of elements in collection a that appear in collection b
Returns an array containing only the unique elements of the iterable itr, in the order that the first of each set of equivalent elements originally appears.
reduce(op, v0, itr)
Reduce the given collection with the given operator, i.e. accumulate v = op(v,elt) for each element, where v starts as v0. Reductions for certain commonly-used operators are available in a more
convenient 1-argument form: max(itr), min(itr), sum(itr), prod(itr), any(itr), all(itr).
max(itr)
Returns the largest element in a collection
min(itr)
Returns the smallest element in a collection
indmax(itr) → Integer
Returns the index of the maximum element in a collection
indmin(itr) → Integer
Returns the index of the minimum element in a collection
findmax(itr) -> (x, index)
Returns the maximum element and its index
findmin(itr) -> (x, index)
Returns the minimum element and its index
sum(itr)
Returns the sum of all elements in a collection
sum(f, itr)
Sum the results of calling function f on each element of itr.
prod(itr)
Returns the product of all elements of a collection
any(itr) → Bool
Test whether any elements of a boolean collection are true
all(itr) → Bool
Test whether all elements of a boolean collection are true
count(p, itr) → Integer
Count the number of elements in itr for which predicate p is true.
any(p, itr) → Bool
Determine whether any element of itr satisfies the given predicate.
all(p, itr) → Bool
Determine whether all elements of itr satisfy the given predicate.
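A sketch of the predicate-taking reductions above (isodd is a standard Base predicate):
any(isodd, [2, 4, 5])        # => true, since 5 is odd
all(x -> x > 0, [1, 2, 3])   # => true
count(isodd, 1:10)           # => 5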
map(f, c) → collection
Transform collection c by applying f to each element.
Example: map((x) -> x * 2, [1, 2, 3]) = [2, 4, 6]
map!(function, collection)
In-place version of map().
mapreduce(f, op, itr)
Applies function f to each element in itr and then reduces the result using the binary function op.
Example: mapreduce(x->x^2, +, [1:3]) == 1 + 4 + 9 == 14
first(coll)
Get the first element of an ordered collection.
last(coll)
Get the last element of an ordered collection.
step(r)
Get the step size of a Range object.
collect(collection)
Return an array of all items in a collection. For associative collections, returns (key, value) tuples.
Indexable Collections
getindex(collection, key...)
Retrieve the value(s) stored at the given key or index within a collection. The syntax a[i,j,...] is converted by the compiler to getindex(a, i, j, ...).
setindex!(collection, value, key...)
Store the given value at the given key or index within a collection. The syntax a[i,j,...] = x is converted by the compiler to setindex!(a, x, i, j, ...).
Fully implemented by: Array, DArray, AbstractArray, SubArray, ObjectIdDict, Dict, WeakKeyDict, String.
Partially implemented by: Range, Range1, Tuple.
Associative Collections
Dict is the standard associative collection. Its implementation uses hash(x) as the hashing function for the key, and isequal(x,y) to determine equality. Define these two functions for custom
types to override how they are stored in a hash table.
ObjectIdDict is a special hash table where the keys are always object identities. WeakKeyDict is a hash table implementation where the keys are weak references to objects, and thus may be garbage
collected even when referenced in a hash table.
Dicts can be created using a literal syntax: {"A"=>1, "B"=>2}. Use of curly brackets will create a Dict of type Dict{Any,Any}. Use of square brackets will attempt to infer type information from the
keys and values (i.e. ["A"=>1, "B"=>2] creates a Dict{ASCIIString, Int64}). To explicitly specify types use the syntax: (KeyType=>ValueType)[...]. For example, (ASCIIString=>Int32)["A"=>1, "B"=>2].
As with arrays, Dicts may be created with comprehensions. For example, {i => f(i) for i = 1:10}.
Fully implemented by: ObjectIdDict, Dict, WeakKeyDict.
Partially implemented by: IntSet, Set, EnvHash, Array.
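A sketch of the literal syntaxes described above, in use:
d1 = {"A"=>1, "B"=>2}                # Dict{Any,Any}
d2 = ["A"=>1, "B"=>2]                # inferred as Dict{ASCIIString,Int64}
d3 = (ASCIIString=>Int32)["A"=>1]    # explicitly typed
d4 = {i => i^2 for i = 1:3}          # comprehension form

d2["C"] = 3                          # insertion via setindex!
d2["A"]                              # => 1, via getindex
for (k, v) in d2                     # iteration yields (key, value) tuples
    println(k, " => ", v)
end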
Set-Like Collections
add!(collection, key)
Add an element to a set-like collection.
Set(x...)
Construct a Set with the given elements. Should be used instead of IntSet for sparse integer sets, or for sets of arbitrary objects.
IntSet(i...)
Construct a sorted set of the given integers. Implemented as a bit string, and therefore designed for dense integer sets. If the set will be sparse (for example holding a single very large
integer), use Set instead.
union(s1, s2...)
Construct the union of two or more sets. Maintains order with arrays.
union!(s, iterable)
Union each element of iterable into set s in-place.
intersect(s1, s2...)
Construct the intersection of two or more sets. Maintains order and multiplicity of the first argument for arrays and ranges.
setdiff(s1, s2)
Construct the set of elements in s1 but not s2. Maintains order with arrays.
setdiff!(s, iterable)
Remove each element of iterable from set s in-place.
symdiff(s1, s2...)
Construct the symmetric difference of elements in the passed in sets or arrays. Maintains order with arrays.
symdiff!(s, n)
IntSet s is destructively modified to toggle the inclusion of integer n.
symdiff!(s, itr)
For each element in itr, destructively toggle its inclusion in set s.
symdiff!(s1, s2)
Construct the symmetric difference of IntSets s1 and s2, storing the result in s1.
complement(s)
Returns the set-complement of IntSet s.
complement!(s)
Mutates IntSet s into its set-complement.
intersect!(s1, s2)
Intersects IntSets s1 and s2 and overwrites the set s1 with the result. If needed, s1 will be expanded to the size of s2.
Fully implemented by: IntSet, Set.
Partially implemented by: Array.
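A sketch of the set operations above (the 0.2-era constructors documented here take their elements as varargs):
a = Set(1, 2, 3)
b = Set(3, 4)
union(a, b)          # set containing 1, 2, 3, 4
intersect(a, b)      # set containing 3
setdiff(a, b)        # set containing 1, 2
add!(a, 5)           # in-place insertion
contains(a, 5)       # => true
s = IntSet(2, 1000)  # dense bit-string representation; prefer Set for sparse sets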
Dequeues
push!(collection, item) → collection
Insert an item at the end of a collection.
pop!(collection) → item
Remove the last item in a collection and return it.
unshift!(collection, item) → collection
Insert an item at the beginning of a collection.
shift!(collection) → item
Remove the first item in a collection.
insert!(collection, index, item)
Insert an item at the given index.
splice!(collection, index[, replacement]) → item
Remove the item at the given index, and return the removed item. Subsequent items are shifted down to fill the resulting gap. If specified, replacement values from an ordered collection will be
spliced in place of the removed item.
splice!(collection, range[, replacement]) → items
Remove items in the specified index range, and return a collection containing the removed items. Subsequent items are shifted down to fill the resulting gap. If specified, replacement values from
an ordered collection will be spliced in place of the removed items.
resize!(collection, n) → collection
Resize collection to contain n elements.
append!(collection, items) → collection
Add the elements of items to the end of a collection.
Fully implemented by: Vector (aka 1-d Array).
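A sketch of the dequeue operations above, applied to a Vector:
v = [1, 2, 3]
push!(v, 4)          # v == [1, 2, 3, 4]
unshift!(v, 0)       # v == [0, 1, 2, 3, 4]
pop!(v)              # => 4;  v == [0, 1, 2, 3]
shift!(v)            # => 0;  v == [1, 2, 3]
insert!(v, 2, 99)    # v == [1, 99, 2, 3]
splice!(v, 2)        # => 99; v == [1, 2, 3]
append!(v, [4, 5])   # v == [1, 2, 3, 4, 5]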
Strings
length(s)
The number of characters in string s.
*(s, t)
Concatenate strings.
Example: "Hello " * "world" == "Hello world"
^(s, n)
Repeat string s n times.
Example: "Julia "^3 == "Julia Julia Julia "
string(xs...)
Create a string from any values using the print function.
repr(x)
Create a string from any value using the show function.
bytestring(::Ptr{Uint8})
Create a string from the address of a C (0-terminated) string. A copy is made; the ptr can be safely freed.
bytestring(s)
Convert a string to a contiguous byte array representation appropriate for passing it to C functions.
ascii(::Array{Uint8, 1})
Create an ASCII string from a byte array.
ascii(s)
Convert a string to a contiguous ASCII string (all characters must be valid ASCII characters).
utf8(::Array{Uint8, 1})
Create a UTF-8 string from a byte array.
utf8(s)
Convert a string to a contiguous UTF-8 string (all characters must be valid UTF-8 characters).
is_valid_ascii(s) → Bool
Returns true if the string or byte vector is valid ASCII, false otherwise.
is_valid_utf8(s) → Bool
Returns true if the string or byte vector is valid UTF-8, false otherwise.
is_valid_char(c) → Bool
Returns true if the given char or integer is a valid Unicode code point.
ismatch(r::Regex, s::String)
Test whether a string contains a match of the given regular expression.
lpad(string, n, p)
Make a string at least n characters long by padding on the left with copies of p.
rpad(string, n, p)
Make a string at least n characters long by padding on the right with copies of p.
search(string, chars[, start])
Search for the given characters within the given string. The second argument may be a single character, a vector or a set of characters, a string, or a regular expression (though regular
expressions are only allowed on contiguous strings, such as ASCII or UTF-8 strings). The third argument optionally specifies a starting index. The return value is a range of indexes where the
matching sequence is found, such that s[search(s,x)] == x. The return value is 0:-1 if there is no match.
replace(string, pat, r[, n])
Search for the given pattern pat, and replace each occurrence with r. If n is provided, replace at most n occurrences. As with search, the second argument may be a single character, a vector or a
set of characters, a string, or a regular expression. If r is a function, each occurrence is replaced with r(s) where s is the matched substring.
split(string, [chars, [limit,] [include_empty]])
Return an array of strings by splitting the given string on occurrences of the given character delimiters, which may be specified in any of the formats allowed by search‘s second argument (i.e. a
single character, collection of characters, string, or regular expression). If chars is omitted, it defaults to the set of all space characters, and include_empty is taken to be false. The last
two arguments are also optional: they are a maximum size for the result and a flag determining whether empty fields should be included in the result.
rsplit(string, [chars, [limit,] [include_empty]])
Similar to split, but starting from the end of the string.
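A sketch of the search/replace/split family above:
s = "hello, world"
search(s, "world")             # => 8:12, so s[search(s, "world")] == "world"
replace(s, "world", "julia")   # => "hello, julia"
split("a,b,c", ',')            # => ["a", "b", "c"]
ismatch(r"^h\w+", s)           # => true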
strip(string[, chars])
Return string with any leading and trailing whitespace removed. If a string chars is provided, instead remove characters contained in that string.
lstrip(string[, chars])
Return string with any leading whitespace removed. If a string chars is provided, instead remove characters contained in that string.
rstrip(string[, chars])
Return string with any trailing whitespace removed. If a string chars is provided, instead remove characters contained in that string.
beginswith(string, prefix)
Returns true if string starts with prefix.
endswith(string, suffix)
Returns true if string ends with suffix.
uppercase(string)
Returns string with all characters converted to uppercase.
lowercase(string)
Returns string with all characters converted to lowercase.
join(strings, delim)
Join an array of strings into a single string, inserting the given delimiter between adjacent strings.
chop(string)
Remove the last character from a string
chomp(string)
Remove a trailing newline from a string
ind2chr(string, i)
Convert a byte index to a character index
chr2ind(string, i)
Convert a character index to a byte index
isvalid(str, i)
Tells whether index i is valid for the given string
nextind(str, i)
Get the next valid string index after i. Returns endof(str)+1 at the end of the string.
prevind(str, i)
Get the previous valid string index before i. Returns 0 at the beginning of the string.
thisind(str, i)
Adjust i downwards until it reaches a valid index for the given string.
randstring(len)
Create a random ASCII string of length len, consisting of upper- and lower-case letters and the digits 0-9
charwidth(c)
Gives the number of columns needed to print a character.
strwidth(s)
Gives the number of columns needed to print a string.
isalnum(c)
Tests whether a character is alphanumeric.
isalpha(c)
Tests whether a character is alphabetic.
isascii(c)
Tests whether a character belongs to the ASCII character set.
isblank(c)
Tests whether a character is a tab or space.
iscntrl(c)
Tests whether a character is a control character.
isdigit(c)
Tests whether a character is a numeric digit (0-9).
isgraph(c)
Tests whether a character is printable, and not a space.
islower(c)
Tests whether a character is a lowercase letter.
isprint(c)
Tests whether a character is printable, including space.
ispunct(c)
Tests whether a character is printable, and not a space or alphanumeric.
isspace(c)
Tests whether a character is any whitespace character.
isupper(c)
Tests whether a character is an uppercase letter.
isxdigit(c)
Tests whether a character is a valid hexadecimal digit.
symbol(str)
Convert a string to a Symbol.
I/O
STDOUT
Global variable referring to the standard out stream.
STDERR
Global variable referring to the standard error stream.
STDIN
Global variable referring to the standard input stream.
open(file_name[, read, write, create, truncate, append]) → IOStream
Open a file in a mode specified by five boolean arguments. The default is to open files for reading only. Returns a stream for accessing the file.
open(file_name[, mode]) → IOStream
Alternate syntax for open, where a string-based mode specifier is used instead of the five booleans. The values of mode correspond to those from fopen(3) or Perl open, and are equivalent to
setting the following boolean groups:
│r │read │
│r+│read, write │
│w │write, create, truncate │
│w+│read, write, create, truncate │
│a │write, create, append │
│a+│read, write, create, append │
open(f::function, args...)
Apply the function f to the result of open(args...) and close the resulting file descriptor upon completion.
Example: open(readall, "file.txt")
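A further sketch of the function form (the file name is arbitrary); the stream is closed for you in both calls:
open(io -> write(io, "alpha\nbeta\n"), "scratch.txt", "w")
lines = open(readlines, "scratch.txt")    # => ["alpha\n", "beta\n"]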
IOBuffer([size]) → IOBuffer
Create an in-memory I/O stream, optionally specifying how much initial space is needed.
takebuf_array(b::IOBuffer)
Obtain the contents of an IOBuffer as an array, without copying.
takebuf_string(b::IOBuffer)
Obtain the contents of an IOBuffer as a string, without copying.
fdio([name::String], fd::Integer[, own::Bool]) → IOStream
Create an IOStream object from an integer file descriptor. If own is true, closing this object will close the underlying descriptor. By default, an IOStream is closed when it is garbage
collected. name allows you to associate the descriptor with a named file.
flush(stream)
Commit all currently buffered writes to the given stream.
close(stream)
Close an I/O stream. Performs a flush first.
write(stream, x[, byteorder])
Write the canonical binary representation of a value to the given stream. For numeric types, the optional argument specifies the byte order or endianness: NetworkByteOrder for big-endian,
LittleByteOrder for little-endian, and HostByteOrder (the default) for the type of the host.
read(stream, type[, byteorder])
Read a value of the given type from a stream, in canonical binary representation. For numeric types, the optional argument specifies the byte order or endianness: NetworkByteOrder for big-endian,
LittleByteOrder for little-endian, and HostByteOrder (the default) for the type of the host.
read(stream, type[, byteorder], dims)
Read a series of values of the given type from a stream, in canonical binary representation. dims is either a tuple or a series of integer arguments specifying the size of Array to return.
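A round-trip sketch of the binary read/write pair above, through an in-memory stream:
io = IOBuffer()
write(io, int32(7))    # canonical binary representation of an Int32
seekstart(io)
read(io, Int32)        # => 7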
position(s)
Get the current position of a stream.
seek(s, pos)
Seek a stream to the given position.
seekstart(s)
Seek a stream to its beginning.
seekend(s)
Seek a stream to its end.
skip(s, offset)
Seek a stream relative to the current position.
eof(stream) → Bool
Tests whether an I/O stream is at end-of-file. If the stream is not yet exhausted, this function will block to wait for more data if necessary, and then return false. Therefore it is always safe
to read one byte after seeing eof return false.
ntoh(x)
Converts the endianness of a value from Network byte order (big-endian) to that used by the Host.
hton(x)
Converts the endianness of a value from that used by the Host to Network byte order (big-endian).
ltoh(x)
Converts the endianness of a value from Little-endian to that used by the Host.
htol(x)
Converts the endianness of a value from that used by the Host to Little-endian.
serialize(stream, value)
Write an arbitrary value to a stream in an opaque format, such that it can be read back by deserialize. The read-back value will be as identical as possible to the original. In general, this
process will not work if the reading and writing are done by different versions of Julia, or an instance of Julia with a different system image.
deserialize(stream)
Read a value written by serialize.
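A round-trip sketch of serialize/deserialize through an in-memory buffer:
buf = IOBuffer()
serialize(buf, ["x" => [1, 2, 3]])
seekstart(buf)
d = deserialize(buf)
d["x"]                 # => [1, 2, 3]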
Network I/O
connect([host], port) → TcpSocket
Connect to the host host on port port
connect(path) → NamedPipe
Connect to the Named Pipe/Domain Socket at path
listen([addr], port) → TcpServer
Listen on port on the address specified by addr. By default this listens on localhost only. To listen on all interfaces pass, IPv4(0) or IPv6(0) as appropriate.
listen(path) → PipeServer
Listens on/Creates a Named Pipe/Domain Socket
getaddrinfo(host)
Gets the IP address of the host (may have to do a DNS lookup)
Text I/O
show(x)
Write an informative text representation of a value to the current output stream. New types should overload show(io, x) where the first argument is a stream.
print(x)
Write (to the default output stream) a canonical (un-decorated) text representation of a value if there is one, otherwise call show.
println(x)
Print (using print()) x followed by a newline
@printf([io::IOStream], "%Fmt", args...)
Print arg(s) using C printf() style format specification string. Optionally, an IOStream may be passed as the first argument to redirect output.
@sprintf("%Fmt", args...)
Return @printf formatted output as string.
showall(x)
Show x, printing all elements of arrays
dump(x)
Write a thorough text representation of a value to the current output stream.
readall(stream)
Read the entire contents of an I/O stream as a string.
readline(stream)
Read a single line of text, including a trailing newline character (if one is reached before the end of the input).
readuntil(stream, delim)
Read a string, up to and including the given delimiter byte.
readlines(stream)
Read all lines as an array.
eachline(stream)
Create an iterable object that will yield each line from a stream.
readdlm(source, delim::Char; has_header=false, use_mmap=false, ignore_invalid_chars=false)
Read a matrix from the source where each line gives one row, with elements separated by the given delimiter. The source can be a text file, stream or byte array. Memory-mapped files can be used
by passing the byte array representation of the mapped segment as source.
If has_header is true the first row of data would be read as headers and the tuple (data_cells, header_cells) is returned instead of only data_cells.
If use_mmap is true the file specified by source is memory mapped for potential speedups.
If ignore_invalid_chars is true bytes in source with invalid character encoding will be ignored. Otherwise an error is thrown indicating the offending character position.
If all data is numeric, the result will be a numeric array. If some elements cannot be parsed as numbers, a cell array of numbers and strings is returned.
readdlm(source, delim::Char, T::Type; options...)
Read a matrix from the source with a given element type. If T is a numeric type, the result is an array of that type, with any non-numeric elements as NaN for floating-point types, or zero. Other
useful values of T include ASCIIString, String, and Any.
writedlm(filename, array, delim::Char)
Write an array to a text file using the given delimiter (defaults to comma).
readcsv(source, [T::Type]; options...)
Equivalent to readdlm with delim set to comma.
writecsv(filename, array)
Equivalent to writedlm with delim set to comma.
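A round-trip sketch of the delimited-file functions above (the file name is arbitrary):
A = [1.0 2.0; 3.0 4.0]
writecsv("data.csv", A)                  # same as writedlm with a ',' delimiter
B = readcsv("data.csv")                  # => 2x2 Float64 matrix equal to A
C = readdlm("data.csv", ',', Float64)    # force the element type explicitly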
Memory-mapped I/O
mmap_array(type, dims, stream[, offset])
Create an Array whose values are linked to a file, using memory-mapping. This provides a convenient way of working with data too large to fit in the computer’s memory.
The type determines how the bytes of the array are interpreted (no format conversions are possible), and dims is a tuple containing the size of the array.
The file is specified via the stream. When you initialize the stream, use "r" for a “read-only” array, and "w+" to create a new array used to write values to disk. Optionally, you can specify an
offset (in bytes) if, for example, you want to skip over a header in the file.
Example: A = mmap_array(Int64, (25,30000), s)
This would create a 25-by-30000 Array{Int64}, linked to the file associated with stream s.
mmap_bitarray([type], dims, stream[, offset])
Create a BitArray whose values are linked to a file, using memory-mapping; it has the same purpose, works in the same way, and has the same arguments, as mmap_array(), but the byte representation
is different. The type parameter is optional, and must be Bool if given.
Example: B = mmap_bitarray((25,30000), s)
This would create a 25-by-30000 BitArray, linked to the file associated with stream s.
Forces synchronization between the in-memory version of a memory-mapped Array or BitArray and the on-disk version. You may not need to call this function, because synchronization is performed at intervals automatically by the operating system. However, you can call this directly if, for example, you are concerned about losing the result of a long-running calculation.
mmap(len, prot, flags, fd, offset)
Low-level interface to the mmap system call. See the man page.
munmap(pointer, len)
Low-level interface for unmapping memory (see the man page). With mmap_array you do not need to call this directly; the memory is unmapped for you when the array goes out of scope.
Standard Numeric Types
Bool Int8 Uint8 Int16 Uint16 Int32 Uint32 Int64 Uint64 Float32 Float64 Complex64 Complex128
Mathematical Functions
Unary minus operator.
+(x, y)
Binary addition operator.
-(x, y)
Binary subtraction operator.
*(x, y)
Binary multiplication operator.
/(x, y)
Binary right-division operator.
\(x, y)
Binary left-division operator.
^(x, y)
Binary exponentiation operator.
.+(x, y)
Element-wise binary addition operator.
.-(x, y)
Element-wise binary subtraction operator.
.*(x, y)
Element-wise binary multiplication operator.
./(x, y)
Element-wise binary right-division operator.
.\(x, y)
Element-wise binary left-division operator.
.^(x, y)
Element-wise binary exponentiation operator.
div(a, b)
Compute a/b, truncating to an integer
fld(a, b)
Largest integer less than or equal to a/b
mod(x, m)
Modulus after division, returning in the range [0,m)
rem(x, m)
Remainder after division
%(x, m)
Remainder after division. The operator form of rem.
mod1(x, m)
Modulus after division, returning in the range (0,m]
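These functions differ in how they treat negative arguments; for example:
div(-7, 2)   # -3  (truncates toward zero)
fld(-7, 2)   # -4  (rounds toward negative infinity)
rem(-7, 2)   # -1  (same sign as the dividend)
mod(-7, 2)   #  1  (result lies in [0, 2))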
//(num, den)
Rational division
Numerator of the rational representation of x
Denominator of the rational representation of x
<<(x, n)
Left shift operator.
>>(x, n)
Right shift operator.
>>>(x, n)
Unsigned right shift operator.
:(start[, step], stop)
Range operator. a:b constructs a range from a to b with a step size of 1, and a:s:b is similar but uses a step size of s. These syntaxes call the function colon. The colon is also used in
indexing to select whole dimensions.
colon(start[, step], stop)
Called by : syntax for constructing ranges.
==(x, y)
Equality comparison operator.
!=(x, y)
Not-equals comparison operator.
<(x, y)
Less-than comparison operator.
<=(x, y)
Less-than-or-equals comparison operator.
>(x, y)
Greater-than comparison operator.
>=(x, y)
Greater-than-or-equals comparison operator.
.==(x, y)
Element-wise equality comparison operator.
.!=(x, y)
Element-wise not-equals comparison operator.
.<(x, y)
Element-wise less-than comparison operator.
.<=(x, y)
Element-wise less-than-or-equals comparison operator.
.>(x, y)
Element-wise greater-than comparison operator.
.>=(x, y)
Element-wise greater-than-or-equals comparison operator.
cmp(x, y)
Return -1, 0, or 1 depending on whether x<y, x==y, or x>y, respectively
Boolean not
Bitwise not
&(x, y)
Bitwise and
|(x, y)
Bitwise or
$(x, y)
Bitwise exclusive or
isapprox(x::Number, y::Number; rtol::Real=cbrt(maxeps), atol::Real=sqrt(maxeps))
Inexact equality comparison; behaves slightly differently depending on the types of the input arguments:
□ For FloatingPoint numbers, isapprox returns true if abs(x-y) <= atol + rtol*max(abs(x), abs(y)).
□ For Integer and Rational numbers, isapprox returns true if abs(x-y) <= atol. The rtol argument is ignored. If one of x and y is FloatingPoint, the other is promoted, and the method above is
called instead.
□ For Complex numbers, the distance in the complex plane is compared, using the same criterion as above.
For default tolerance arguments, maxeps = max(eps(abs(x)), eps(abs(y))).
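For example, with the default tolerances for Float64:
isapprox(1.0, 1.0 + 1e-10)   # true: the difference is well within tolerance
isapprox(1.0, 1.1)           # false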
Compute sine of x, where x is in radians
Compute cosine of x, where x is in radians
Compute tangent of x, where x is in radians
Compute sine of x, where x is in degrees
Compute cosine of x, where x is in degrees
Compute tangent of x, where x is in degrees
Compute hyperbolic sine of x
Compute hyperbolic cosine of x
Compute hyperbolic tangent of x
Compute the inverse sine of x, where the output is in radians
Compute the inverse cosine of x, where the output is in radians
Compute the inverse tangent of x, where the output is in radians
atan2(y, x)
Compute the inverse tangent of y/x, using the signs of both x and y to determine the quadrant of the return value.
Compute the inverse sine of x, where the output is in degrees
Compute the inverse cosine of x, where the output is in degrees
Compute the inverse tangent of x, where the output is in degrees
Compute the secant of x, where x is in radians
Compute the cosecant of x, where x is in radians
Compute the cotangent of x, where x is in radians
Compute the secant of x, where x is in degrees
Compute the cosecant of x, where x is in degrees
Compute the cotangent of x, where x is in degrees
Compute the inverse secant of x, where the output is in radians
Compute the inverse cosecant of x, where the output is in radians
Compute the inverse cotangent of x, where the output is in radians
Compute the inverse secant of x, where the output is in degrees
Compute the inverse cosecant of x, where the output is in degrees
Compute the inverse cotangent of x, where the output is in degrees
Compute the hyperbolic secant of x
Compute the hyperbolic cosecant of x
Compute the hyperbolic cotangent of x
Compute the inverse hyperbolic sine of x
Compute the inverse hyperbolic cosine of x
Compute the inverse hyperbolic tangent of x
Compute the inverse hyperbolic secant of x
Compute the inverse hyperbolic cosecant of x
Compute the inverse hyperbolic cotangent of x
Compute \(\sin(\pi x) / (\pi x)\) if \(x \neq 0\), and \(1\) if \(x = 0\).
Compute \(\cos(\pi x) / x - \sin(\pi x) / (\pi x^2)\) if \(x \neq 0\), and \(0\) if \(x = 0\). This is the derivative of sinc(x).
Convert x from degrees to radians
Convert x from radians to degrees
hypot(x, y)
Compute the \(\sqrt{x^2+y^2}\) without undue overflow or underflow
Compute the natural logarithm of x
Compute the logarithm of x to base 2
Compute the logarithm of x to base 10
Accurate natural logarithm of 1+x
frexp(val, exp)
Return a number x such that it has a magnitude in the interval [1/2, 1) or 0, and val = \(x \times 2^{exp}\).
Compute \(e^x\)
Compute \(2^x\)
Compute \(10^x\)
ldexp(x, n)
Compute \(x \times 2^n\)
Return a tuple (fpart,ipart) of the fractional and integral parts of a number. Both parts have the same sign as the argument.
Accurately compute \(e^x-1\)
round(x[, digits[, base]])
round(x) returns the nearest integral value of the same type as x to x. round(x, digits) rounds to the specified number of digits after the decimal place, or before if negative, e.g., round(pi,2)
is 3.14. round(x, digits, base) rounds using a different base, defaulting to 10, e.g., round(pi, 3, 2) is 3.125.
ceil(x[, digits[, base]])
Returns the nearest integral value of the same type as x not less than x. digits and base work as above.
floor(x[, digits[, base]])
Returns the nearest integral value of the same type as x not greater than x. digits and base work as above.
trunc(x[, digits[, base]])
Returns the nearest integral value of the same type as x not greater in magnitude than x. digits and base work as above.
iround(x) → Integer
Returns the nearest integer to x.
iceil(x) → Integer
Returns the nearest integer not less than x.
ifloor(x) → Integer
Returns the nearest integer not greater than x.
itrunc(x) → Integer
Returns the nearest integer not greater in magnitude than x.
signif(x, digits[, base])
Rounds (in the sense of round) x so that there are digits significant digits, under a base base representation, default 10. E.g., signif(123.456, 2) is 120.0, and signif(357.913, 4, 2) is 352.0.
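A few illustrative values (all standard behavior):
round(3.567, 2)    # 3.57
signif(123.456, 2) # 120.0
trunc(-3.7)        # -3.0
itrunc(-3.7)       # -3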
min(x, y)
Return the minimum of x and y
max(x, y)
Return the maximum of x and y
clamp(x, lo, hi)
Return x if lo <= x <= hi. If x < lo, return lo. If x > hi, return hi.
Absolute value of x
Squared absolute value of x
copysign(x, y)
Return x such that it has the same sign as y
Return +1 if x is positive, 0 if x == 0, and -1 if x is negative.
Returns 1 if the sign of x is negative, otherwise 0.
flipsign(x, y)
Return x with its sign flipped if y is negative. For example abs(x) = flipsign(x,x).
Return \(\sqrt{x}\)
Integer square root.
Return \(x^{1/3}\)
Compute the error function of x, defined by \(\frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} dt\) for arbitrary complex x.
Compute the complementary error function of x, defined by \(1 - \operatorname{erf}(x)\).
Compute the scaled complementary error function of x, defined by \(e^{x^2} \operatorname{erfc}(x)\). Note also that \(\operatorname{erfcx}(-ix)\) computes the Faddeeva function \(w(x)\).
Compute the imaginary error function of x, defined by \(-i \operatorname{erf}(ix)\).
Compute the Dawson function (scaled imaginary error function) of x, defined by \(\frac{\sqrt{\pi}}{2} e^{-x^2} \operatorname{erfi}(x)\).
Compute the inverse error function of a real x, defined by \(\operatorname{erf}(\operatorname{erfinv}(x)) = x\).
Compute the inverse complementary error function of a real x, defined by \(\operatorname{erfc}(\operatorname{erfcinv}(x)) = x\).
Return the real part of the complex number z
Return the imaginary part of the complex number z
Return both the real and imaginary parts of the complex number z
Compute the complex conjugate of a complex number z
Compute the phase angle of a complex number z
Return cos(z) + i*sin(z) if z is real. Return (cos(real(z)) + i*sin(real(z)))/exp(imag(z)) if z is complex
binomial(n, k)
Number of ways to choose k out of n items
Factorial of n
factorial(n, k)
Compute factorial(n)/factorial(k)
Compute the prime factorization of an integer n. Returns a dictionary. The keys of the dictionary correspond to the factors, and hence are of the same type as n. The value associated with each
key indicates the number of times the factor appears in the factorization.
Example: \(100=2*2*5*5\); then, factor(100) -> [5=>2,2=>2]
gcd(x, y)
Greatest common divisor
lcm(x, y)
Least common multiple
gcdx(x, y)
Greatest common divisor, also returning integer coefficients u and v that solve ux+vy == gcd(x,y)
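For example, gcdx returns Bézout coefficients along with the gcd:
d, u, v = gcdx(12, 20)   # d == 4, and u*12 + v*20 == 4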
Test whether n is a power of two
Next power of two not less than n
Previous power of two not greater than n
nextpow(a, n)
Next power of a not less than n
prevpow(a, n)
Previous power of a not greater than n
nextprod([a, b, c], n)
Next integer not less than n that can be written a^i1 * b^i2 * c^i3 for integers i1, i2, i3.
prevprod([a, b, c], n)
Previous integer not greater than n that can be written a^i1 * b^i2 * c^i3 for integers i1, i2, i3.
invmod(x, m)
Inverse of x, modulo m
powermod(x, p, m)
Compute mod(x^p, m)
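A quick sketch of modular arithmetic with these two functions:
powermod(3, 5, 7)   # 5, since 3^5 == 243 and mod(243, 7) == 5
invmod(3, 7)        # 5, since mod(3 * 5, 7) == 1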
Compute the gamma function of x
Compute the logarithm of absolute value of gamma(x)
Compute the logarithmic factorial of x
Compute the digamma function of x (the logarithmic derivative of gamma(x))
airy(k, x)
kth derivative of the Airy function \(\operatorname{Ai}(x)\).
Airy function \(\operatorname{Ai}(x)\).
Airy function derivative \(\operatorname{Ai}'(x)\).
Airy function derivative \(\operatorname{Ai}'(x)\).
Airy function \(\operatorname{Bi}(x)\).
Airy function derivative \(\operatorname{Bi}'(x)\).
Bessel function of the first kind of order 0, \(J_0(x)\).
Bessel function of the first kind of order 1, \(J_1(x)\).
besselj(nu, x)
Bessel function of the first kind of order nu, \(J_\nu(x)\).
Bessel function of the second kind of order 0, \(Y_0(x)\).
Bessel function of the second kind of order 1, \(Y_1(x)\).
bessely(nu, x)
Bessel function of the second kind of order nu, \(Y_\nu(x)\).
hankelh1(nu, x)
Bessel function of the third kind of order nu, \(H^{(1)}_\nu(x)\).
hankelh2(nu, x)
Bessel function of the third kind of order nu, \(H^{(2)}_\nu(x)\).
besseli(nu, x)
Modified Bessel function of the first kind of order nu, \(I_\nu(x)\).
besselk(nu, x)
Modified Bessel function of the second kind of order nu, \(K_\nu(x)\).
beta(x, y)
Euler integral of the first kind \(\operatorname{B}(x,y) = \Gamma(x)\Gamma(y)/\Gamma(x+y)\).
lbeta(x, y)
Natural logarithm of the absolute value of the beta function \(\log(|\operatorname{B}(x,y)|)\).
Dirichlet eta function \(\eta(s) = \sum^\infty_{n=1}(-1)^{n-1}/n^{s}\).
Riemann zeta function \(\zeta(s)\).
bitmix(x, y)
Hash two integers into a single integer. Useful for constructing hash functions.
ndigits(n, b)
Compute the number of digits in number n written in base b.
Data Formats
Random Numbers
Random number generation in Julia uses the Mersenne Twister library. Julia has a global RNG, which is used by default. Multiple RNGs can be plugged in using the AbstractRNG object, which can then be used to have multiple streams of random numbers. Currently, only MersenneTwister is supported.
srand([rng], seed)
Seed the RNG with a seed, which may be an unsigned integer or a vector of unsigned integers. seed can even be a filename, in which case the seed is read from a file. If the argument rng is not
provided, the default global RNG is seeded.
Create a MersenneTwister RNG object. Different RNG objects can have their own seeds, which may be useful for generating different streams of random numbers.
Generate a Float64 random number uniformly in [0,1)
rand!([rng], A)
Populate the array A with random numbers generated from the specified RNG.
rand(rng::AbstractRNG[, dims...])
Generate a random Float64 number or array of the size specified by dims, using the specified RNG object. Currently, MersenneTwister is the only available Random Number Generator (RNG), which may
be seeded using srand.
rand(dims or [dims...])
Generate a random Float64 array of the size specified by dims
rand(Int32|Uint32|Int64|Uint64|Int128|Uint128[, dims...])
Generate a random integer of the given type. Optionally, generate an array of random integers of the given type by specifying dims.
rand(r[, dims...])
Generate a random integer from the inclusive interval specified by Range1 r (for example, 1:n). Optionally, generate a random integer array.
Generate a random boolean value. Optionally, generate an array of random boolean values.
Fill an array with random boolean values. A may be an Array or a BitArray.
randn(dims or [dims...])
Generate a normally-distributed random number with mean 0 and standard deviation 1. Optionally generate an array of normally-distributed random numbers.
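A minimal sketch (values differ from run to run unless the RNG is seeded):
srand(1234)    # seed the global RNG for reproducibility
x = rand()     # a Float64 in [0,1)
A = rand(2, 3) # a 2x3 array of uniform random numbers
z = randn(5)   # 5 standard-normal samples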
Basic functions
ndims(A) → Integer
Returns the number of dimensions of A
Returns a tuple containing the dimensions of A
Returns the type of the elements contained in A
iseltype(A, T)
Tests whether A or its elements are of type T
length(A) → Integer
Returns the number of elements in A (note that this differs from MATLAB where length(A) is the largest dimension of A)
Counts the number of nonzero values in array A (dense or sparse)
Convert an array to its complex conjugate in-place
stride(A, k)
Returns the distance in memory (in number of elements) between adjacent elements in dimension k
Returns a tuple of the memory strides in each dimension
ind2sub(dims, index) → subscripts
Returns a tuple of subscripts into an array with dimensions dims, corresponding to the linear index index
Example: i, j, ... = ind2sub(size(A), indmax(A)) provides the indices of the maximum element
sub2ind(dims, i, j, k...) → index
The inverse of ind2sub, returns the linear index corresponding to the provided subscripts
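For example, for a 3x4 array stored in column-major order:
i, j = ind2sub((3,4), 5)   # i == 2, j == 2 (the fifth element)
sub2ind((3,4), 2, 2)       # 5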
Array(type, dims)
Construct an uninitialized dense array. dims may be a tuple or a series of integer arguments.
getindex(type[, elements...])
Construct a 1-d array of the specified type. This is usually called with the syntax Type[]. Element values can be specified using Type[a,b,c,...].
Construct an uninitialized cell array (heterogeneous array). dims can be either a tuple or a series of integer arguments.
zeros(type, dims)
Create an array of all zeros of specified type
ones(type, dims)
Create an array of all ones of specified type
infs(type, dims)
Create an array where every element is infinite and of the specified type
nans(type, dims)
Create an array where every element is NaN of the specified type
Create a Bool array with all values set to true
Create a Bool array with all values set to false
fill(v, dims)
Create an array filled with v
fill!(A, x)
Fill array A with value x
reshape(A, dims)
Create an array with the same data as the given array, but with different dimensions. An implementation for a particular type of array may choose whether the data is copied or shared.
similar(array, element_type, dims)
Create an uninitialized array of the same type as the given array, but with the specified element type and dimensions. The second and third arguments are both optional. The dims argument may be a
tuple or a series of integer arguments.
reinterpret(type, A)
Construct an array with the same binary data as the given array, but with the specified element type
n-by-n identity matrix
eye(m, n)
m-by-n identity matrix
linspace(start, stop, n)
Construct a vector of n linearly-spaced elements from start to stop.
logspace(start, stop, n)
Construct a vector of n logarithmically-spaced numbers from 10^start to 10^stop.
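Some of these constructors in action:
zeros(Float64, 2, 3)  # 2x3 array of 0.0
ones(Int, 4)          # [1, 1, 1, 1]
linspace(0, 1, 5)     # [0.0, 0.25, 0.5, 0.75, 1.0]
eye(3)                # 3x3 identity matrix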
Mathematical operators and functions
All mathematical operations and functions are supported for arrays
Indexing, Assignment, and Concatenation
getindex(A, inds...)
Returns a subset of array A as specified by inds, where each ind may be an Int, a Range, or a Vector.
sub(A, inds...)
Returns a SubArray, which stores the input A and inds rather than computing the result immediately. Calling getindex on a SubArray computes the indices on the fly.
slicedim(A, d, i)
Return all the data of A where the index for dimension d equals i. Equivalent to A[:,:,...,i,:,:,...] where i is in position d.
setindex!(A, X, inds...)
Store values from array X within some subset of A as specified by inds.
broadcast_getindex(A, inds...)
Broadcasts the inds arrays to a common size like broadcast, and returns an array of the results A[ks...], where ks goes over the positions in the broadcast.
broadcast_setindex!(A, X, inds...)
Broadcasts the X and inds arrays to a common size and stores the value from each position in X at the indices given by the same positions in inds.
cat(dim, A...)
Concatenate the input arrays along the specified dimension
Concatenate along dimension 1
Concatenate along dimension 2
hvcat(rows::(Int...), values...)
Horizontal and vertical concatenation in one call. This function is called for block matrix syntax. The first argument specifies the number of arguments to concatenate in each block row. For
example, [a b;c d e] calls hvcat((2,3),a,b,c,d,e).
flipdim(A, d)
Reverse A in dimension d.
Equivalent to flipdim(A,1).
Equivalent to flipdim(A,2).
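A brief concatenation sketch:
a = [1 2]; b = [3 4]
vcat(a, b)   # 2x2 matrix with rows [1 2] and [3 4]
hcat(a, b)   # 1x4 matrix: [1 2 3 4]
[a b; b a]   # 2x4 block matrix, handled by hvcat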
circshift(A, shifts)
Circularly shift the data in an array. The second argument is a vector giving the amount to shift in each dimension.
Return a vector of the linear indexes of the non-zeros in A.
find(f, A)
Return a vector of the linear indexes of A where f returns true.
Return a vector of indexes for each dimension giving the locations of the non-zeros in A.
Return a vector of the non-zero values in array A.
Return the index of the first non-zero value in A.
findfirst(A, v)
Return the index of the first element equal to v in A.
findfirst(predicate, A)
Return the index of the first element that satisfies the given predicate in A.
permutedims(A, perm)
Permute the dimensions of array A. perm is a vector specifying a permutation of length ndims(A). This is a generalization of transpose for multi-dimensional arrays. Transpose is equivalent to permutedims(A, [2,1]).
ipermutedims(A, perm)
Like permutedims(), except the inverse of the given permutation is applied.
squeeze(A, dims)
Remove the dimensions specified by dims from array A
vec(Array) → Vector
Vectorize an array using column-major convention.
Array functions
mean(v[, region])
Compute the mean of whole array v, or optionally along the dimensions in region.
std(v[, region])
Compute the sample standard deviation of a vector or array v, optionally along dimensions in region. The algorithm returns an estimator of the generative distribution’s standard deviation under the assumption that each entry of v is an IID draw from that generative distribution. This computation is equivalent to calculating sqrt(sum((v - mean(v)).^2) / (length(v) - 1)).
stdm(v, m)
Compute the sample standard deviation of a vector v with known mean m.
var(v[, region])
Compute the sample variance of a vector or array v, optionally along dimensions in region. The algorithm will return an estimator of the generative distribution’s variance under the assumption that each entry of v is an IID draw from that generative distribution. This computation is equivalent to calculating sum((v - mean(v)).^2) / (length(v) - 1).
varm(v, m)
Compute the sample variance of a vector v with known mean m.
Compute the median of a vector v.
hist(v[, n]) → e, counts
Compute the histogram of v, optionally using approximately n bins. The return values are a range e, which correspond to the edges of the bins, and counts containing the number of elements of v in
each bin.
hist(v, e) → e, counts
Compute the histogram of v using a vector/range e as the edges for the bins. The result will be a vector of length length(e) - 1, with the ith element being sum(e[i] .< v .<= e[i+1]).
histrange(v, n)
Compute nice bin ranges for the edges of a histogram of v, using approximately n bins. The resulting step sizes will be 1, 2 or 5 multiplied by a power of 10.
Compute the midpoints of the bins with edges e. The result is a vector/range of length length(e) - 1.
quantile(v, p)
Compute the quantiles of a vector v at a specified set of probability values p.
Compute the quantiles of a vector v at the probability values [.0, .2, .4, .6, .8, 1.0].
cov(v1[, v2])
Compute the Pearson covariance between two vectors v1 and v2. If called with a single element v, then computes covariance of columns of v.
cor(v1[, v2])
Compute the Pearson correlation between two vectors v1 and v2. If called with a single element v, then computes correlation of columns of v.
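For example (the exact values follow from the formulas above):
v = [1.0, 2.0, 3.0, 4.0]
mean(v)    # 2.5
var(v)     # 1.6666..., i.e. sum((v - 2.5).^2) / 3
std(v)     # sqrt(var(v))
median(v)  # 2.5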
Signal Processing
FFT functions in Julia are largely implemented by calling functions from FFTW
fft(A[, dims])
Performs a multidimensional FFT of the array A. The optional dims argument specifies an iterable subset of dimensions (e.g. an integer, range, tuple, or array) to transform along. Most efficient
if the size of A along the transformed dimensions is a product of small primes; see nextprod(). See also plan_fft() for even greater efficiency.
A one-dimensional FFT computes the one-dimensional discrete Fourier transform (DFT) as defined by \(\operatorname{DFT}[k] = \sum_{n=1}^{\operatorname{length}(A)} \exp\left(-i\frac{2\pi (n-1)
(k-1)}{\operatorname{length}(A)} \right) A[n]\). A multidimensional FFT simply performs this operation along each transformed dimension of A.
fft!(A[, dims])
Same as fft(), but operates in-place on A, which must be an array of complex floating-point numbers.
ifft(A[, dims])
Multidimensional inverse FFT.
A one-dimensional backward FFT computes \(\operatorname{BDFT}[k] = \sum_{n=1}^{\operatorname{length}(A)} \exp\left(+i\frac{2\pi (n-1)(k-1)}{\operatorname{length}(A)} \right) A[n]\). A
multidimensional backward FFT simply performs this operation along each transformed dimension of A. The inverse FFT computes the same thing divided by the product of the transformed dimensions.
ifft!(A[, dims])
Same as ifft(), but operates in-place on A.
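A minimal round-trip sketch (recovery is exact up to floating-point error):
x = rand(8)
X = fft(x)         # complex spectrum of length 8
y = real(ifft(X))  # recovers x up to rounding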
bfft(A[, dims])
Similar to ifft(), but computes an unnormalized inverse (backward) transform, which must be divided by the product of the sizes of the transformed dimensions in order to obtain the inverse. (This
is slightly more efficient than ifft() because it omits a scaling step, which in some applications can be combined with other computational steps elsewhere.)
bfft!(A[, dims])
Same as bfft(), but operates in-place on A.
plan_fft(A[, dims[, flags[, timelimit]]])
Pre-plan an optimized FFT along given dimensions (dims) of arrays matching the shape and type of A. (The first two arguments have the same meaning as for fft().) Returns a function plan(A) that
computes fft(A, dims) quickly.
The flags argument is a bitwise-or of FFTW planner flags, defaulting to FFTW.ESTIMATE. e.g. passing FFTW.MEASURE or FFTW.PATIENT will instead spend several seconds (or more) benchmarking
different possible FFT algorithms and picking the fastest one; see the FFTW manual for more information on planner flags. The optional timelimit argument specifies a rough upper bound on the
allowed planning time, in seconds. Passing FFTW.MEASURE or FFTW.PATIENT may cause the input array A to be overwritten with zeros during plan creation.
plan_fft!() is the same as plan_fft() but creates a plan that operates in-place on its argument (which must be an array of complex floating-point numbers). plan_ifft() and so on are similar but
produce plans that perform the equivalent of the inverse transforms ifft() and so on.
plan_ifft(A[, dims[, flags[, timelimit]]])
Same as plan_fft(), but produces a plan that performs inverse transforms ifft().
plan_bfft(A[, dims[, flags[, timelimit]]])
Same as plan_fft(), but produces a plan that performs an unnormalized backwards transform bfft().
plan_fft!(A[, dims[, flags[, timelimit]]])
Same as plan_fft(), but operates in-place on A.
plan_ifft!(A[, dims[, flags[, timelimit]]])
Same as plan_ifft(), but operates in-place on A.
plan_bfft!(A[, dims[, flags[, timelimit]]])
Same as plan_bfft(), but operates in-place on A.
rfft(A[, dims])
Multidimensional FFT of a real array A, exploiting the fact that the transform has conjugate symmetry in order to save roughly half the computational time and storage costs compared with fft().
If A has size (n_1, ..., n_d), the result has size (floor(n_1/2)+1, ..., n_d).
The optional dims argument specifies an iterable subset of one or more dimensions of A to transform, similar to fft(). Instead of (roughly) halving the first dimension of A in the result, the
dims[1] dimension is (roughly) halved in the same way.
irfft(A, d[, dims])
Inverse of rfft(): for a complex array A, gives the corresponding real array whose FFT yields A in the first half. As for rfft(), dims is an optional subset of dimensions to transform, defaulting
to 1:ndims(A).
d is the length of the transformed real array along the dims[1] dimension, which must satisfy d == floor(size(A,dims[1])/2)+1. (This parameter cannot be inferred from size(A) due to the
possibility of rounding by the floor function here.)
brfft(A, d[, dims])
Similar to irfft() but computes an unnormalized inverse transform (similar to bfft()), which must be divided by the product of the sizes of the transformed dimensions (of the real output array)
in order to obtain the inverse transform.
plan_rfft(A[, dims[, flags[, timelimit]]])
Pre-plan an optimized real-input FFT, similar to plan_fft() except for rfft() instead of fft(). The first two arguments, and the size of the transformed result, are the same as for rfft().
plan_irfft(A, d[, dims[, flags[, timelimit]]])
Pre-plan an optimized inverse real-input FFT, similar to plan_rfft() except for irfft() and brfft(), respectively. The first three arguments have the same meaning as for irfft().
dct(A[, dims])
Performs a multidimensional type-II discrete cosine transform (DCT) of the array A, using the unitary normalization of the DCT. The optional dims argument specifies an iterable subset of
dimensions (e.g. an integer, range, tuple, or array) to transform along. Most efficient if the size of A along the transformed dimensions is a product of small primes; see nextprod(). See also
plan_dct() for even greater efficiency.
dct!(A[, dims])
Same as dct(), except that it operates in-place on A, which must be an array of real or complex floating-point values.
idct(A[, dims])
Computes the multidimensional inverse discrete cosine transform (DCT) of the array A (technically, a type-III DCT with the unitary normalization). The optional dims argument specifies an iterable
subset of dimensions (e.g. an integer, range, tuple, or array) to transform along. Most efficient if the size of A along the transformed dimensions is a product of small primes; see nextprod().
See also plan_idct() for even greater efficiency.
idct!(A[, dims])
Same as idct(), but operates in-place on A.
plan_dct(A[, dims[, flags[, timelimit]]])
Pre-plan an optimized discrete cosine transform (DCT), similar to plan_fft() except producing a function that computes dct(). The first two arguments have the same meaning as for dct().
plan_dct!(A[, dims[, flags[, timelimit]]])
Same as plan_dct(), but operates in-place on A.
plan_idct(A[, dims[, flags[, timelimit]]])
Pre-plan an optimized inverse discrete cosine transform (DCT), similar to plan_fft() except producing a function that computes idct(). The first two arguments have the same meaning as for idct().
plan_idct!(A[, dims[, flags[, timelimit]]])
Same as plan_idct(), but operates in-place on A.
Swap the first and second halves of each dimension of x.
fftshift(x, dim)
Swap the first and second halves of the given dimension of array x.
ifftshift(x[, dim])
Undoes the effect of fftshift.
filt(b, a, x)
Apply filter described by vectors a and b to vector x.
deconv(b, a)
Construct vector c such that b = conv(a,c) + r. Equivalent to polynomial division.
conv(u, v)
Convolution of two vectors. Uses FFT algorithm.
xcorr(u, v)
Compute the cross-correlation of two vectors.
The following functions are defined within the Base.FFTW module.
r2r(A, kind[, dims])
Performs a multidimensional real-input/real-output (r2r) transform of type kind of the array A, as defined in the FFTW manual. kind specifies either a discrete cosine transform of various types (
FFTW.REDFT00, FFTW.REDFT01, FFTW.REDFT10, or FFTW.REDFT11), a discrete sine transform of various types (FFTW.RODFT00, FFTW.RODFT01, FFTW.RODFT10, or FFTW.RODFT11), a real-input DFT with
halfcomplex-format output (FFTW.R2HC and its inverse FFTW.HC2R), or a discrete Hartley transform (FFTW.DHT). The kind argument may be an array or tuple in order to specify different transform
types along the different dimensions of A; kind[end] is used for any unspecified dimensions. See the FFTW manual for precise definitions of these transform types, at http://www.fftw.org/doc.
The optional dims argument specifies an iterable subset of dimensions (e.g. an integer, range, tuple, or array) to transform along. kind[i] is then the transform type for dims[i], with kind[end]
being used for i > length(kind).
See also plan_r2r() to pre-plan optimized r2r transforms.
r2r!(A, kind[, dims])
Same as r2r(), but operates in-place on A, which must be an array of real or complex floating-point numbers.
plan_r2r(A, kind[, dims[, flags[, timelimit]]])
Pre-plan an optimized r2r transform, similar to Base.plan_fft() except that the transforms (and the first three arguments) correspond to r2r() and r2r!(), respectively.
plan_r2r!(A, kind[, dims[, flags[, timelimit]]])
Similar to Base.plan_fft(), but corresponds to r2r!().
Numerical Integration
Although several external packages are available for numeric integration and solution of ordinary differential equations, we also provide some built-in integration support in Julia.
quadgk(f, a, b, c...; reltol=sqrt(eps), abstol=0, maxevals=10^7, order=7)
Numerically integrate the function f(x) from a to b, and optionally over additional intervals b to c and so on. Keyword options include a relative error tolerance reltol (defaults to sqrt(eps) in
the precision of the endpoints), an absolute error tolerance abstol (defaults to 0), a maximum number of function evaluations maxevals (defaults to 10^7), and the order of the integration rule
(defaults to 7).
Returns a pair (I,E) of the estimated integral I and an estimated upper bound on the absolute error E. If maxevals is not exceeded then either E <= abstol or E <= reltol*norm(I) will hold. (Note
that it is useful to specify a positive abstol in cases where norm(I) may be zero.)
The endpoints a etcetera can also be complex (in which case the integral is performed over straight-line segments in the complex plane). If the endpoints are BigFloat, then the integration will
be performed in BigFloat precision as well (note: it is advisable to increase the integration order in rough proportion to the precision, for smooth integrands). More generally, the precision is
set by the precision of the integration endpoints (promoted to floating-point types).
The integrand f(x) can return any numeric scalar, vector, or matrix type, or in fact any type supporting +, -, multiplication by real values, and a norm (i.e., any normed vector space).
The algorithm is an adaptive Gauss-Kronrod integration technique: the integral in each interval is estimated using a Kronrod rule (2*order+1 points) and the error is estimated using an embedded
Gauss rule (order points). The interval with the largest error is then subdivided into two intervals and the process is repeated until the desired error tolerance is achieved.
These quadrature rules work best for smooth functions within each interval, so if your function has a known discontinuity or other singularity, it is best to subdivide your interval to put the
singularity at an endpoint. For example, if f has a discontinuity at x=0.7 and you want to integrate from 0 to 1, you should use quadgk(f, 0,0.7,1) to subdivide the interval at the point of
discontinuity. The integrand is never evaluated exactly at the endpoints of the intervals, so it is possible to integrate functions that diverge at the endpoints as long as the singularity is
integrable (for example, a log(x) or 1/sqrt(x) singularity).
For real-valued endpoints, the starting and/or ending points may be infinite. (A coordinate transformation is performed internally to map the infinite interval to a finite one.)
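For example, integrating a smooth function and one with an integrable endpoint singularity:
I, E = quadgk(x -> x^2, 0, 1)         # I is approximately 1/3
I, E = quadgk(x -> 1/sqrt(x), 0, 1)   # singularity at 0; I is approximately 2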
Parallel Computing
addprocs(n) → List of process identifiers
Add processes on the local machine. Can be used to take advantage of multiple cores.
addprocs({"host1", "host2", ...}; tunnel=false, dir=JULIA_HOME, sshflags::Cmd=``, cman::ClusterManager) → List of process identifiers
Add processes on remote machines via SSH or a custom cluster manager. Requires julia to be installed in the same location on each node, or to be available via a shared file system.
Keyword arguments:
tunnel : if true then SSH tunneling will be used to connect to the worker.
dir : specifies the location of the julia binaries on the worker nodes.
sshflags : specifies additional ssh options, e.g. sshflags=`-i /home/foo/bar.pem` .
cman : Workers are started using the specified cluster manager.
For example Beowulf clusters are supported via a custom cluster manager implemented in package ClusterManagers.
See the documentation for package ClusterManagers for more information on how to write a custom cluster manager.
addprocs_sge(n) - DEPRECATED from Base, use ClusterManagers.addprocs_sge(n)
Adds processes via the Sun/Oracle Grid Engine batch queue, using qsub.
Get the number of available processors.
Get the number of available worker processors. This is one less than nprocs(). Equal to nprocs() if nprocs() == 1.
Returns a list of all process identifiers.
Returns a list of all worker process identifiers.
Removes the specified workers.
Get the id of the current processor.
pmap(f, c)
Transform collection c by applying f to each element in parallel. If nprocs() > 1, the calling process will be dedicated to assigning tasks. All other available processes will be used as parallel workers.
remotecall(id, func, args...)
Call a function asynchronously on the given arguments on the specified processor. Returns a RemoteRef.
Block the current task until some event occurs, depending on the type of the argument:
□ RemoteRef: Wait for a value to become available for the specified remote reference.
□ Condition: Wait for notify on a condition.
□ Process: Wait for the process to exit, and get its exit code.
□ Task: Wait for a Task to finish, returning its result value.
Wait for and get the value of a remote reference.
remotecall_wait(id, func, args...)
Perform wait(remotecall(...)) in one message.
remotecall_fetch(id, func, args...)
Perform fetch(remotecall(...)) in one message.
put(RemoteRef, value)
Store a value to a remote reference. Implements “shared queue of length 1” semantics: if a value is already present, blocks until the value is removed with take.
Fetch the value of a remote reference, removing it so that the reference is empty again.
Make an uninitialized remote reference on the local machine.
Make an uninitialized remote reference on processor n.
timedwait(testcb::Function, secs::Float64; pollint::Float64=0.1)
Waits until testcb returns true or for secs seconds, whichever is earlier. testcb is polled every pollint seconds.
Execute an expression on an automatically-chosen processor, returning a RemoteRef to the result.
Accepts two arguments, p and an expression, and runs the expression asynchronously on processor p, returning a RemoteRef to the result.
Equivalent to fetch(@spawn expr).
Equivalent to fetch(@spawnat p expr).
Schedule an expression to run on the local machine, also adding it to the set of items that the nearest enclosing @sync waits for.
Wait until all dynamically-enclosed uses of @async, @spawn, and @spawnat complete.
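A minimal sketch bringing several of these primitives together (assumes the local machine has spare cores):
addprocs(2)                  # start two local worker processes
r = remotecall(2, rand, 3)   # RemoteRef to a length-3 random vector on worker 2
fetch(r)                     # wait for and retrieve the value
s = @spawn sum(rand(100))    # run on an automatically chosen worker
fetch(s)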
Distributed Arrays
DArray(init, dims[, procs, dist])
Construct a distributed array. init is a function that accepts a tuple of index ranges. This function should allocate a local chunk of the distributed array and initialize it for the specified
indices. dims is the overall size of the distributed array. procs optionally specifies a vector of processor IDs to use. dist is an integer vector specifying how many chunks the distributed array
should be divided into in each dimension.
For example, the dfill function that creates a distributed array and fills it with a value v is implemented as:
dfill(v, args...) = DArray(I->fill(v, map(length,I)), args...)
dzeros(dims, ...)
Construct a distributed array of zeros. Trailing arguments are the same as those accepted by darray.
dones(dims, ...)
Construct a distributed array of ones. Trailing arguments are the same as those accepted by darray.
dfill(x, dims, ...)
Construct a distributed array filled with value x. Trailing arguments are the same as those accepted by darray.
drand(dims, ...)
Construct a distributed uniform random array. Trailing arguments are the same as those accepted by darray.
drandn(dims, ...)
Construct a distributed normal random array. Trailing arguments are the same as those accepted by darray.
Convert a local array to distributed
Get the local piece of a distributed array
A tuple describing the indexes owned by the local processor
Get the vector of processors storing pieces of d
Run a command object, constructed with backticks. Throws an error if anything goes wrong, including the process exiting with a non-zero status.
Run a command object asynchronously, returning the resulting Process object.
Run a command object, constructed with backticks, and tell whether it was successful (exited with a code of 0).
Determine whether a process is currently running.
Determine whether a process has exited.
Get the exit status of an exited process. The result is undefined if the process is still running. Use wait(p) to wait for a process to exit, and get its exit status.
kill(p::Process, signum=SIGTERM)
Send a signal to a process. The default is to terminate the process.
Starts running a command asynchronously, and returns a tuple (stream,process). The first value is a stream reading from the process’ standard output.
Starts running a command asynchronously, and returns a tuple (stream,process). The first value is a stream writing to the process’ standard input.
Starts running a command asynchronously, and returns a tuple (stdout,stdin,process) of the output stream and input stream of the process, and the process object itself.
Mark a command object so that running it will not throw an error if the result code is non-zero.
Mark a command object so that it will be run in a new process group, allowing it to outlive the julia process, and not have Ctrl-C interrupts passed to it.
Redirect standard input or output of a process.
Example: run(`ls` |> "out.log") Example: run("file.txt" |> `cat`)
Redirect standard output of a process, appending to the destination file.
Redirect the standard error stream of a process.
gethostname() → String
Get the local machine’s host name.
getipaddr() → String
Get the IP address of the local machine, as a string of the form “x.x.x.x”.
pwd() → String
Get the current working directory.
Set the current working directory. Returns the new current directory.
cd(f[, dir])
Temporarily changes the current working directory (HOME if not specified) and applies function f before returning.
mkdir(path[, mode])
Make a new directory with name path and permissions mode. mode defaults to 0o777, modified by the current file creation mask.
mkpath(path[, mode])
Create all directories in the given path, with permissions mode. mode defaults to 0o777, modified by the current file creation mask.
Remove the directory named path.
getpid() → Int32
Get julia’s process ID.
Get the system time in seconds since the epoch, with fairly high (typically, microsecond) resolution. When passed a TmStruct, converts it to a number of seconds since the epoch.
Get the time in nanoseconds. The time corresponding to 0 is undefined, and wraps every 5.8 years.
strftime([format], time)
Convert time, given as a number of seconds since the epoch or a TmStruct, to a formatted string using the given format. Supported formats are the same as those in the standard C library.
strptime([format], timestr)
Parse a formatted time string into a TmStruct giving the seconds, minute, hour, date, etc. Supported formats are the same as those in the standard C library. On some platforms, timezones will not
be parsed correctly. If the result of this function will be passed to time to convert it to seconds since the epoch, the isdst field should be filled in manually. Setting it to -1 will tell the C
library to use the current system settings to determine the timezone.
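A small sketch (the formatted string shown is illustrative and depends on the current time and timezone):
t = time()                               # seconds since the epoch
strftime("%Y-%m-%d %H:%M:%S", t)         # e.g. "2013-05-01 12:00:00"
tm = strptime("%Y-%m-%d", "2013-05-01")  # TmStruct with the parsed fields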
Convert a number of seconds since the epoch to broken-down format, with fields sec, min, hour, mday, month, year, wday, yday, and isdst.
Set a timer to be read by the next call to toc() or toq(). The macro call @time expr can also be used to time evaluation.
Print and return the time elapsed since the last tic().
Return, but do not print, the time elapsed since the last tic().
A macro to execute an expression, printing the time it took to execute and the total number of bytes its execution caused to be allocated, before returning the value of the expression.
A macro to evaluate an expression, discarding the resulting value, instead returning the number of seconds it took to execute as a floating-point number.
A macro to evaluate an expression, discarding the resulting value, instead returning the total number of bytes allocated during evaluation of the expression.
EnvHash() → EnvHash
A singleton of this type provides a hash table interface to environment variables.
Reference to the singleton EnvHash, providing a dictionary interface to system environment variables.
Given @unix? a : b, do a on Unix systems (including Linux and OS X) and b elsewhere. See documentation for Handling Platform Variations in the Calling C and Fortran Code section of the manual.
Given @osx? a : b, do a on OS X and b elsewhere. See documentation for Handling Platform Variations in the Calling C and Fortran Code section of the manual.
Given @linux? a : b, do a on Linux and b elsewhere. See documentation for Handling Platform Variations in the Calling C and Fortran Code section of the manual.
Given @windows? a : b, do a on Windows and b elsewhere. See documentation for Handling Platform Variations in the Calling C and Fortran Code section of the manual.
C Interface
ccall((symbol, library) or fptr, RetType, (ArgType1, ...), ArgVar1, ...)
Call function in C-exported shared library, specified by (function name, library) tuple (String or :Symbol). Alternatively, ccall may be used to call a function pointer returned by dlsym, but
note that this usage is generally discouraged to facilitate future static compilation.
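For example, calling the clock function from the C standard library (a standard illustration; how the library name resolves is platform-dependent):
t = ccall((:clock, "libc"), Int32, ())   # no arguments, returns an Int32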
cglobal((symbol, library) or ptr[, Type=Void])
Obtain a pointer to a global variable in a C-exported shared library, specified exactly as in ccall. Returns a Ptr{Type}, defaulting to Ptr{Void} if no Type argument is supplied. The values can
be read or written by unsafe_load or unsafe_store!, respectively.
cfunction(fun::Function, RetType::Type, (ArgTypes...))
Generate a C-callable function pointer from a Julia function. Type annotation of the return value in the callback function is a must for situations where Julia cannot infer the return type.
For example:
function foo()
    # body
end
bar = cfunction(foo, Float64, ())
dlopen(libfile::String[, flags::Integer])
Load a shared library, returning an opaque handle.
The optional flags argument is a bitwise-or of zero or more of RTLD_LOCAL, RTLD_GLOBAL, RTLD_LAZY, RTLD_NOW, RTLD_NODELETE, RTLD_NOLOAD, RTLD_DEEPBIND, and RTLD_FIRST. These are converted to the
corresponding flags of the POSIX (and/or GNU libc and/or MacOS) dlopen command, if possible, or are ignored if the specified functionality is not available on the current platform. The default is
RTLD_LAZY|RTLD_DEEPBIND|RTLD_LOCAL. An important usage of these flags, on POSIX platforms, is to specify RTLD_LAZY|RTLD_DEEPBIND|RTLD_GLOBAL in order for the library’s symbols to be available for
usage in other shared libraries, in situations where there are dependencies between shared libraries.
dlsym(handle, sym)
Look up a symbol from a shared library handle, return callable function pointer on success.
dlsym_e(handle, sym)
Look up a symbol from a shared library handle, silently return NULL pointer on lookup failure.
Close shared library referenced by handle.
Call free() from C standard library.
unsafe_load(p::Ptr{T}, i::Integer)
Dereference the pointer p[i] or *p, returning a copy of type T.
unsafe_store!(p::Ptr{T}, x, i::Integer)
Assign to the pointer p[i] = x or *p = x, making a copy of object x into the memory at p.
pointer(a[, index])
Get the native address of an array element. Be careful to ensure that a julia reference to a exists as long as this pointer will be used.
pointer(type, int)
Convert an integer to a pointer of the specified element type.
pointer_to_array(p, dims[, own])
Wrap a native pointer as a Julia Array object. The pointer element type determines the array element type. own optionally specifies whether Julia should take ownership of the memory, calling free
on the pointer when the array is no longer referenced.
find_library(names, locations)
Searches for the first library in names in the paths in the locations list, DL_LOAD_PATH, or system library paths (in that order) which can successfully be dlopen’d. On success, the return value
will be one of the names (potentially prefixed by one of the paths in locations). This string can be assigned to a global const and used as the library name in future ccall‘s. On failure, it
returns the empty string.
When calling dlopen, the paths in this list will be searched first, in order, before searching the system locations for a valid library handle.
Create a Task (i.e. thread, or coroutine) to execute the given function. The task exits when this function returns.
yieldto(task, args...)
Switch to the given task. The first time a task is switched to, the task’s function is called with args. On subsequent switches, args are returned from the task’s last call to yieldto.
Get the currently running Task.
Tell whether a task has exited.
Receive the next value passed to produce by the specified task.
Send the given value to the last consume call, switching to the consumer task.
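A classic producer sketch using these primitives:
function producer()
    produce("start")
    for n = 1:2
        produce(2n)
    end
    produce("stop")
end
p = Task(producer)
consume(p)   # "start"
consume(p)   # 2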
For scheduled tasks, switch back to the scheduler to allow another scheduled task to run. A task that calls this function is still runnable, and will be restarted immediately if there are no
other runnable tasks.
Look up the value of a symbol in the current task’s task-local storage.
task_local_storage(symbol, value)
Assign a value to a symbol in the current task’s task-local storage.
Create an edge-triggered event source that tasks can wait for. Tasks that call wait on a Condition are suspended and queued. Tasks are woken up when notify is later called on the Condition. Edge
triggering means that only tasks waiting at the time notify is called can be woken up. For level-triggered notifications, you must keep extra state to keep track of whether a notification has
happened. The RemoteRef type does this, and so can be used for level-triggered events.
notify(condition, val=nothing; all=true, error=false)
Wake up tasks waiting for a condition, passing them val. If all is true (the default), all waiting tasks are woken, otherwise only one is. If error is true, the passed value is raised as an
exception in the woken tasks.
Add a task to the scheduler’s queue. This causes the task to run constantly when the system is otherwise idle, unless the task performs a blocking operation such as wait.
Wrap an expression in a Task and add it to the scheduler’s queue.
Wrap an expression in a Task executing it, and return the Task. This only creates a task, and does not run it.
Block the current task for a specified number of seconds.
Perform garbage collection. This should not generally be used.
Disable garbage collection. This should be used only with extreme caution, as it can cause memory use to grow without bound.
Re-enable garbage collection after calling gc_disable. | {"url":"http://docs.julialang.org/en/release-0.1/stdlib/base/","timestamp":"2014-04-21T15:12:50Z","content_type":null,"content_length":"372323","record_id":"<urn:uuid:da8195cf-cefa-44d2-8548-e49715d42215>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00445-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math and Literature
Our family likes to read. Are there any good books that make math part of the story?
Find math where you least expect it—in your child’s books and reading materials.
Math turns up in the most unexpected places! You'll find it somewhere between the covers of almost any book—novels, mysteries, biographies, legends, and adventures.
For ideas on how you and your child might talk about the math you discover in books you read, look over these selections and the questions that follow below them. But they're only starters.
You'll soon start finding the math in everything you read. Talking about it is one more way to bring your family together. Enjoy!
Doing mathematics as you read literature is a real possibility if you choose the right books. The following passages from contemporary literature, and the accompanying questions, can give you some
ideas on how to mix math and your reading.
The Phantom Tollbooth by Norton Juster
Bullseye Books, New York, 1961
“Oh, we’re just the average family,” he said thoughtfully; “mother, father, and 2.58 children—and, as I explained, I’m the .58.”
“It must be rather odd being only part of a person,” Milo remarked.
“Not at all,” said the child. “Every average family has 2.58 children, so I always have someone to play with. Besides, each family also has an average of 1.3 automobiles, and since I’m the only
one who can drive three tenths of a car, I get to use it all the time.”
□ What are averages? When are they useful? Are they “real” numbers?
□ Every middle school child should be able to discuss averages. What does average mean to you?
Hatchet by Gary Paulsen
Penguin Books, New York, 1987
[Brian] looked at the dashboard of the plane, studied the dials and hoped to get some help, hoped to find a compass, but it was all so confusing, a jumble of numbers and lights.
He tried to figure out the dials…. He thought he might know which was speed—it was a lighted number that read 160—but he didn’t know if that was actual miles an hour, or kilometers, or if it just
meant how fast the plane was moving through the air and not over the ground.
When the pilot had jerked he had moved the plane, but Brian could not remember how much or if it had come back to its original course. Since he did not know the original course anyway and could
only guess at which display might be the compass—the one reading 342—he did not know where he had been or where he was going….
□ How are 160 miles an hour and 160 kilometers an hour different?
□ What markings are on a compass? Can a compass have a reading of 342°?
Holes by Louis Sachar
Frances Foster Books, New York, 1998
“Instead of twenty-six letters. There are really fifty-two.”
Stanley looked at him surprised. “I guess that’s right. How’d you figure that out?” he asked.
Zero said nothing.
“Did you add?”
Zero said nothing.
“Did you multiply?”
“That’s just how many there are.”
“It’s good math,” said Stanley.
“I’m not stupid,” Zero said. “I know everybody thinks I am. I just don’t like answering their questions.”
□ How many letters of the English alphabet are there?
□ Why did Zero say there are 52?
□ What type of reasoning is being used? | {"url":"http://www.nctm.org/resources/content.aspx?id=2880","timestamp":"2014-04-24T17:22:08Z","content_type":null,"content_length":"46733","record_id":"<urn:uuid:a30cba79-d733-4522-b3e3-78a69c5752e6>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00653-ip-10-147-4-33.ec2.internal.warc.gz"} |
Partial Correlations
Nadine Nemec & Peter Wilde nadpete at cruzio.com
Thu Nov 9 01:22:02 EST 1995
Partial correlations are based on the model that a dependent variable (y) is
correlated with a set of independent variables (x1,x2,..xn). Each partial
correlation measures the correlation of y with a given xi, with the other
independent variables held constant - the correlation with just that independent
variable alone. A good discussion can be found in Zar, Biostatistics.
A significant negative partial correlation means that y varies inversely with the
given xi (i.e., as xi increases, y decreases and vice versa). In other words,
there may be a linear relationship between the variables with a negative slope. So,
a significant negative partial correlation means the same as a positive one, just
that the sign of the relationship is negative. A non-significant, or zero, partial
correlation is just that: statistically, there is no correlation between the y and
xi - variation in xi is not reflected with a variation in y.
The preceding is best stated in terms of null hypotheses. See the texts for
that. An important point with correlation is that a non-zero correlation between
two variables does not imply there is a causal relationship between the two, only
that they vary together in a linear way.
Pete Wilde
Kinnetic Laboratories, Inc.
kinnetic at cruzio.com
Re: Geometry terms/ Tessellation/ correction
Woody, You are confusing me.
Yes, a Rhombus will tessellate. A rhombus (4 congruent sides with 2 equal angles of less than 90 and 2 equal angles greater than 90, totaling 360 degrees) is a squashed leaning-over square (4 congruent sides with four 90 degree angles).
Test this out: the sum of the angles that share a vertex must = 360
degrees. 6 equilateral triangles arranged around a vertex = 360
degrees and will tessellate. Lay 3 hexagons around a point and they =
360 degrees and will tessellate. Lay 4 pentagons around a point and
they overlap because the sum is 432 degrees and won't tessellate.
Bunki! Help me out here! What a day to be without a protractor!
>From: Woody Duncan <wduncan@kc.rr.com>
>Subject: Re: Geometry terms/ Tessellation/ correction
>Date: Wed, Apr 18, 2001, 8:00 PM
>> A shape that will tessellate is a shape which has angles that total
>> 360 degrees. Any other shape will not tessellate.
> A Rhombus has more than 360 degrees
> and A Rhombus will tessellate
>> The three angles of
>> a triangle will add up to 360 degrees therefore a triangle will
>> tessellate.
> What triangle's 3 angles do not add up to 360 degrees ?
> Will every triangle tessellate ? I use equilateral triangles,
> will others tessellate ?
>> A square has angles that add up to 360 degrees therefore a
>> square will tessellate.
> Just some questions, Woody in KC | {"url":"http://www.getty.edu/education/teacherartexchange/archive/Apr01/1137.html","timestamp":"2014-04-21T05:13:47Z","content_type":null,"content_length":"15472","record_id":"<urn:uuid:a842a9b9-3b09-42fa-a3df-4277722e06f4>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00309-ip-10-147-4-33.ec2.internal.warc.gz"} |
Normal matrix
November 9th 2008, 08:09 PM
Normal matrix
Please help!
Show that if A in Mn(C) is normal, then Ax = 0 if and only if A*x = 0.
Thank you!
Does this hold in general? I am not sure.
November 10th 2008, 10:34 AM
Use the inner product: $Ax=0\ \Longleftrightarrow\ \langle Ax,Ax\rangle=0 \ \Longleftrightarrow\ \langle A^*Ax,x\rangle=0\ \Longleftrightarrow\ \ldots$ (now use the fact that A is normal).
The result is false in general, for example if $A = \begin{bmatrix}0&1\\0&0\end{bmatrix},\ x = \begin{bmatrix}1\\0\end{bmatrix}$. | {"url":"http://mathhelpforum.com/advanced-algebra/58667-normal-matrix-print.html","timestamp":"2014-04-21T05:38:13Z","content_type":null,"content_length":"4192","record_id":"<urn:uuid:ae5cec21-3873-415b-b8db-595a913c761e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |
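A quick numeric check of that counterexample, as an illustrative sketch in NumPy (not part of the original thread):

```python
# A = [[0,1],[0,0]] is not normal; x = (1,0) gives Ax = 0 but A*x != 0.
import numpy as np

A = np.array([[0, 1], [0, 0]], dtype=complex)
x = np.array([1, 0], dtype=complex)
print(A @ x)           # [0.+0.j 0.+0.j]  -> x is in the kernel of A
print(A.conj().T @ x)  # [0.+0.j 1.+0.j]  -> but not in the kernel of A*
```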
Journal of Function Spaces and Applications
Volume 2013 (2013), Article ID 198405, 9 pages
Research Article
Generalizations of Hölder’s and Some Related Integral Inequalities on Fractal Space
Department of Construction and Information Engineering, Guangxi Modern Vocational Technology College, Hechi, Guangxi 547000, China
Received 5 May 2013; Accepted 8 July 2013
Academic Editor: Miguel Martin
Copyright © 2013 Guang-Sheng Chen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Based on the local fractional calculus, we establish some new generalizations of Hölder's inequality. Using these, some related results on generalized integral inequalities in fractal space are investigated in detail.
1. Introduction
Let $p > 1$, $\frac{1}{p} + \frac{1}{q} = 1$, and let $f$ and $g$ be continuous real-valued functions on $[a,b]$. Then the famous Hölder inequality reads as
$$\int_a^b \lvert f(x)g(x)\rvert\,dx \le \left(\int_a^b \lvert f(x)\rvert^p\,dx\right)^{1/p}\left(\int_a^b \lvert g(x)\rvert^q\,dx\right)^{1/q}. \qquad (1)$$
The renowned inequality of Hölder [1] is well celebrated for its beauty and its wide range of important applications to real and complex analysis and functional analysis, as well as many disciplines in applied mathematics. A large number of new proofs, generalizations, refinements, variations, and applications of Hölder's inequality have been investigated in the literature [2–11]. Recently, it came to our attention that an interesting local fractional integral Hölder inequality was established by Yang [12], as follows.
Let $f, g \in C_\alpha[a,b]$, and let $p, q > 1$ with $\frac{1}{p} + \frac{1}{q} = 1$. Then
$$\frac{1}{\Gamma(1+\alpha)}\int_a^b \lvert f(x)g(x)\rvert\,(dx)^\alpha \le \left(\frac{1}{\Gamma(1+\alpha)}\int_a^b \lvert f(x)\rvert^p\,(dx)^\alpha\right)^{1/p}\left(\frac{1}{\Gamma(1+\alpha)}\int_a^b \lvert g(x)\rvert^q\,(dx)^\alpha\right)^{1/q}. \qquad (2)$$
Recently, the local fractional calculus has attracted a lot of interest from scientists and engineers. The local fractional derivative has been introduced in [12–36]: it was formulated in [12–18, 26, 30–36], Jumarie gave the modified Riemann-Liouville derivative in [19, 20], the fractal derivative was considered in [21–25, 27–29], and the generalized fractal derivative was proposed by Chen et al. [25]. As a consequence, the theory of local fractional calculus has become important for modelling problems of fractal mathematics and engineering on Cantor sets, and it plays an important role in applications in several fields such as theoretical physics [14, 18], elasticity and fracture mechanics [14], heat conduction theory [14, 16, 27], signal analysis [12, 13], fluid mechanics [14], tensor analysis [14], Fourier and wavelet analysis [12, 13], optimization methods [14], and complex analysis [12, 13]. For example, the local fractional Fokker-Planck equation was proposed in [18]. The local fractional Stieltjes transform was established in [37]. Fractal heat conduction problems were presented in [14, 27]. The local fractional improper integral was obtained in [38]. The principles of virtual work, minimum potential, and complementary energy in the mechanics of fractal media were investigated in [14]. Mean value theorems for local fractional integrals were considered in [39]. Diffusion problems in fractal media were reported in [24].
The purpose of this work is to establish some generalizations of inequality (2) and to give its corresponding reverse version. Moreover, the obtained results will be applied to establish the local fractional integral reverse Minkowski inequality, Dresher's inequality, and its corresponding reverse version. This paper is divided into four sections. In Section 2, we recall some basic facts about local fractional calculus; in Section 3, we give some generalizations of the local fractional integral Hölder inequality and establish its corresponding reverse version; in Section 4, we apply the obtained results to establish the reverse Minkowski inequality, Dresher's inequality, and its reverse form involving the local fractional integral; some extensions of the Minkowski and Dresher inequalities are also considered.
2. Preliminaries
In this section, we recall the basic notions of local fractional calculus (see [12–14]).
2.1. Local Fractional Continuity of Functions
In order to study the local fractional continuity of nondifferentiable functions on fractal sets, we first give the following results.
Lemma 1 (see [14]). Assume that is a subset of the real line and is a fractal. Let be a bi-Lipschitz mapping. Then, there exist two positive constants , and , such that for all ,
From Lemma 1, we obtain easily such that where is fractal dimension of . The result that is directly deduced from fractal geometry is related to fractal coarse-grained mass function which reads with
where is dimensional Hausdorff measure.
Notice that we take the dimension of any fractal space (e.g., Cantor or Cantor-like spaces) to be a positive number; in this respect it resembles Euclidean space, whose dimension is also a positive number. The detailed results were considered in [12–14].
Definition 2 (see [12, 14]). If there exists with , for and , then is called local fractional continuous at , denoted by . is local fractional continuous on the interval , denoted by if (9) holds for
Definition 3 (see [13, 14]). Assume that is a nondifferentiable function of exponent , , which satisfies the Hölder condition of exponent ; then, for such that
Definition 4 (see [13, 14]). A function is continuous of order , , or shortly continuous, if
Remark 5. Compared with (12), (9) is the standard definition of local fractional continuity. Here, (11) is the unified local fractional continuity [14].
2.2. Local Fractional Derivatives and Integrals
Definition 6 (see [12–14]). Let $f \in C_\alpha(a,b)$. The local fractional derivative of $f(x)$ of order $\alpha$ at $x = x_0$ is given by
$$f^{(\alpha)}(x_0) = \lim_{x \to x_0} \frac{\Delta^\alpha\big(f(x) - f(x_0)\big)}{(x - x_0)^\alpha},$$
where $\Delta^\alpha\big(f(x) - f(x_0)\big) \cong \Gamma(1+\alpha)\big(f(x) - f(x_0)\big)$.
Local fractional derivatives of higher order are obtained by iterating this operator, $f^{((k+1)\alpha)}(x) = \underbrace{D_x^{(\alpha)} \cdots D_x^{(\alpha)}}_{k+1} f(x)$, and local fractional partial derivatives of higher order are defined in the same way.
Definition 7 (see [12–14]). Let $f \in C_\alpha[a,b]$. The local fractional integral of $f(x)$ of order $\alpha$ in the interval $[a,b]$ is defined by
$${}_aI_b^{(\alpha)} f(x) = \frac{1}{\Gamma(1+\alpha)} \int_a^b f(t)\,(dt)^\alpha = \frac{1}{\Gamma(1+\alpha)} \lim_{\Delta t \to 0} \sum_{j=0}^{N-1} f(t_j)\,(\Delta t_j)^\alpha,$$
where $\Delta t_j = t_{j+1} - t_j$, $\Delta t = \max\{\Delta t_0, \Delta t_1, \ldots, \Delta t_{N-1}\}$, and $[t_j, t_{j+1}]$, $j = 0, \ldots, N-1$, $t_0 = a$, $t_N = b$, is a partition of the interval $[a,b]$.
For convenience, we assume that ${}_aI_a^{(\alpha)} f(x) = 0$ and ${}_aI_b^{(\alpha)} f(x) = -\,{}_bI_a^{(\alpha)} f(x)$ for $a < b$. For any $x \in (a,b)$, the local fractional integral ${}_aI_x^{(\alpha)} f(x)$ exists, denoted by $f(x) \in I_x^{(\alpha)}(a,b)$.
Remark 8. If , or then we have
3. Some Generalizations of Hölder Inequality and Its Reverse Form
In the section, we give some generalizations of the inequality (2) and establish its reverse form.
Theorem 9 (reverse Hölder inequality). Let , , and let , . Then,
Proof. Set , , and then we have . By inequality (2), we obtain In (24), multiplying both sides by yields Inequality (25) implies
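To make the direction of the two inequalities concrete, here is a small numeric sanity check of the classical (α = 1) discrete analogues; this illustration is ours and is not part of the original paper:

```python
# For p > 1 (1/p + 1/q = 1) the sum form of Hoelder's inequality holds;
# for 0 < p < 1 (so q < 0) the inequality reverses, provided g > 0.
import numpy as np

rng = np.random.default_rng(0)
f = rng.random(50) + 0.1   # strictly positive samples
g = rng.random(50) + 0.1

def rhs(p):
    q = p / (p - 1)
    return np.sum(f**p) ** (1 / p) * np.sum(g**q) ** (1 / q)

lhs = np.sum(f * g)
print(lhs <= rhs(2.0))   # True: Hoelder with p = 2
print(lhs >= rhs(0.5))   # True: reverse Hoelder with p = 1/2
```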
Combining inequality (2) and Theorem 9, we can derive the following generalization.
Corollary 10. Let , let , , and let . Then,(1)for , one has (2)for , , , one has
Proof. (1) If , , are two positive constants and . In particular, setting , , inequality (27) becomes inequality (2). Suppose (27) holds when . Using mathematical induction, let be real numbers with
and , ; we must have for . In particular, we have By using Hölder’s inequality (2), we obtain since Using induction hypothesis and inequality (30), we can get Hence, this completes the proof.
(2) The Proof of (28) is similar to the proof of (27). Clearly when , inequality (28) becomes Hölder’s inequality (23). Now, suppose that (28) holds for some integer . Let , and let be such that ,
and let , . Note that , since
Observing Höder’s inequality (23), we have
unless .
Since Combining induction hypothesis and (34), we obtain unless for some .
4. Some Related Results
Theorem 11 (Minkowski inequality see [12]). Let , , . Then,
Next, we give reverse version of inequality (37).
Theorem 12 (reverse Minkowski's inequality). Let , , . Then,
Proof. Let By the Hölder inequality, in view of , we have By inequality (40), the reverse Minkowski inequality is proved, and the theorem follows.
Corollary 13. Let , .(1)For , one has (2)For , one has
Proof. (1) Using Theorem 11, we have Multiplying to two sides of (43), we get that (41) holds.
(2) The proof of (42) is similar to the proof of (38), so we omit it here.
Corollary 14. Let , . Then,(1)for , one has (2)for , one has
Theorem 15 (Dresher’s inequality). Let , , and let ; then
Proof. Combining inequality (2) and Theorem 11, we have Using reverse Minkowski inequality (38), we have By (47) and (48), we deduce that (46) holds. This completes the proof of the theorem.
Corollary 16. Let , , . Then,
Theorem 17 (reverse Dresher’s inequality). Let , , . Then,
Proof. Let , , , , and , using Radon's inequality (see [3]). We have equality if and only if the two sequences are proportional. Let and set . Observing (52)-(53), we have Since , let , and let , and by
Theorem 12, we obtain, respectively,
Observing (54)-(55), we obtain the desired results, and the theorem is completely proved.
Corollary 18. Let , and let , . Then,
Acknowledgments
The author would like to thank the anonymous referees for their valuable comments on the original version of this paper. This work was supported by the NNSFC (no. 11201433) and the Scientific Research Project of the Guangxi Education Department (no. 201204LX672).
References
1. E. Hewitt and K. Stromberg, Real and Abstract Analysis. A Modern Treatment of the Theory of Functions of a Real Variable, Second Printing Corrected, Springer, New York, NY, USA, 1969.
2. J. Kuang, Applied Inequalities, Shandong Science Press, Jinan, China, 2003.
3. G. Hardy, J. E. Littlewood, and G. Pólya, Inequalities, Cambridge University Press, Cambridge, UK, 2nd edition, 1953.
4. X. Yang, “A generalization of Hölder inequality,” Journal of Mathematical Analysis and Applications, vol. 247, no. 1, pp. 328–330, 2000.
5. X. Yang, “Refinement of Hölder inequality and application to Ostrowski inequality,” Applied Mathematics and Computation, vol. 138, no. 2-3, pp. 455–461, 2003.
6. X. Yang, “A note on Hölder inequality,” Applied Mathematics and Computation, vol. 134, no. 2-3, pp. 319–323, 2003.
7. X. Yang, “Hölder's inequality,” Applied Mathematics Letters, vol. 16, no. 6, pp. 897–903, 2003.
8. S. Wu and L. Debnath, “Generalizations of Aczél's inequality and Popoviciu's inequality,” Indian Journal of Pure and Applied Mathematics, vol. 36, no. 2, pp. 49–63, 2005.
9. S. H. Wu, “Generalization of a sharp Hölder's inequality and its application,” Journal of Mathematical Analysis and Applications, vol. 332, no. 1, pp. 741–750, 2007.
10. S. Wu, “A new sharpened and generalized version of Hölder's inequality and its applications,” Applied Mathematics and Computation, vol. 197, no. 2, pp. 708–714, 2008.
11. E. G. Kwon and E. K. Bae, “On a continuous form of Hölder inequality,” Journal of Mathematical Analysis and Applications, vol. 343, no. 1, pp. 585–593, 2008.
12. X. Yang, Local Fractional Functional Analysis and Its Applications, Asian Academic Publisher Limited, Hong Kong, 2011.
13. X. J. Yang, “Local fractional integral transforms,” Progress in Nonlinear Science, vol. 4, pp. 1–225, 2011.
14. X. J. Yang, Advanced Local Fractional Calculus and Its Applications, World Science Publisher, New York, NY, USA, 2013.
15. W.-H. Su, D. Baleanu, X.-J. Yang, and H. Jafari, “Damped wave equation and dissipative wave equation in fractal strings within the local fractional variational iteration method,” Fixed Point Theory and Applications, vol. 2013, no. 1, pp. 89–102, 2013.
16. M.-S. Hu, D. Baleanu, and X.-J. Yang, “One-phase problems for discontinuous heat transfer in fractal media,” Mathematical Problems in Engineering, vol. 2013, Article ID 358473, 3 pages, 2013.
17. K. M. Kolwankar and A. D. Gangal, “Hölder exponents of irregular signals and local fractional derivatives,” Pramana, vol. 48, no. 1, pp. 49–68, 1997.
18. K. M. Kolwankar and A. D. Gangal, “Local fractional Fokker-Planck equation,” Physical Review Letters, vol. 80, no. 2, pp. 214–217, 1998.
19. G. Jumarie, “The Minkowski's space-time is consistent with differential geometry of fractional order,” Physics Letters A, vol. 363, no. 1-2, pp. 5–11, 2007.
20. G. Jumarie, “Modified Riemann-Liouville derivative and fractional Taylor series of nondifferentiable functions: further results,” Computers & Mathematics with Applications, vol. 51, no. 9-10, pp. 1367–1376, 2006.
21. A. Parvate and A. D. Gangal, “Calculus on fractal subsets of real line. I. Formulation,” Fractals, vol. 17, no. 1, pp. 53–81, 2009.
22. A. Parvate and A. D. Gangal, “Fractal differential equations and fractal-time dynamical systems,” Pramana, vol. 64, no. 3, pp. 389–409, 2005.
23. W. Chen, “Time-space fabric underlying anomalous diffusion,” Chaos, Solitons and Fractals, vol. 28, no. 4, pp. 923–929, 2006.
24. W. Chen, X. D. Zhang, and D. Korošak, “Investigation on fractional and fractal derivative relaxation-oscillation models,” International Journal of Nonlinear Sciences and Numerical Simulation, vol. 11, no. 1, pp. 3–9, 2010.
25. W. Chen, H. Sun, X. Zhang, and D. Korošak, “Anomalous diffusion modeling by fractal and fractional derivatives,” Computers & Mathematics with Applications, vol. 59, no. 5, pp. 1754–1758, 2010.
26. F. B. Adda and J. Cresson, “About non-differentiable functions,” Journal of Mathematical Analysis and Applications, vol. 263, no. 2, pp. 721–737, 2001.
27. J. H. He, “A new fractal derivation,” Thermal Science, vol. 15, no. 1, pp. 145–147, 2011.
28. J.-H. He, “Asymptotic methods for solitary solutions and compactons,” Abstract and Applied Analysis, vol. 2012, Article ID 916793, 130 pages, 2012.
29. J. Fan and J. He, “Fractal derivative model for air permeability in hierarchic porous media,” Abstract and Applied Analysis, vol. 2012, Article ID 354701, 7 pages, 2012.
30. Y. J. Yang, D. Baleanu, and X. J. Yang, “A local fractional variational iteration method for Laplace equation within local fractional operators,” Abstract and Applied Analysis, vol. 2013, Article ID 202650, 6 pages, 2013.
31. F. Gao, X. Yang, and Z. Kang, “Local fractional Newton's method derived from modified local fractional calculus,” in Proceedings of the 2nd International Joint Conference on Computational Sciences and Optimization (CSO '09), pp. 228–232, IEEE Computer Society, April 2009.
32. X. Yang and F. Gao, “The fundamentals of local fractional derivative of the one-variable nondifferentiable functions,” Science & Technology, vol. 31, no. 5, pp. 920–921, 2009.
33. X. Yang and F. Gao, “Fundamentals of local fractional iteration of the continuously non-differentiable functions derived from local fractional calculus,” in Proceedings of the 2011 International Conference on Computer Science and Information Engineering (CSIE '11), pp. 398–404, Springer, 2011.
34. X. Yang, L. Li, and R. Yang, “Problems of local fractional definite integral of the one-variable nondifferentiable function,” Science & Technology, vol. 31, no. 4, pp. 722–724, 2009.
35. X. Yang, L. Li, and R. Yang, “Problems of local fractional definite integral of the one-variable nondifferentiable function,” Science & Technology, vol. 31, no. 4, pp. 722–724, 2009.
36. W.-H. Su, X.-J. Yang, H. Jafari, and D. Baleanu, “Fractional complex transform method for wave equations on Cantor sets within local fractional differential operator,” Advances in Difference Equations, vol. 2013, no. 1, pp. 97–107, 2013.
37. G. S. Chen, “The local fractional Stieltjes transform in fractal space,” Advances in Intelligent Transportation Systems, vol. 1, no. 1, pp. 29–31, 2012.
38. G. S. Chen, “Local fractional improper integral in fractal space,” Advances in Information Technology and Management, vol. 1, no. 1, pp. 4–8, 2012.
39. G. S. Chen, “Mean value theorems for local fractional integrals on fractal space,” Advances in Mechanical Engineering and Its Applications, vol. 1, no. 1, pp. 5–8, 2012.
[SOLVED] Radicals or Fractional Exponents
Is it better to have radicals or fractional exponents in a simplified answer? Is there a general convention?
It depends, of course, but usually the convention is to use $\sqrt{x}$ rather than $x^{\frac{1}{2}}$, to keep radicals out of the denominator, and, if there are exponents in the answer, to make them positive. Do you have an example of what you need to do?
Schedule & Location
MW 3:15-4:30, Econ 140
Computing environment
We will use R, which you can read about here.
An introductory statistics course, such as STATS 60.
Course description
By the end of the course, students should be able to:
□ Enter tabular data using R.
□ Plot data using R, to help in exploratory data analysis.
□ Formulate regression models for the data, while understanding some of the limitations and assumptions implicit in using these models.
□ Fit models using R and interpret the output.
□ Test for associations in a given model.
□ Use diagnostic plots and tests to assess the adequacy of a particular model.
□ Find confidence intervals for the effects of different explanatory variables in the model.
□ Use some basic model selection procedures, as found in R, to find a best model in a class of models.
□ Fit simple ANOVA models in R, treating them as special cases of multiple regression models.
□ Fit simple logistic and Poisson regression models.
For those taking 4 units:
• 5 assignments (50%)
• data analysis project (20%)
• final exam (30%) (according to Stanford calendar: F 12/14 @ 12:15PM)
For those taking 3 units:
• 5 assignments (70%)
• final exam (30%) (according to Stanford calendar: F 12/14 @ 12:15PM)
Practice exam
A practice exam is available on the course site.
the Magic of Compound Interest
How Savings Accounts Grow from the Magic of Compound Interest
Have you ever used any of the following excuses as reasons to not start saving for retirement?
• "I'm young. I'll start saving in five years."
• "Who cares if my money earns $20 in a savings account? That's not enough to even bother."
• "I don't make enough money to contribute to my savings."
When you're young, it seems like retirement is too far away to worry about or you don't know how to get started with a savings account. You think you have plenty of time. You think you'll make more
money and can start saving later. While you might have decades ahead of you, and your salary is likely to increase over the years, the one thing you can't make up for is lost time. And compound
interest gives the biggest returns to those who have been in the game the longest.
Sadly, the trend is that many young workers aren't getting in the game at all. According to a survey by the tax information service CCH, less than a third of eligible workers aged 25 and under
contribute to employer-sponsored 401(k)s, and only 4% max out their workplace retirement accounts. What's more, the survey found that just 19% of young workers planned to contribute to an IRA that
year, and the majority of those 18 to 24 years of age aren't even sure to which retirement savings accounts they are eligible to contribute.
Saving for retirement just isn't in the minds of most young people--and that's a shame, because starting a retirement savings plan early can really be to your advantage.
Start a Savings Account Now--Waiting Can Cost You
Let's look at the following example: Sarah is 25 years old, and she decides to open a savings account dedicated to her retirement. She plans to contribute $5,000 per year, which is the equivalent of
about $417 per month.
• If Sarah earns a 3% annual return and her bank offers compounding four times a year, by age 65 she'll have $387,095.59 saved for retirement.
• If Sarah waits five years and opens the account at age 30--and still contributes the same amount and gets the same annual return--she'll have $310,054.26 saved by age 65. The difference between
starting at age 25 and starting at 30 is $77,041.33!
• If she waits even a few more years--say, until she's 35--she'll have just $243,707.04 in the savings account when she retires.
Even if Sarah "catches up" with those missed $5,000 annual contributions when she's 30 or 35, her total won't be as high as when she starts saving regularly at age 25. The difference is due to
Compound Interest Favors the Young
Sarah can see big returns by starting at age 25 because of the magic of compound interest. Compound interest works like this: Over time, each dollar you invest earns interest. Then the interest
you've earned becomes a part of your principal, and in the next period, you earn interest on that as well. Your interest earns interest.
It might seem like you aren't earning very much in the beginning, but short-term earnings aren't important. It's the long-term--30 or 40 years from now--where you'll see big gains.
Take the example of Sarah's initial $5,000 investment. Even if she stopped contributing new money, after two years she would have $5,307.99, which means she'll have made $307.99 on her investment
over those two years. After investing that $5,000 for 40 years, though, she'll have $16,526.42 total, or $11,526.42 on her first year's investment. Again, that's without adding any more new savings.
That's a serious chunk of change!
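A minimal sketch of that arithmetic in Python (the function name is ours; 3% APR compounded quarterly, as in the example):

```python
# Future value of a single deposit at 3% APR, compounded quarterly.
RATE, PERIODS_PER_YEAR = 0.03, 4

def fv_lump(principal, years):
    return principal * (1 + RATE / PERIODS_PER_YEAR) ** (PERIODS_PER_YEAR * years)

print(round(fv_lump(5000, 2), 2))   # ~5307.99, the two-year figure above
print(round(fv_lump(5000, 40), 2))  # ~16526.42, the forty-year figure above
```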
If Sarah invested retirement funds in an IRA or other brokerage account, it's possible that she could get even higher average annual returns--which would magnify the effect of compounding--and
probably receive tax advantages to boot. Of course, higher returns come with higher risk, but the point is still that starting early is the key to reaping the gains of the magic of compounding.
With Retirement Savings Account, Every Little Bit Helps
You might be thinking, "Well, that's great for Sarah, but I don't have $5,000 per year to save!" The problem with that line of thinking is that you'll always wait for a "better time" to get started,
and in the process you'll never start at all.
Socking away money in a retirement savings account is not an all-or-nothing deal. Start small, with whatever you can contribute - even saving spare change helps! Set up automatic contributions so
that the money is invested before you can spend it elsewhere. As your income increases, you can increase your contributions. If you get a raise, add to your monthly investment rather than increasing
your living expenses. This prevents lifestyle inflation that often further defers retirement savings. In other words, if you get a raise and decide to treat yourself to a brand new sports car, those
brand new sports car payments will likely bring you back to square one--not enough cash to save for retirement.
It literally pays to get started today. And one final bit of advice: once the money is invested, don't touch it! The magic of compound interest will only work if the money is left to grow in your
savings account. | {"url":"http://www.savingsaccounts.com/money/how-to-save/how-savings-accounts-grow-from-the-magic-of-compound-interest.html","timestamp":"2014-04-20T10:47:58Z","content_type":null,"content_length":"44830","record_id":"<urn:uuid:59cb6685-71b2-4e42-9518-0c9f0e05272f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bayesian Methods for Neural Networks: Theory and Application, Course notes for Neural Networks Summer School
, 1998
"... This paper discusses these issues exploring the potentiality of Bayesian ideas in the analysis of NN models. Buntine and Weigend (1991) and MacKay (1992) have provided frameworks for their
Bayesian analysis based on Gaussian approximations and Neal (1993) has applied hybrid Monte Carlo ideas. Ripley ..."
Cited by 31 (0 self)
Add to MetaCart
This paper discusses these issues exploring the potentiality of Bayesian ideas in the analysis of NN models. Buntine and Weigend (1991) and MacKay (1992) have provided frameworks for their Bayesian
analysis based on Gaussian approximations and Neal (1993) has applied hybrid Monte Carlo ideas. Ripley (1993) and Cheng and Titterington (1994) have dwelt on the power of these ideas, specially as
far as interpretation and architecture selection are concerned. See MacKay (1995) for a recent review. From a statistical modeling point of view NN's are a special instance of mixture models. Many
issues about posterior multimodality and computational strategies in NN modeling are of relevance in the wider class of mixture models. Related recent references in the Bayesian literature on mixture
models include Diebolt and Robert (1994), Escobar and West (1994), Robert and Mengersen (1995), Roeder and Wasserman (1995), West (1994), West and Cao (1993), West, Muller and Escobar (1994), and
West and Turner (1994). We concentrate on approximation problems, though many of our suggestions can be translated to other areas. For those problems, NN's are viewed as highly nonlinear
(semiparametric) approximators, where parameters are typically estimated by least squares. Applications of interest for practicioners include nonlinear regression, stochastic optimisation and
regression metamodels for simulation output. The main issue we address here is how to undertake a Bayesian analysis of a NN model, and the uses of it we may make. Our contributions include: an
evaluation of computational approaches to Bayesian analysis of NN models, including a novel Markov chain Monte Carlo scheme; a suggestion of a scheme for handling a variable architecture model and a
scheme for combining NN models with more ...
- IEEE Transactions on Neural Networks, 2007
"... Abstract—Internet traffic identification is an important tool for network management. It allows operators to better predict future traffic matrices and demands, security personnel to detect
anomalous behavior, and researchers to develop more realistic traffic models. We present here a traffic classi ..."
Cited by 19 (1 self)
Add to MetaCart
Abstract—Internet traffic identification is an important tool for network management. It allows operators to better predict future traffic matrices and demands, security personnel to detect anomalous
behavior, and researchers to develop more realistic traffic models. We present here a traffic classifier that can achieve a high accuracy across a range of application types without any source or
destination host-address or port information. We use supervised machine learning based on a Bayesian trained neural network. Though our technique uses training data with categories derived from
packet content, training and testing were done using features derived from packet streams consisting of one or more packet headers. By providing classification without access to the contents of
packets, our technique offers wider application than methods that require full packet/payloads for classification. This is a powerful advantage, using samples of classified traffic to permit the
categorization of traffic based only upon commonly available information. Index Terms—Internet traffic, network operations, neural network applications, pattern recognition, traffic identification.
, 2006
"... A novel approach to phylogenetic tree construction using stochastic optimization and clustering ..."
- In Proceedings of the ACNN '98, 1998
"... Bayesian learning provides a theoretical way to prevent neural networks from overfitting. It is possible to determine the weight decay parameter during the training process without using a
validation set. This is done by maximizing the evidence p(Djff; fi) of the hyperparameters ff and fi. In this ..."
Cited by 2 (2 self)
Add to MetaCart
Bayesian learning provides a theoretical way to prevent neural networks from overfitting. It is possible to determine the weight decay parameter during the training process without using a validation set. This is done by maximizing the evidence $p(D\mid\alpha,\beta)$ of the hyperparameters $\alpha$ and $\beta$. In this paper two new methods are described that improve the determination of the hyperparameters. The first method defines an iteration process in order to get the optimal value of $\alpha$. We prove that this iteration process always converges to the optimal solution. The second one takes into account the fact that $\alpha$ and $\beta$ are so-called scale parameters and therefore have a natural a priori probability that differs significantly from the a priori probability that is used in general. The new methods are applied to a very noisy data set, namely the prediction of the foreign exchange rate of the US Dollar against the German Mark, and demonstrate a substantial improvement with respect to the
, 1997
"... this paper are neural networks whose forecasts are combined by another neural network, a gate. For regression problems such an architecture was shown to partly remedy the two main problems in
forecasting real world time series: nonstationarity and overfitting. The goal of this paper is to compare th ..."
Cited by 1 (1 self)
Add to MetaCart
this paper are neural networks whose forecasts are combined by another neural network, a gate. For regression problems such an architecture was shown to partly remedy the two main problems in
forecasting real world time series: nonstationarity and overfitting. The goal of this paper is to compare the forecasting ability of gated experts (GE) with that of a single neural network expert
on a time series classification task, which corresponds to decisions of taking a long position in a stock, a short position, or doing nothing. A new error function and a weight update rule were
derived for this problem. The architecture was tested on the actual stock market data, and the errors on both training and testing data were smaller than errors for the best expert. This suggests
that the performance of any single stock market forecasting system can be improved by making several copies of it and training them under the GE framework. In addition, an algorithm is presented for
the GE architecture that makes it possible for the model to modify the data to fit the model better. Such a modification is done only if the decrease in the model cost associated with the output
error is less than the increase in the input cost associated with moving the data away from its initial values. This idea corresponds to a bi-directional search for the true model, which was shown in
AI to cut in half the exponent in the search time in comparison to the standard unidirectional search used by most connectionist architectures. The implementation of this algorithm was shown to
further decrease overfitting on the testing data.
, 1997
"... this paper we understand a real world structure or process which is characterized by a set of structural and behavioral patterns. These patterns can be viewed as reflecting the "style" of the
object. The objects are assumed to have a relatively high level of stationarity and the patterns characteriz ..."
Cited by 1 (1 self)
Add to MetaCart
this paper we understand a real world structure or process which is characterized by a set of structural and behavioral patterns. These patterns can be viewed as reflecting the "style" of the object.
The objects are assumed to have a relatively high level of stationarity and the patterns characterizing an object are assumed to be probabilistically dependent on each other. For stock market
modeling, these objects do not have to represent physical entities such as company's assets or other objects used in fundamental analysis. The objects can be structures that were created as a result
of complex interactions of physical entities. The subject of behavioral finance deals with a class of objects such as fads and fashions present in the market. The stock market is extremely sensitive
to its environment, and many objects related to the stock market contribute their patterns to the stock price. The goal is to extract patterns related to each object and build a model of the object
from these patterns. For the purpose of risk management, patterns not related to any object will be considered nonstationary and will thus be classified as noise. A similar idea was proposed in
Weigend, Zimmermann and Neuneier (1996), who describe an architecture in which the data is accepted for analysis only if it confirms the model. In the AI term, their algorithm implements a
bi-directional search, which was proven to give better results than a one-sided search. The objects in the stock market contribute patterns to the stock price at different time scales. This idea is
gaining a wide recognition which is reflected in the growing number of research in multi-resolution analysis. See Bjorn and Weigend (1996) for discussion. For example, investors and traders operate
at different time horizons, and t...
, 1998
"... In recent years, more and more researchers have been aware of the effectiveness of using the extended Kalman filter (EKF) in neural network learning since some information such as the Kalman
gain and error covariance matrix can be obtained during the progress of training. It would be interesting to ..."
Cited by 1 (1 self)
Add to MetaCart
In recent years, more and more researchers have been aware of the effectiveness of using the extended Kalman filter (EKF) in neural network learning since some information such as the Kalman gain and
error covariance matrix can be obtained during the progress of training. It would be interesting to inquire if there is any possibility of using an EKF method together with pruning in order to speed
up the learning process, as well as to determine the size of a trained network. In this dissertation, certain extended Kalman filter based pruning algorithms for feedforward neural network (FNN) and
recurrent neural network (RNN) are proposed and several aspects of neural network learning are presented. For FNN, a weight importance measure linking up prediction error sensitivity and the
by-products obtained from EKF training is derived. Comparison results demonstrate that the proposed measure can better approximate the prediction error sensitivity than using the forgetting recursive
least squa...
"... ) model (Lewis and Stevens, 1991; Lewis et al., 1994). The modelling is done by letting the predictor variables for the øth value in the time series fy ø g be given by y ø \Gamma1 (= x ø;1 ); y
ø \Gamma2 (= x ø;2 ); : : : ; y ø \Gammap (= x ø;p ). Note that if we combined these predictors to form a ..."
Add to MetaCart
) model (Lewis and Stevens, 1991; Lewis et al., 1994). The modelling is done by letting the predictor variables for the $\tau$th value in the time series $\{y_\tau\}$ be given by $y_{\tau-1}\,(=x_{\tau,1}),\ y_{\tau-2}\,(=x_{\tau,2}),\ \ldots,\ y_{\tau-p}\,(=x_{\tau,p})$. Note that if we combined these predictors to form a linear additive function we would just be modelling the time series as a usual AR(p) process. However, the ASTAR method involves modelling these lagged predictor variables using a MARS model. Thus the predictor variables can have both threshold terms, because of the form of the truncated linear spline basis functions, and interactions
"... research. However, it leads to difficult computational problems, stemming from nonnormality and multimodality of posterior distributions, which hinder the use of methods like Laplace
integration, Gaussian quadrature and Monte Carlo importance sampling. Multimodality issues have predated discussions ..."
Add to MetaCart
research. However, it leads to difficult computational problems, stemming from nonnormality and multimodality of posterior distributions, which hinder the use of methods like Laplace integration,
Gaussian quadrature and Monte Carlo importance sampling. Multimodality issues have predated discussions in neural network research, see e.g. Ripley (1993), and are relevant as well for mixture
models, see West, Muller and Escobar (1994) and Crawford (1994), of which FFNN's are a special case. There are three main reasons for multimodality of posterior models in FFNN's. The first one is
symmetries due to relabeling; we mitigate this problem introducing appropriate inequality constraints among parameters. The second, and most worrisome, is the inclusion of several copies of the same
term, in our case, terms with the same fl vector. Node duplication may be actually viewed as a manifestation of model mixing. The third one is inherent
"... Abstract. Artificial neural networks are a well established tool in high energy physics, ..."
Add to MetaCart
Abstract. Artificial neural networks are a well established tool in high energy physics, | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=550396","timestamp":"2014-04-18T17:17:58Z","content_type":null,"content_length":"38724","record_id":"<urn:uuid:8ed2e307-515e-4c48-99c8-6bf5872cf265>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00335-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: hostility toward f.o.m.
Stephen G Simpson simpson at math.psu.edu
Tue Jul 21 14:55:50 EDT 1998
Thomas Forster 9 Jul 1998 22:03:01 writes:
> If you want harder evidence of hostility - of the kind you mention
> - there's plenty of that too, unfortunately.
Yes, I think so. Let's air this a little bit. For example, in 1990
the math department here at Penn State voted to change its PhD
requirements so as to make it very difficult for PhD students to
specialize in f.o.m. This was done explicitly in order to "clip the
wings" of f.o.m., which attracts the interest of many new graduate
students. Have you experienced similar hostile acts?
> My feeling is that an important source of hostility to set theory
> arises from mathematicians interpreting the foundational claims of
> set theory as somehow deconstructing their activity, and nobody
> likes being deconstructed!
Please define what you mean by "deconstruct". This vague term
(borrowed from modern literary theory of the politically correct
variety) blurs a lot of important distinctions.
Do you mean that many mathematicians don't like outsiders to analyze
what they do in terms of general intellectual interest?
This could get really interesting. It could lead to a general
discussion of the question: What impact does f.o.m. have on mathematics?
-- Steve
warning c4700
I'm learning functions and I am having trouble with some local variables. I keep getting "warning C4700: uninitialized local variable 'y' used", and the same for the variable x. Are my loops in combination with cin >> messing up the code? I am pretty new to programming; any help would be greatly appreciated. Here's the code:
#include <iostream>
using namespace std;

//function prototype
double calculateRetail(double, double);

int main()
{
    double x, y, z;
    while (x < 0)   // x is tested here before it has been given a value
    {
        cout << "This program calculates and displays an item's retail price. Enter the wholesale price...\n";
        cin >> x;
    }
    while (y < 0)   // same problem: y is uninitialized on the first test
    {
        cout << "Enter the markup percentage...\n";
        cin >> y;
    }
    z = calculateRetail(x, y);
    cout << z << " is the retail price.\n";
    return 0;
}

double calculateRetail(double x, double y)
{
    return x * y * .01;
}
As the message says, you compare x < 0 and y < 0 before initialising x and y with some value. Perhaps a do while loop is what you want, but note that you are assuming that the user enters valid numeric input.
C + C++ Compiler: MinGW port of GCC
Version Control System: Bazaar
Look up a C++ Reference and learn How To Ask Questions The Smart Way
Thank you. The Do-while loop worked perfectly, I also had my math wrong lol =).
Anyways, here's what the code is now.
#include <iostream>
using namespace std;

//function prototype
double calculateRetail(double, double);

int main()
{
    double x, y, z;
    do
    {
        cout << "This program calculates and displays an item's retail price. Enter the wholesale price...\n";
        cin >> x;
    } while (x < 0);
    do
    {
        cout << "Enter the markup percentage...\n";
        cin >> y;
    } while (y < 0);
    z = calculateRetail(x, y);
    cout << z << " is the retail price.\n";
    return 0;
}

double calculateRetail(double x, double y)
{
    // retail = wholesale * (100 + markup%) / 100
    return x * (y + 100) * .01;
}
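For example, assuming the user types 50 for the wholesale price and 10 for the markup, a run would look like this (hypothetical session):

```
This program calculates and displays an item's retail price. Enter the wholesale price...
50
Enter the markup percentage...
10
55 is the retail price.
```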
Incidentally, instead of naming your variables x, y and z, use descriptive names like whole_sale_price, markup_percentage, and retail_price.
C + C++ Compiler: MinGW port of GCC
Version Control System: Bazaar
Look up a C++ Reference and learn How To Ask Questions The Smart Way
Thank you for the tip, I will make sure to do that in the future =)
An efficient branch and bound algorithm for the capacitated warehouse location problem
- In: Proc. of 1st Int. Conf. on Communications (ICC), 2004
"... Abstract — The battery resource of the sensor nodes should be managed efficiently, in order to prolong network lifetime in wireless sensor networks. Moreover, in large-scale networks with a
large number of sensor nodes, multiple sink nodes should be deployed, not only to increase the manageability o ..."
Cited by 39 (0 self)
Add to MetaCart
Abstract — The battery resource of the sensor nodes should be managed efficiently, in order to prolong network lifetime in wireless sensor networks. Moreover, in large-scale networks with a large
number of sensor nodes, multiple sink nodes should be deployed, not only to increase the manageability of the network, but also to reduce the energy dissipation at each node. In this paper, we focus
on the multiple sink location problems in large-scale wireless sensor networks. Different problems depending on the design criteria are presented. We consider locating sink nodes to the sensor
environment, where we are given a time constraint that states the minimum required operational time for the sensor network. We use simulation techniques to evaluate the quality of our solution.
Keywords—wireless sensor networks; power efficiency; multiple sink. I.
, 1992
"... This paper describes an experimental code that has been developed to solve zero-one mixed integer linear programs. The experimental code uses a primal--dual interior point method to solve the
linear programming subproblems that arise in the solution of mixed integer linear programs by the branch and ..."
Cited by 13 (7 self)
Add to MetaCart
This paper describes an experimental code that has been developed to solve zero-one mixed integer linear programs. The experimental code uses a primal--dual interior point method to solve the linear
programming subproblems that arise in the solution of mixed integer linear programs by the branch and bound method. Computational results for a number of test problems are provided. Introduction
Mixed integer linear programming problems are often solved by branch and bound methods. Branch and bound codes, such as the ones described in [7, 11, 12], normally use the simplex algorithm to solve
linear programming subproblems that arise. In this paper, we describe an experimental branch and bound code for zero--one mixed integer linear programming problems that uses an interior point method
to solve the LP subproblems. This project was motivated by the observation that interior point methods tend to quickly find feasible solutions with good objective values, but take a relatively long
time to ...
- Mathematical Programming , 1989
"... Recently, several successful applications of strong cutting plane methods to combinatorial optimization problems have renewed interest in cutting plane methods, and polyhedral characterizations,
of integer programming problems. In this paper, we investigate the polyhedral structure of the capacitate ..."
Cited by 9 (1 self)
Add to MetaCart
Recently, several successful applications of strong cutting plane methods to combinatorial optimization problems have renewed interest in cutting plane methods, and polyhedral characterizations, of
integer programming problems. In this paper, we investigate the polyhedral structure of the capacitated plant location problem. Our purpose is to identify facets and valid inequalities for a wide
range of capacitated fixed charge problems that contain this prototype problem as a substructure. The first part of the paper introduces a family of facets for a version of the capacitated plant
location problem with constant capacity K for all plants. These facet inequalities depend on K and thus differ fundamentally from the valid inequalities for the uncapacitated version of the problem.
We also introduce a second formulation for a model with indivisible cus-tomer demand and show that it is equivalent to a vertex packing problem on a derived graph. We identify facets and valid
inequalities for this version of the problem by applying known results for the vertex packing polytope.
"... Abstract: In this paper, a squared-Euclidean distance multifacility location problem with inseparable demands under balanced transportation constraints is analyzed. Using calculus to project the
problem onto the space of allocation variables, the problem becomes minimizing concave quadratic integer ..."
Add to MetaCart
Abstract: In this paper, a squared-Euclidean distance multifacility location problem with inseparable demands under balanced transportation constraints is analyzed. Using calculus to project the
problem onto the space of allocation variables, the problem becomes minimizing concave quadratic integer programming problem. The algorithm based on extreme point ranking method combining with
logical techniques is developed. The numerical experiments are randomly generated to test efficiency of the proposed algorithm compared with a linearization algorithm. The results show that the
proposed algorithm provides a better solution on average with less processing time for all various sizes of problems.
, 1999
"... Container transport operations have been extending inland, providing more comprehensive service across the shipping network. Accordingly, container transport operators are making extensive
capital investments in deploying inland container depot (ICD) networks. Optimizing the location of such facilit ..."
Add to MetaCart
Container transport operations have been extending inland, providing more comprehensive service across the shipping network. Accordingly, container transport operators are making extensive capital
investments in deploying inland container depot (ICD) networks. Optimizing the location of such facilities financially is vital for both capital and operating efficiencies. Currently, there are no
models at the regional network level to guide container operators in locating ICDs on their networks. This research studies the ICD location problem and develops a comprehensive ICD location model.
Based on comprehensive analysis of the container transport industry, focusing on ICD operations, this thesis developed a useful formulation of the ICD location problem. It recognizes and emphasizes
the need to embody the endogenous demand and market competitiveness in the container transport business. The formulation combines the multinomial logit model of discrete choice analysis to
quantitatively describe the shipper’s behaviors and preferences, addressing both the endogenous demand and market competitiveness. Fixed charge facility location problems are considered or proven to
be NP complete.
, 2008
"... location This paper investigates a model for pricing the demand for a set of goods when suppliers operate discount schedules based on total business value. We formulate the buyers’s decision
problem as a mixed binary integer program (MIP) which is a generalization of the capacitated facility locatio ..."
Add to MetaCart
This paper investigates a model for pricing the demand for a set of goods when suppliers operate discount schedules based on total business value. We formulate the buyer's decision problem as a mixed binary integer program (MIP) which is a generalization of the capacitated facility location problem (CFLP). A branch and bound procedure using Lagrangean relaxation and subgradient optimization is developed for solving large-scale problems that can arise when suppliers' discount schedules contain multiple price breaks. Results of computer trials on specially adapted large benchmark instances of the CFLP confirm that a subgradient optimization procedure based on Shor and Zhurbenko's r-algorithm, which employs a space dilation strategy in the direction of the difference between two successive subgradients, can solve such instances efficiently.
, 2006
"... This paper investigates a model for pricing the demand for a set of goods when multiple suppliers operate discount schedules based on total business value. We formulate the buyers’s decision
problem as a mixed binary integer program (MIP) which is a generalization of the capacitated facility locatio ..."
Add to MetaCart
This paper investigates a model for pricing the demand for a set of goods when multiple suppliers operate discount schedules based on total business value. We formulate the buyer's decision problem as a mixed binary integer program (MIP) which is a generalization of the capacitated facility location problem (CFLP) and can be solved using Lagrangean heuristics. We have investigated commercially available MIP-solvers (LINGO, Xpress-MP) to solve small-scale examples. A branch-and-bound procedure using Lagrangean relaxation and subgradient optimization is developed for solving large-scale problems that can arise when suppliers' discount schedules contain multiple price breaks. Results of computer trials on specially adapted large benchmark instances of the CFLP confirm that a subgradient algorithm can solve such instances efficiently.
Modeling Distributions of Data
In this chapter, we learned about describing and identifying normal distributions and how to do standard normal calculations. Cumulative Relative Frequency Graphs Allows you to examine location
within a distribution.
Helps you estimate & interpret the percentile of a distribution. Normal Distribution Described by a normal curve
specified by mean & standard deviation
Mean is the center of the symmetric normal curve Two ways to measure the position within a distribution Percentiles:
Is the value with p percent of the observations less than it.
Z-Score ( Standard Value):
•A measure that quantifies the distance a data point is from the mean of a data set. Chapter 2:
Modeling Distributions of Data Empirical Rule (68-95-99.7 Rule) Density Curve Has an area of 1 underneath curve
Always on or above the horizontal axis
Describes the overall pattern of a distribution Normal Density Curve
(Bell Shaped Density Curve)
(Symmetric Density Curve) Skewed Right Density Curve Skewed Left Density Curve Median & Mean of a Density Curve Median:
is the equal area point
point that divdes the area under the curve in half
Balance point
Which the curve would balance if made of solid material Mean& Median The long tail pulls the mean to the right. The long tail pulls the mean to the left 68% of the observations fall within of the
95% of the observation falls within 2 of the mean
99.7% of the observation falls within 3 of the mean
mean of 0
Standard deviation of 1 Standard Normal Distribution 80th percentile 1. Plot the data using a graph: dotplot
histogram 2. Figure out the pattern: Shape
Spread 3. Calculate a numerical summary to briefly describe center & spread 4. Sometimes a pattern of large number observations can be described as a smooth curve Abbreviate the Normal Distribution
with mean & standard deviation as
N( , ) 0 + - = 9 8 7 1 2 3 4 5 6 c Standard Normal Distribution Is the normal distribution with a mean of 0 & standard deviation of 1. N(0,1) Using the
Standard Normal Table Is a table of areas under the standard deviation The table entry for each value is the area under the curve to the left of z. How to Solve Problems Involving Normal
Distributions State:
Express the problem
Draw distribution & shade area of interest under curve
Write conclusion Transforming Data When adding or subtracting the same number (p) to each observation:
(p) would also be added or subtracted to the mean, median, quartiles, percentiles.
This would not change the shape or spread. Transforming Data When multiplying or dividing the same number (p) to each observation:
(p) would also be multiplied or divided to mean, median, quartiles, percentiles & the spread.
This would not change the shape. | {"url":"http://prezi.com/y1havel1jahw/modeling-distributions-of-data/","timestamp":"2014-04-20T10:02:09Z","content_type":null,"content_length":"55906","record_id":"<urn:uuid:087c7ed7-cd10-4f82-b67e-973328f9c1fc>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
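A quick numeric check of the z-score and transformation rules above (Python's standard statistics module; the data set is made up):

import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]
mu = statistics.mean(data)            # center: 5
sigma = statistics.pstdev(data)       # population standard deviation: 2

# z-score: distance of a data point from the mean, in standard deviations
print((data[0] - mu) / sigma)         # z-score of 2 is -1.5

shifted = [x + 10 for x in data]      # adding 10: mean + 10, spread unchanged
scaled = [x * 3 for x in data]        # multiplying by 3: mean * 3, sd * 3
print(statistics.mean(shifted), statistics.pstdev(shifted))   # 15, 2
print(statistics.mean(scaled), statistics.pstdev(scaled))     # 15, 6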
FOM: Re: foundationalism (fwd)
Martin Davis davism at cs.nyu.edu
Sat Sep 26 14:09:07 EDT 1998
On Fri, 25 Sep 1998, Reuben Hersh wrote:
> THANKS, MARTIN, FOR YOUR THOUGHTFUL CRITIQUE.
> I THINK IT'S ONLY FAIR FOR ME TO CRITICIZE YOUR CRITIQUE. I WILL HAVE TO
> REALLY?" I DON'T KNOW IF YOU HAVE THE FIRST, BUT I KNOW FOR SURE YOU
> HAVE THE SECOND, AND HAVE EVEN READ IT.
> IN BETWEEN QUOTES FROM MY LETTER AND YOUR CRITICISM, I WILL PLACE MY
> CRITICISM OF YOUR CRITIQUE IN ALL CAPS, FOR CLARITY.
> On Thu, 24 Sep 1998, Reuben Hersh wrote:
> Here in foundations the story was different. There were three
> schools, logicism, formalism, and intuitionism. All sought to
> repair the foundations of mathematics, after the damage they had suffered
> from the Antinomies. None of them succeeded in their mission. In the
> course of an unsuccessful philosophic quest, they all created some
> original and important mathematics.
> YOU CRITICIZED:
> " Bad history. Logicism was begun by Frege; his work was completed and
> his magnum opus at the printers when he learned of the Russell paradox.
> The effort to avoid methods thought to be illegitimate because of
> non-constructivity or use of impredicative definitions goes back at least
> to Kronecker. The Weierstrass-Cantor-Dedekind foundation for analysis
> involved modes of thought troubling to many mathematicians including
> Poincare, Borel, and Weyl and leading to rich philosophical
> discussions.
> THIS IS INFORMATIVE, BUT I DON'T SEE HOW IT RELATES TO WHAT I
> WROTE.
Sorry I was unclear. My paragraph criticized your sentence beginning "All
sought ..."
> YOU WROTE:
> "There is no reason to think that Brouwer cared especially
> about the antinomies: he drew the line well below the level at which they
> became issues.
> I AGREE, THAT WAS A SLIP-UP.
> YOU WROTE:
> Hilbert certainly was concerned with the antinomies. But he
> was at least as much bothered by the attack on the core of modern analysis
> by Brouwer and Weyl.
> I WROTE:> I was bothered not so much by the impasse in which
> foundationalist philosophy of mathematics found itself.
> I was much more disturbed by the obvious (to me) fact that
> all three were incredible. The pictures of mathematics they
> offered did not at all resemble the mathematics I knew as student,
> teacher, and researcher.
> Platonism (including its special case, logicism) invoked
> a mathematical world eternal, unchanging, and independent of human
> actions. How this ghost world related to the material world wasn't
> even recognized as a problem! (Later I learned about Benacerraf
> throwing the same problem at his fellow philosophers of math.)
> YOU CRITICIZED:
> "Logicism as a movement was mainly concerned to provide a seamless
> development of mathematics beginning with purely logical notions and
> eventually leading up to the totality of mathematical discourse. As such
> it can be followed without any necessary ontological commitments.
> YOU SAY "LOGICISM CAN BE FOLLOWED WITHOUT ANY NECESSARY
> ONTOLOGICAL COMMITMENTS". I SAY "LOGICISM PRODUCED FRUITFUL
> WAS UNSUCCESSFUL." IT LOOKS TO ME LIKE WE'RE SAYING THE SAME THING,
> HAD THEY BEEN SUCCESSFUL, YOU WOULDN'T NEED TO PUSH THEM ASIDE.
Again I was evidently unclear. I was criticizing your assertion that
logicism is a special case of Platonism.
> If (as Dedekind seems to have) one thinks of "set" as a logical notion, the
> logicist program has been a great success, tacitly followed by
> mathematicians from Halmos to Bourbaki.
> SOME AUTHORS (E.G. THE KNEALES, AS CITED IN W.I.M.R.), THINK
> THAT THE INTRODUCTION OF THE AXIOM OF INFINITY MEANT THE FAILURE
> OF LOGICISM. IT'S BELIEVED THAT AS AN AXIOM OF LOGIC THE AXIOM
> OF INFINITY IS NOT COMPELLING.
> AS I SAID MORE THAN ONCE IN THE MATHEMATICAL EXPERIENCE AND IN
> W.I.M.R., IN QUOTATIONS PROVIDED TO YOU, LOGICISM AND
> ACCOUNT OF THE NATURE OF MATHEMATICS. YOU MAY REMEMBER THE QUOTE FROM
> RUSSELL, IN BOTH BOOKS,WHERE HE SAYS THAT HE WAS SEEKING A FIRM BELIEF TO
> REPLACE HIS LOST BELIEF IN CHRISTIANITY. HE SAYS HE THOUGHT HE COULD FIND
> TO GIVE IT A SOLID FOUNDATION, AS HE SAYS, BY SETTING IT FIRST ON AN
> ELEPHANT, THEN ON A TORTOISE, ETC. UNTIL "AFTER MANY YEARS OF ARDUOUS
> TOIL" HE GAVE UP.
> DOESN'T SOUND LIKE HE THOUGHT LOGICISM WAS A BIG SUCCESS!
> WHAT ABOUT FREGE? IN HIS LATER YEARS HE DECIDED THAT HIS
> ATTEMPT TO FOUND MATHEMATICS ON LOGIC WAS NOT ONLY A FAILURE, BUT
> INTUITIONS INSTEAD OF LOGIC.
> TOO BAD HE DIDN'T NOTICE WHAT A HUGE SUCCESS LOGICISM WAS.
> I THINK THIS DISAGREEMENT IS A MATTER OF EMPHASIS. YOU
> PROGRAM OF LOGICISM DOESN'T INTEREST YOU. SO ITS SUCCESS
> OR FAILURE ARE OF NO ACCOUNT. WHAT MATTERS IS ITS CONTRIBUTION
> TO LOGIC AND MATHEMATICS.
> I, ON THE OTHER HAND, AM STRONGLY INTERESTED IN PHILOSOPHICAL
> SO I SAY LOGICISM WAS A FAILURE IN ITS PHILOSOPHICAL
> PROGRAM. THEN YOU TELL ME THAT IT'S REALLY A GREAT SUCCESS, PROVIDED YOU
> IGNORE ITS PHILOSOPHICAL SIDE. JUST SAYING THE SAME
> THING, WITH DIFFERENT EMPHASIS.
> YOU CALL MY LETTER "BAD HISTORY." IS IT GOOD HISTORY TO
> MAYBE YOU DO THINK THAT'S GOOD.
Alonzo Church has characterized logicism as the view that logic and
mathematics are related as the elementary and advanced part of the same
subject. So whether you regard the program as a success depends on where
you draw the line. If the assumption (which Dedekind and Frege thought
they could prove) that there is a set containing infinitely many elements
is not taken to be part of logic, then logicism failed. Logicism was
primarily a scientific program and has to be judged by its accomplishments
and failures, not by what various folks have said about it at different
The sense in which logicism succeeded is that there is a seamless
development of all of mathematics from a simple foundation involving the
purest of abstractions. This is not just a contribution to logic. It is a satisfactory, durable, largely accepted F.O.M.
> AS TO FORMALISM, YOU SEEM TO IDENTIFY THE FORMALIST POSITION
> ON THE NATURE OF MATHEMATICS WITH THE VIEWS OF ONE GREAT FORMALIST,
> DAVID HILBERT. BUT HILBERT'S BRAND OF FORMALISM IS NO LONGER AROUND.
> FORMALIST THINKING IS AROUND, AND I WAS
> TALKING ABOUT FORMALISM AS YOU MAY HEAR IT EXPOUNDED NOW.
If formalism is not the philosophical views and mathematical program of
Hilbert, then I have no idea what you might be talking about. Is it gossip
in math common rooms you're trying to attack? Which thinkers? And what
coherent view did they have to offer as a program for foundations?
> MEANINGLESS. I KNOW THAT. (LAST SENTENCE OF MARTIN'S LETTER, SEE BELOW.)
> REFER TO PAGE 336 OF THE MATHEMATICAL EXPERIENCE:
> ABOUT REALITY IS TRUE. IF HE WAS PREPARED TO ADVOCATE A FORMALIST
> INTERPRETATION OF MATHEMATICS, IT WAS FOR THE SAKE OF OBTAINING CERTAINTY."
The point is just that he was not advocating "a formalist interpretation
of mathematics". He was proposing to *use* a formalization of mathematics
as a *tool* for proving consistency. It is as though you explain an
applied mathematician's using PDEs to study fluid flow by saying "X is
prepared to countenance a PDE interpretation of fluid flow as the price for
obtaining useful results."
As to the rest: I was replying to what I presumed was a careful considered
exposition of your developing views. All of your "see - I knew that,
because look what I wrote" is beside the point. You say P. I criticize P.
You can say:
1. Oh, I guess you're right, or
2. No, you didn't understand, let me be clearer, by P I meant P1, P2, and
P3, or
3. In criticizing my statement P, you said Q, and Q is ridiculous for the
> following reasons ..., or even
4. Your criticism shows such a combination of malice and ignorance that
it's useless for me to continue this.
What makes no sense is for you to say: "How can you think I need to be
told Q, look what I wrote, It proves that I knew Q all along." To which I
of course retort: "If you knew Q how could you have said P?"
Be well,
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1998-September/002221.html","timestamp":"2014-04-18T16:07:39Z","content_type":null,"content_length":"13174","record_id":"<urn:uuid:c91c0d1c-bbd3-4a61-a530-f9e7d0acbd03>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00240-ip-10-147-4-33.ec2.internal.warc.gz"} |
234 farenheit to celcius
You asked:
234 farenheit to celcius
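For reference, the standard conversion is C = (F − 32) × 5/9, so 234 degrees Fahrenheit is (234 − 32) × 5/9 = 202 × 5/9 ≈ 112.2 degrees Celsius.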
Approximation by Lupas-Type Operators and Szász-Mirakyan-Type Operators
Journal of Applied Mathematics
Volume 2012 (2012), Article ID 546784, 28 pages
Research Article
Approximation by Lupas-Type Operators and Szász-Mirakyan-Type Operators
^1Department of Mathematics Education, Sungkyunkwan University, Seoul 110-745, Republic of Korea
^2Department of Mathematics, Meijo University, Nagoya 468-8502, Japan
Received 28 July 2011; Accepted 5 January 2012
Academic Editor: Yuantong Gu
Copyright © 2012 Hee Sun Jung and Ryozi Sakai. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Lupas-type operators and Szász-Mirakyan-type operators are the modifications of Bernstein polynomials to infinite intervals. In this paper, we investigate the convergence of Lupas-type operators and
Szász-Mirakyan-type operators on .
1. Introduction and Main Results
For , Bernstein operator is defined as follows: Let and then we define Derriennic [1] gave a modified operator of such as and obtained the result that for , Lupas investigated a family of linear
positive operators which mapped the class of all bounded and continuous functions on into such that Moreover, Sahai and Prasad [2] modified Lupas operators as follows: Let be integrable on and let be
a positive integer. Then we define where In this paper, we assume that is a positive integer. Then they obtained the following;
Theorem 1.1 (see [2], Theorem 1). If is integrable on and admits its th and th derivatives, which are bounded at a point , and ( is a positive integer ) as , then
Theorem 1.1 holds only for bounded , so it does not mean the norm convergence on . In this paper, we improve Theorem 1.1 with respect to the norm convergence on .
Let and let be a positive weight, that is, for . For a function on , we define the norm by For convenience, for nonnegative integers , , and , we let Then we have the following results:
Theorem 1.2. Let . Let and be nonnegative integers and . Let satisfy Then we have uniformly for and , In particular, if , then we have uniformly for ,
Remark 1.3. (a) We see that for nonnegative integers , , and ,
(b) The following weight is useful.
Theorem 1.4. Let and be nonnegative integers and . Let satisfy Then we have uniformly for and ,
Let us define the weighted modulus of smoothness by where
Theorem 1.5. Let and be nonnegative integers and . Let . Then we have uniformly for and ,
The Szász-Mirakyan operators are also generalizations of Bernstein polynomials on infinite intervals. They are defined by: where
In [3], the class of Szász-Mirakyan operators was defined as follows: where and
Theorem 1.6 (see [3]). Let and be fixed numbers. Then there exists . depending only on and such that, for every uniformly continuous and bounded function on , the following inequalities hold;(a)(b)
where . (c) for every fixed , we have for every continuous with , , bounded on ,
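For orientation, a minimal numerical sketch of the classical (unmodified) Bernstein and Szász-Mirakyan operators that these constructions build on; the test function, degree, and truncation level are arbitrary illustrative choices, not the modified operators of this paper:

from math import comb, exp

def bernstein(f, n, x):
    # classical Bernstein operator on [0,1]: B_n(f)(x) = sum_k C(n,k) x^k (1-x)^(n-k) f(k/n)
    return sum(comb(n, k) * x**k * (1 - x)**(n - k) * f(k / n) for k in range(n + 1))

def szasz_mirakyan(f, n, x, terms=200):
    # classical Szasz-Mirakyan operator on [0,inf): S_n(f)(x) = e^(-nx) sum_k f(k/n) (nx)^k / k!
    total, term = 0.0, exp(-n * x)   # term holds e^(-nx) (nx)^k / k!, starting at k = 0
    for k in range(terms):
        total += f(k / n) * term
        term *= n * x / (k + 1)
    return total

f = lambda t: t * t
print(bernstein(f, 50, 0.3), szasz_mirakyan(f, 50, 0.3))   # both close to f(0.3) = 0.09

Both values approach f(0.3) as n grows, which is the kind of convergence the theorems above quantify in weighted norms.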
Now, we modify the Szász-Mirakyan operators as follows: let be integrable on , then we define where is a nonnegative integer. Then we have the following results:
Theorem 1.7. Let , and be nonnegative integers. Let satisfies Then one has uniformly for and , In particular, let . If one supposes , then one has uniformly for and ,
Remark 1.8. (a) We note that for nonnegative integers and ,
(b) The following weight is useful. where is defined in Remark 1.3.
Theorem 1.9. Let , , and be nonnegative integers. Let satisfies Then one has uniformly for and ,
Theorem 1.10. Let , , and be nonnegative integers. Then one has for ,
2. Proofs of Results
First, we will prove results for Lupas-type operators such as Theorems 1.2, 1.4, and 1.5. To prove theorems, we need some lemmas.
Lemma 2.1. Let and be nonnegative integers and . Let Then(i),(ii) (iii)for , where ; (iv)for , where is a polynomial of degree such that the coefficients are bounded independently of and they are
positive for .
Proof. (i), (ii), and (iii) have been proved in [2, Lemma 1]. So we may show only the part of (2.4). For , (2.4) holds. Let us assume (2.4) for . We note So, we have by the assumption of induction,
Here, if is even, then and if is odd, then Hence, we have and here we see that is a polynomial of degree such that the coefficients of are bounded independently of . Moreover, we see from (2.6) that
the coefficients of are positive for .
Lemma 2.2 (see [2, Lemma 2]). Let be a nonnegative integer and . Then one has for :
Let Then we have where is defined by (1.10).
Proof of Theorem 1.2. Let . By the second inequality in (1.11), Let , First, we see by (2.13) and Lemma 2.1, Next, we estimate . By the first inequality in (1.11), Here, using and the notation: we
have Then, we obtain Here, we used the following that for , because And we know that Thus, we obtain Therefore, we have uniformly on , Here, if we let , then we have that is, (1.12) is proved. So, we
also have a norm convergence (1.13).
Proof of Theorem 1.4. We know that for , where . Then we obtain from (2.10) and (2.27), and from (2.28), Using , we have Therefore, we have Since we know that for , we have
Lemma 2.3. Let and be nonnegative integers and . Let satisfies Then one has uniformly for , and ,
Proof. Using , we have The assumption (2.35) means Then we can obtain by (2.10), Consequently, since is uniformly bounded on , we have the result.
The Steklov function for is defined as follows:
Then for the Steklov function with respect to , we have the following properties.
Lemma 2.4 (cf.[4]). Let and be a positive and nonincreasing function on . Then (i) ;
Proof. (i) For , we have the Steklov functions and as follows. We note Then, we can see from (2.44), Similarly to (2.44), we know Therefore, we have from (2.46), Therefore, (i) is proved.
(ii) We easily see from (2.44) that
(iii) From (2.46), we have
(iv) From (2.47), we have
Proof of Theorem 1.5. We know that for , Then, we have From (2.51) and (2.41) of Lemma 2.4, Here, we suppose and then we know that From Theorem 1.4, (2.51), (2.42), and (2.43) of Lemma 2.4, we have
Therefore, we have If we let , then because .
From now on, we will prove Theorems 1.7, 1.9, and 1.10, which are the results for the Szász-Mirakyan operators, analogously to the case of Lupas-type operators.
Lemma 2.5. Let be a nonnegative integer. Then one has for ,
Proof. We know that Therefore, we have
Lemma 2.6. Let , , and be nonnegative integers. Then one has(i) and ; (ii)For (iii) where is a polynomial of degree such that the coefficients of are bounded independently of .
Proof. Let . Then (i)
(ii) Using , we obtain Here, we see Then substituting (2.66) for (2.65), we consider the following; Then, we have Here the last equation follows by parts of integration. Furthermore, we have
Therefore, we have
(iii) It is proved by the same method as the proof of Lemma 2.1 (iv).
Proof of Theorem 1.7. Let . By the second inequality in (1.30), Let , , First, we see that by (2.71) and Lemma 2.6(i), Next, to estimate , we split it into two parts: First, we estimate Then, using
the following facts: we have Then, using (2.18) and Lemma 2.6, we have | {"url":"http://www.hindawi.com/journals/jam/2012/546784/","timestamp":"2014-04-16T21:55:21Z","content_type":null,"content_length":"1046315","record_id":"<urn:uuid:e5f833f2-5bb3-4640-bfea-1ade9c114587>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00271-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability of selecting balls from boxes
There are three boxes. B1, B2, B3 The probability of selecting them is 0.2, 0.2 , 0.6 respectively.
B1 contains 3 red balls and 7 green balls. B2 contains 5 red balls and 5 green balls. B3 contains 2 red balls and 8 green balls.
If we select a box and then a ball from the box, what is the probability that the ball is of red color?
If we select a ball and it turns out to be of green color, what is the probability that it comes from B3?
3 MO is not really for such questions; have you read the FAQ? – Qiaochu Yuan Aug 21 '10 at 6:36
closed as too localized by Qiaochu Yuan, Robin Chapman, Yemon Choi, Victor Protsak, Harry Gindi Aug 21 '10 at 7:06
1 Answer
Well, this looks more like someone trying to get their homework done, but for the first part:
$ p = 0.2 * \frac{3}{3+7} + 0.2 * \frac{5}{5+5} + 0.6 * \frac{2}{2+8}$
$ p = 0.06 + 0.10 + 0.12 $
$ p = 0.28 $
Showed the work for you too.
So if the probability that a chosen ball is red is 28%, then the probability that a chosen ball is green is 72%.
So what is {probability that chosen ball came from B3 | chosen ball is green}? Look up conditional probability, look up bayesian, etc.
$ p_g = 0.2 * \frac{7}{3+7} + 0.2 * \frac{5}{5+5} + 0.6 * \frac{8}{2+8}$
$ p_g = 0.14 + 0.10 + 0.48 $
$ p_g = 0.72 $
{ $p_g3$ | green ball} = (picked from box 3 and green) / (picked green)
= 0.48 / 0.72
= 2/3
Please do your homework yourself. Showed the work for you too.
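A quick Monte Carlo check of both numbers (Python standard library; the encoding of the boxes is illustrative):

import random

boxes = {"B1": (0.2, 3, 7), "B2": (0.2, 5, 5), "B3": (0.6, 2, 8)}   # (P(box), red, green)
red = green = green_from_b3 = 0
for _ in range(1_000_000):
    name = random.choices(list(boxes), weights=[b[0] for b in boxes.values()])[0]
    _, r, g = boxes[name]
    if random.random() < r / (r + g):
        red += 1
    else:
        green += 1
        green_from_b3 += (name == "B3")
print(red / 1_000_000)        # close to 0.28
print(green_from_b3 / green)  # close to 2/3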
Oh no the problem I am solving is entirely different. Just wanted to check that my line of thinking is correct. – Akshar Prabhu Desai Aug 21 '10 at 6:50
2 -1 : as a matter of policy, if you think that something is homework then you shouldn't answer it. – Andy Putman Aug 21 '10 at 14:54
2 Andy, I'm sorry that I didn't know about the restriction on what to answer. Should I edit the answer away? – sleepless in beantown Aug 22 '10 at 0:01
@sleepless : At this point, editing it is kind of pointless. Just keep it in mind in the future (I certainly understand the urge to answer questions even if they seem too
easy!). – Andy Putman Aug 24 '10 at 2:08
Physics updates and multi-index summation
• New option in Setup: redefinesum, so that the sum command is redefined in such a way that
a) the sum arguments are processed in a way avoiding premature evaluation and related unexpected results or error interruptions
b) the sum command includes new functionality present in Physics:-Library:-Add to perform sum over integer values of many indices, as in
New option: redefine sum so that its arguments are processed by the more modern Physics:-Library:-Add and so that it can perform multi-index summation.
By default, the sum command is not redefined, so the value of redefinesum is false.
Consider this multi-index summation functionality of the Physics:-Library:-Add command
For instance, for n = 2,
This functionality can be plugged directly into the sum command. For that purpose, set redefinesum to true
You can now compute directly with sum. The left-hand side is inert while the right-hand side is computed
The formula for the integer power of a sum
Verify whether this equation is true
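The multi-index sum behind this identity is easy to verify numerically outside Maple as well; a short Python sketch (function and variable names are illustrative):

from itertools import product
from math import factorial

def power_of_sum(xs, n):
    # (x1 + ... + xm)^n = sum over k1+...+km = n of n!/(k1!...km!) * x1^k1 ... xm^km
    m, total = len(xs), 0
    for ks in product(range(n + 1), repeat=m):
        if sum(ks) == n:
            coeff, term = factorial(n), 1
            for x, k in zip(xs, ks):
                coeff //= factorial(k)
                term *= x**k
            total += coeff * term
    return total

xs = [2, 3, 5]
print(power_of_sum(xs, 4), sum(xs)**4)   # both print 10000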
Besides this new functionality, the redefined sum does a more modern handling of its arguments, consider a typical problem posted in Maple primes
In the following summation, j is a dummy summation index, so the value just assigned is not expected to interfere with the summation. This is the case with the redefined sum
while without redefining sum the input above is interrupted with an error message. Likely, in this other case also reported in Mapleprimes
the following two summations can be performed after having redefined sum:
For the summation above, without redefining sum, it returns 0 instead of unevaluated, because of a premature evaluation of the function with an unassigned index i before performing the summation.
Returning unevaluated as (1.16) permits evaluating the sum at a later moment, for instance attributing a value to f
And this other sum where f is given from the beginning also returns 0 without redefining sum
Problems like this other one reported in Mapleprimes here also get resolved with this redefinition of sum. | {"url":"http://www.mapleprimes.com/posts/200118-Physics-Updates-And-Multiindex-Summation?ref=Feed:MaplePrimes:New%20Questions","timestamp":"2014-04-16T07:25:20Z","content_type":null,"content_length":"129966","record_id":"<urn:uuid:ba83ed1f-a7e1-40b8-8da6-f5b1ca7af29a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00112-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems
January 2012 (vol. 18 no. 1)
pp. 17-29
With the computing industry trending toward multi- and many-core processors, we study how a standard visualization algorithm, raycasting volume rendering, can benefit from a hybrid parallelism
approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large numbers of nodes increases available FLOPs and memory, while exploiting
shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and
strong scaling studies, at levels of concurrency ranging up to 216,000, and with data sets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication
portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
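Schematically, the pattern the abstract describes can be sketched in a few lines (Python with mpi4py and threads; purely illustrative and not the paper's implementation, and real raycasting compositing uses the over operator rather than the summation used here):

from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nodes = comm.Get_rank(), comm.Get_size()    # one MPI rank (process) per node

def render_block(block_id):
    # placeholder: raycast one sub-volume block and return a partial image
    return [float(block_id % 2)] * 64

# shared-memory parallelism: the cores of one node render its blocks concurrently
with ThreadPoolExecutor(max_workers=8) as pool:
    partials = list(pool.map(render_block, range(rank * 8, (rank + 1) * 8)))

local = [sum(px) for px in zip(*partials)]        # composite within the node first
parts = comm.gather(local, root=0)                # far fewer compositing participants
if rank == 0:
    image = [sum(vals) for vals in zip(*parts)]   # final composite across nodes
    print("composited", len(image), "pixels from", nodes, "nodes")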
[1] K. Asanovic, R. Bodik, B.C. Catanzaro, J.J. Gebis, P. Husbands, K. Keutzer, D.A. Patterson, W.L. Plishker, J. Shalf, S.W. Williams, and K.A. Yelick, “The Landscape of Parallel Computing Research:
A View from Berkeley,” Technical Report UCB/EECS-2006-183, EECS Dept., Univ. of California, Berkeley, http://www.eecs.berkeley.edu/Pubs/TechRpts/ 2006EECS-2006-183.html, Dec. 2006.
[2] C.T. Silva, A.E. Kaufman, and C. Pavlakos, “PVR: High-Performance Volume Rendering,” IEEE Computational Science and Eng., vol. 3, no. 4, pp. 18-28, Dec. 1996.
[3] J. Kniss, P. McCormick, A. McPherson, J. Ahrens, J. Painter, A. Keahey, and C. Hansen, “Interactive Texture-Based Volume Rendering for Large Data Sets,” IEEE Computer Graphics and Applications,
vol. 21, no. 4, pp. 52-61, July 2001.
[4] T. Peterka, H. Yu, R. Ross, K.L. Ma, and R. Latham, “End-to-End Study of Parallel Volume Rendering on the IBM Blue Gene/P,” Proc. Int'l Conf. Parallel Processing (ICPP '09), Sept. 2009.
[5] M. Howison, E.W. Bethel, and H. Childs, “MPI-Hybrid Parallelism for Volume Rendering on Large, Multicore Clusters,” Proc. Eurographics Parallel Visualization and Graphics, May 2010.
[6] M. Levoy, “Display of Surfaces from Volume Data,” IEEE Computer Graphics and Applications, vol. 8, no. 3, pp. 29-37, May 1988.
[7] R.A. Drebin, L. Carpenter, and P. Hanrahan, “Volume Rendering,” ACM SIGGRAPH Computer Graphics, vol. 22, no. 4, pp. 65-74, 1988.
[8] A. Kaufman and K. Mueller, “Overview of Volume Rendering,” The Visualization Handbook, C.D. Hansen and C.R. Johnson, eds., pp. 127-174, Elsevier, 2005.
[9] K.L. Ma, J.S. Painter, C.D. Hansen, and M.F. Krogh, “A Data Distributed, Parallel Algorithm for Ray-Traced Volume Rendering,” Proc. Symp. Parallel Rendering, pp. 15-22, Oct. 1993.
[10] R. Tiwari and T.L. Huntsberger, “A Distributed Memory Algorithm for Volume Rendering,” Proc. Scalable High Performance Computing Conf., May 1994.
[11] K.L. Ma, “Parallel Volume Ray-Casting for Unstructured-Grid Data on Distributed-Memory Architectures,” PRS '95: Proc. IEEE Symp. Parallel Rendering, pp. 23-30, 1995.
[12] C. Bajaj, I. Ihm, G. Joo, and S. Park, “Parallel Ray Casting of Visibly Human on Distributed Memory Architectures,” Proc. VisSym Joint EUROGRAPHICS-IEEE TVCG Symp. Visualization, pp. 269-276,
[13] P. Sabella, “A Rendering Algorithm for Visualizing 3D Scalar Fields,” ACM SIGGRAPH Computer Graphics, vol. 22, no. 4, pp. 51-58, 1988.
[14] C. Upson and M. Keeler, “V-Buffer: Visible Volume Rendering,” Proc. ACM SIGGRAPH, pp. 59-64, 1988.
[15] J. Nieh and M. Levoy, “Volume Rendering on Scalable Shared-Memory MIMD Architectures,” Proc. Workshop Vol. Visualization, pp. 17-24, Oct. 1992.
[16] C. Müller, M. Strengert, and T. Ertl, “Optimized Volume Raycasting for Graphics-Hardware-Based Cluster Systems,” Proc. Eurographics Parallel Graphics and Visualization, pp. 59-66, 2006.
[17] B. Moloney, M. Ament, D. Weiskopf, and T. Moller, “Sort First Parallel Volume Rendering,” IEEE Trans. Visualization and Computer Graphics, vol. 99, Sept. 2010.
[18] H. Childs, M.A. Duchaineau, and K.L. Ma, “A Scalable, Hybrid Scheme for Volume Rendering Massive Data Sets,” Proc. Eurographics Symp. Parallel Graphics and Visualization, pp. 153-162, 2006.
[19] T. Peterka, D. Goodell, R. Ross, H.W. Shen, and R. Thakur, “A Configurable Algorithm for Parallel Image-Compositing Applications,” Supercomputing '09: Proc. ACM/IEEE Conf. Supercomputing, pp.
1-10, 2009.
[20] W. Kendall, T. Peterka, J. Huang, H.W. Shen, and R. Ross, “Accelerating and Benchmarking k Image Compositing at Large Scale,” Proc. Eurographics Parallel Visualization and Graphics, May 2010.
[21] G. Hager, G. Jost, and R. Rabenseifner, “Communication Characteristics and Hybrid MPI/OpenMP Parallel Programming on Clusters of Multi-Core SMP Nodes,” Proc. Cray User Group Conf., 2009.
[22] D. Mallón, G. Taboada, C. Teijeiro, J. Tourino, B. Fraguela, A. Gómez, R. Doallo, and J. Mourino, “Performance Evaluation of MPI, UPC and OpenMP on Multicore Architectures,” Proc. 16th European
PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface (EuroPVM/MPI '09), Sept. 2009.
[23] T. Peterka, R. Ross, H. Yu, K.L. Ma, W. Kendall, and J. Huang, “Assessing Improvements to the Parallel Volume Rendering Pipeline at Large Scale,” UltraVis '08: Proc. Workshop Ultrascale
Visualization, 2008.
[24] T. Fogal, H. Childs, S. Shankar, J. Krüger, D. Bergeron, and P. Hatcher, “Large Data Visualization on Distributed Memory Multi-GPU Clusters,” Proc. Conf. High Performance Graphics, 2010.
[25] M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra, MPI - The Complete Reference: The MPI Core, second ed. MIT Press, 1998.
[26] D.R. Butenhof, Programming with POSIX Threads. Addison-Wesley Longman Publishing Co., Inc., 1997.
[27] R. Chandra, L. Dagum, D. Kohr, D. Maydan, J. McDonald, and R. Menon, Parallel Programming in OpenMP. Morgan Kaufmann Publishers, Inc., 2001.
[28] NVIDIA Corporation, NVIDIA ${\rm CUDA}$ Programming Guide Version 3.0, http://developer.nvidia.com/objectcuda_3_0_downloads. html , 2010.
[29] “The Top 500 Supercomputers,” http:/www.top500.org, 2009.
[30] J.K. Lawder and P.J.H. King, “Using Space-Filling Curves for Multi-Dimensional Indexing,” Proc. British Nat'l Conf. Databases: Advances in Databases, pp. 20-35, 2000.
Index Terms:
Volume visualization, parallel processing.
Mark Howison, E. Wes Bethel, Hank Childs, "Hybrid Parallelism for Volume Rendering on Large-, Multi-, and Many-Core Systems," IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 1,
pp. 17-29, Jan. 2012, doi:10.1109/TVCG.2011.24
User’s guide to viscosity solutions of second order partial differential equations
Results 1 - 10 of 557
, 1997
"... A novel scheme for the detection of object boundaries is presented. The technique is based on active contours evolving in time according to intrinsic geometric measures of the image. The
evolving contours naturally split and merge, allowing the simultaneous detection of several objects and both in ..."
Cited by 1073 (43 self)
A novel scheme for the detection of object boundaries is presented. The technique is based on active contours evolving in time according to intrinsic geometric measures of the image. The evolving
contours naturally split and merge, allowing the simultaneous detection of several objects and both interior and exterior boundaries. The proposed approach is based on the relation between active
contours and the computation of geodesics or minimal distance curves. The minimal distance curve lays in a Riemannian space whose metric is defined by the image content. This geodesic approach for
object segmentation allows to connect classical "snakes" based on energy minimization and geometric active contours based on the theory of curve evolution. Previous models of geometric active
contours are improved, allowing stable boundary detection when their gradients suffer from large variations, including gaps. Formal results concerning existence, uniqueness, stability, and
correctness of the evolution are presented as well. The scheme was implemented using an efficient algorithm for curve evolution. Experimental results of applying the scheme to real images including
objects with holes and medical data imagery demonstrate its power. The results may be extended to 3D object segmentation as well.
- International Journal of Computer Vision , 2002
"... We propose a new multiphase level set framework for image segmentation using the Mumford and Shah model, for piecewise constant and piecewise smooth optimal approximations. The proposed method
is also a generalization of an active contour model without edges based 2-phase segmentation, developed by ..."
Cited by 316 (21 self)
We propose a new multiphase level set framework for image segmentation using the Mumford and Shah model, for piecewise constant and piecewise smooth optimal approximations. The proposed method is
also a generalization of an active contour model without edges based 2-phase segmentation, developed by the authors earlier in T. Chan and L. Vese (1999. In Scale-Space'99, M. Nilsen et al. (Eds.),
LNCS, vol. 1682, pp. 141--151) and T. Chan and L. Vese (2001. IEEE-IP, 10(2):266--277). The multiphase level set formulation is new and of interest on its own: by construction, it automatically
avoids the problems of vacuum and overlap; it needs only log n level set functions for n phases in the piecewise constant case; it can represent boundaries with complex topologies, including triple
junctions; in the piecewise smooth case, only two level set functions formally suffice to represent any partition, based on The Four-Color Theorem. Finally, we validate the proposed models by
numerical results for signal and image denoising and segmentation, implemented using the Osher and Sethian level set method.
"... We show that the porous medium equation has a gradient flow structure which is both physically and mathematically natural. In order to convince the reader that it is mathematically natural, we
show the time asymptotic behavior can be easily understood in this framework. We use the intuition and the ..."
Cited by 224 (10 self)
We show that the porous medium equation has a gradient flow structure which is both physically and mathematically natural. In order to convince the reader that it is mathematically natural, we show
the time asymptotic behavior can be easily understood in this framework. We use the intuition and the calculus of Riemannian geometry to quantify this asymptotic behavior. Contents: 1 The porous medium equation as a gradient flow (1.1 The porous medium equation; 1.2 Abstract gradient flow; 1.3 Two interpretations of the porous medium equation as gradient flow); 2 A physical argument in favor of the new gradient flow; 3 A mathematical argument in favor of new gradient flow (3.1 Self similar solutions and asymptotic behaviour; 3.2 A new asymptotic result; 3.3 The asymptotic result express...
, 1997
"... A new boundary detection approach for shape modeling is presented. It detects the global minimum of an active contour model’s energy between two end points. Initialization is made easier and the
curve is not trapped at a local minimum by spurious edges. We modify the “snake” energy by including the ..."
Cited by 196 (65 self)
A new boundary detection approach for shape modeling is presented. It detects the global minimum of an active contour model’s energy between two end points. Initialization is made easier and the
curve is not trapped at a local minimum by spurious edges. We modify the “snake” energy by including the internal regularization term in the external potential term. Our method is based on finding a
path of minimal length in a Riemannian metric. We then make use of a new efficient numerical method to find this shortest path. It is shown that the proposed energy, though based only on a potential
integrated along the curve, imposes a regularization effect like snakes. We explore the relation between the maximum curvature along the resulting contour and the potential generated from the image.
The method is capable to close contours, given only one point on the objects’ boundary by using a topology-based saddle search routine. We show examples of our method applied to real aerial and
medical images.
- IEEE TRANSACTIONS ON AUTOMATIC CONTROL , 1998
"... Complex natural and engineered systems typically possess a hierarchical structure, characterized by continuousvariable dynamics at the lowest level and logical decision-making at the highest.
Virtually all control systems today---from flight control to the factory floor---perform computer-coded chec ..."
Cited by 183 (8 self)
Complex natural and engineered systems typically possess a hierarchical structure, characterized by continuousvariable dynamics at the lowest level and logical decision-making at the highest.
Virtually all control systems today---from flight control to the factory floor---perform computer-coded checks and issue logical as well as continuous-variable control commands. The interaction of
these different types of dynamics and information leads to a challenging set of "hybrid" control problems. We propose a very general framework that systematizes the notion of a hybrid system,
combining differential equations and automata, governed by a hybrid controller that issues continuous-variable commands and makes logical decisions. We first identify the phenomena that arise in
real-world hybrid systems. Then, we introduce a mathematical model of hybrid systems as interacting collections of dynamical systems, evolving on continuous-variable state spaces and subject to
continuous controls and discrete transitions. The model captures the identified phenomena, subsumes previous models, yet retains enough structure on which to pose and solve meaningful control
problems. We develop a theory for synthesizing hybrid controllers for hybrid plants in an optimal control framework. In particular, we demonstrate the existence of optimal (relaxed) and near-optimal
(precise) controls and derive "generalized quasi-variational inequalities" that the associated value function satisfies. We summarize algorithms for solving these inequalities based on a generalized
Bellman equation, impulse control, and linear programming.
, 1992
"... Following an idea of G. Nguetseng, the author defines a notion of "two-scale" convergence, which is aimed at a better description of sequences of oscillating functions. Bounded sequences in L2
(f) are proven to be relatively compact with respect to this new type of convergence. A corrector-type theor ..."
Cited by 176 (11 self)
Following an idea of G. Nguetseng, the author defines a notion of "two-scale" convergence, which is aimed at a better description of sequences of oscillating functions. Bounded sequences in L^2(Ω) are proven to be relatively compact with respect to this new type of convergence. A corrector-type theorem (i.e., which permits, in some cases, replacing a sequence by its "two-scale" limit, up to a strongly convergent remainder in L^2(Ω)) is also established. These results are especially useful for the homogenization of partial differential equations with periodically oscillating coefficients.
In particular, a new method for proving the convergence of homogenization processes is proposed, which is an alternative to the so-called energy method of Tartar. The power and simplicity of the
two-scale convergence method is demonstrated on several examples, including the homogenization of both linear and nonlinear second-order elliptic equations.
- SIAM J. Sci. Comput , 1997
"... In this paper, we present a weighted ENO (essentially non-oscillatory) scheme to approximate the viscosity solution of the Hamilton-Jacobi equation: OE t +H(x 1 ; \Delta \Delta \Delta ; x d ; t;
OE; OE x1 ; \Delta \Delta \Delta ; OE xd ) = 0: This weighted ENO scheme is constructed upon and has the ..."
Cited by 151 (0 self)
In this paper, we present a weighted ENO (essentially non-oscillatory) scheme to approximate the viscosity solution of the Hamilton-Jacobi equation: φ_t + H(x_1, ..., x_d, t, φ, φ_{x_1}, ..., φ_{x_d}) = 0. This weighted ENO scheme is constructed upon and has the same stencil nodes as the 3rd order ENO scheme but can be as high as 5th order accurate in the smooth part of the solution. In addition to the accuracy improvement, numerical comparisons between the two schemes also demonstrate that the weighted ENO scheme is more robust than the ENO scheme. Key words. ENO, weighted ENO, Hamilton-Jacobi equation, shape from shading, level set. AMS(MOS) subject classification. 35L99, 65M06. 1 Introduction The Hamilton-Jacobi equation: φ_t + H(x, t, φ, Dφ) = 0; φ(x, 0) = φ_0(x) (1.1) where x ∈ R^d ...
, 1999
"... The eikonal equation and variants of it are of significant interest for problems in computer vision and image processing. It is the basis for continuous versions of mathematical morphology,
stereo, shape-from-shading and for recent dynamic theories of shape. Its numerical simulation can be delicate, ..."
Cited by 119 (12 self)
The eikonal equation and variants of it are of significant interest for problems in computer vision and image processing. It is the basis for continuous versions of mathematical morphology, stereo,
shape-from-shading and for recent dynamic theories of shape. Its numerical simulation can be delicate, owing to the formation of singularities in the evolving front and is typically based on level
set methods. However, there are more classical approaches rooted in Hamiltonian physics which have yet to be widely used by the computer vision community. In this paper we review the Hamiltonian
formulation, which offers specific advantages when it comes to the detection of singularities or shocks. We specialize to the case of Blum's grass fire flow and measure the average outward flux of the
vector field that underlies the Hamiltonian system. This measure has very different limiting behaviors depending upon whether the region over which it is computed shrinks to a singular point or a
non-singular one. Hence, it is an effective way to distinguish between these two cases. We combine the flux measurement with a homotopy preserving thinning process applied in a discrete lattice. This
leads to a robust and accurate algorithm for computing skeletons in 2D as well as 3D, which has low computational complexity. We illustrate the approach with several computational examples.
, 1995
"... In this paper, we analyze geometric active contour models from a curve evolution point of view and propose some modifications based on gradient flows relative to certain new feature-based
Riemannian metrics. This leads to a novel edge-detection paradigm in which the feature of interest may be consid ..."
Cited by 117 (30 self)
In this paper, we analyze geometric active contour models from a curve evolution point of view and propose some modifications based on gradient flows relative to certain new feature-based Riemannian
metrics. This leads to a novel edge-detection paradigm in which the feature of interest may be considered to lie at the bottom of a potential well. Thus an edge-seeking curve is attracted very
naturally and efficiently to the desired feature. Comparison with the Allen-Cahn model clarifies some of the choices made in these models, and suggests inhomogeneous models which may in return be
useful in phase transitions. We also consider some 3-D active surface models based on these ideas. The justification of this model rests on the careful study of the viscosity solutions of evolution
equations derived from a level-set approach. Key words: Active vision, antiphase boundary, visual tracking, edge detection, segmentation, gradient flows, Riemannian metrics, viscosity solutions,
geometric heat equ...
- SIAM J. Numer. Anal , 1997
"... This paper is concerned with a classical denoising and deblurring problem in image recovery. Our approach is based on a variational method. By using the Legendre-Fenchel transform, we show how
the nonquadratic criterion to be minimized can be split into a sequence of half-quadratic problems easier t ..."
Cited by 101 (22 self)
This paper is concerned with a classical denoising and deblurring problem in image recovery. Our approach is based on a variational method. By using the Legendre-Fenchel transform, we show how the
nonquadratic criterion to be minimized can be split into a sequence of half-quadratic problems easier to solve numerically. First we prove an existence and uniqueness result, and then we describe the
algorithm for computing the solution and we give a proof of convergence. Finally, we present some experimental results for synthetic and real images. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=115734","timestamp":"2014-04-18T20:04:40Z","content_type":null,"content_length":"41190","record_id":"<urn:uuid:5a751253-a28a-43da-b8a0-78176953fd56>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00130-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Maximum Entropy Probability Density Functions
The principle of maximum entropy can be used to find the probability distribution, subject to a specified constraint, that is maximally noncommittal regarding missing information about the
distribution. In this Demonstration the principle of maximum entropy is used to find the probability density function of discrete random variables defined on the interval [0, 1], subject to user-specified constraints regarding the mean and variance. The resulting probability distribution is referred to as an A_p distribution [1]. The mean of the distribution associated with a proposition is the
probability of that proposition, and the variance of the distribution is a measure of the amount of confidence associated with predicting the probability of the proposition. When only the mean is
specified, the entropy of the distribution is maximal when the specified mean probability is 1/2. When both mean and variance are specified, the entropy of the distribution decreases as the specified
variance decreases.
Probabilities are used to characterize the likelihood of events or propositions. In some circumstances, predictions of probability carry a high degree of confidence. For example, an individual can
confidently predict that a fair coin will produce “heads” in one flip with probability 1/2. By way of contrast, there is more uncertainty associated with a weather prediction that gives the probability of rain tomorrow. E. T. Jaynes developed the concept of the A_p distribution to deal with what he described as different states of external and internal knowledge. In the terminology of
Jaynes, the probability of the proposition is found by computing the mean of the distribution, and the variance of the distribution is a measure of the amount of confidence associated with the
prediction of the mean. In situations where you have high states of internal knowledge, like the case of the coin, the variance of the distribution is small. In fact, for the case of the coin, the
variance of the distribution is 0.
The entropy is a measure of the amount of disorder in a probability density function. The principle of maximum entropy can be used to find distributions in circumstances where the only specified
information is the mean of the distribution or the mean and variance of the distribution. The distributions in this Demonstration are evaluated at points x_i in [0, 1] for i = 1, ..., n. If the probability density at these points is denoted by p_i, then the mean μ, variance σ², and entropy S of the distribution are respectively given by
μ = Σ_i x_i p_i,   σ² = Σ_i (x_i − μ)² p_i,   S = −Σ_i p_i ln p_i.
If the mean of the distribution is specified, then the corresponding maximum entropy probability distribution can be found using the technique of Lagrange multipliers [2]. This requires finding the maximum of the quantity
S + λ₀ (Σ_i p_i − 1) + λ₁ (Σ_i x_i p_i − μ),
where the unknowns are the probabilities p_i and the Lagrange multipliers λ₀ and λ₁. If the mean and the variance of the distribution are both specified, then it is necessary to find the maximum value of the quantity
S + λ₀ (Σ_i p_i − 1) + λ₁ (Σ_i x_i p_i − μ) + λ₂ (Σ_i (x_i − μ)² p_i − σ²),
where λ₂ is an additional Lagrange multiplier.
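A numerical rendering of this constrained maximization (Python; numpy and scipy are assumed available, and the support points and constraint values are example choices, not part of the Demonstration):

import numpy as np
from scipy.optimize import minimize

n = 21
x = np.linspace(0, 1, n)                  # support points of the distribution
mean_target, var_target = 0.3, 0.02       # example constraints

def neg_entropy(p):
    return np.sum(p * np.log(p + 1e-12))  # minimize -S = sum_i p_i ln p_i

cons = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1},
    {"type": "eq", "fun": lambda p: np.dot(p, x) - mean_target},
    {"type": "eq", "fun": lambda p: np.dot(p, (x - mean_target) ** 2) - var_target},
]
res = minimize(neg_entropy, np.full(n, 1 / n), bounds=[(0, 1)] * n, constraints=cons)
print(res.x.round(4), -res.fun)           # MaxEnt probabilities and their entropy

Dropping the third constraint gives the mean-only case; as the text notes, the achievable entropy then rises.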
[1] E. T. Jaynes, Probability Theory: The Logic of Science, New York: Cambridge University Press, 2003.
[2] P. Gregory, Bayesian Logical Data Analysis for the Physical Sciences, Cambridge: Cambridge University Press, 2005.
A few Calculus problems.
February 2nd 2009, 06:49 PM #1
I have been working on my AP Calculus homework, and there are a few things I can't understand. Can you help? These are three separate, unrelated problems.
1) I need to graph and explain the graph of ln(ln(x))
2) For the graph of the function of f, where x = # of DVDs produced and y = Cost of production, what does f^-1(10) represent? (That's f-inverse(10), I wasn't sure if the way I typed it was
correct or not.)
3) I was given the inverse function of F. Do I get F using the same method I would to get inverse of F only go the other way around? and is finding the F function how I should be finding f(1)?
If anybody helps, I really appreciate it. And I'm not specifically looking for answers, I would like an explanation of how to do the problems. I like knowing how and why instead of what the
answer is. Thanks again!
so what have you done here? note that the domain of the graph would be $(1, \infty)$. why? and it will be somewhat similar in shape to ln(x) but it would increase less steeply. why?
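for instance, a few sample values make both points concrete (a quick Python check):

from math import log

for x in [1.0001, 1.5, 2, 10, 100, 10000]:
    print(x, log(log(x)))   # defined only for x > 1, since ln(x) must be positive
# near x = 1 the values plunge toward -infinity; for large x the growth is very slow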
2) For the graph of the function of f, where x = # of DVDs produced and y = f(x) = Cost of production, what does f^-1(10) represent? (That's f-inverse(10), I wasn't sure if the way I typed it was
correct or not.)
do you understand the relationship between a function and its inverse?
say $f^{-1}(10) = x$, then we have $10 = f(x)$
now, can you say what $f^{-1}(10)$ was?
3) I was given the inverse function of F. Do I get F using the same method I would to get inverse of F only go the other way around?
and is finding the F function how I should be finding f(1)?
after you find f, plug in 1
I still do not understand #2. I am not needing to know the value of f^-1(10) but what it represents.
How did you find the domain of number one? I thought the domain of ln was [0, infinity)
And how are you doing those math symbols? That's pretty cool.
The inverse of a function is a reflection over the line y=x ?
I have been working on my AP Calculus homework, and there are a few things I can't understand. Can you help? These are three separate, unrelated problems.
1) I need to graph and explain the graph of ln(ln(x))
Have you graphed it? Remember that it is not saying ln(x) times ln(x), this is a composite function, ie. f(g(x)) so you are evaluating ln(x) IN TERMS OF ln(x). This is a really cool question and
I think if you look at the graph of this function for awhile you can come up with a good answer. Hint: Remember that a log is just an exponent. These kinds of questions are supposed to make you
think, not get an answer from someone on a help forum. If you are in AP Calculus, you didn't get there by accident
2) For the graph of the function of f, where x = # of DVDs produced and y = Cost of production, what does $f^{-1}(10)$ represent?
Typically, the inverse of a cost vs. production graph, is the price per unit of what ever is being produced. Cost of production increases as more units must be produced; the more units they can
sell means the lower the price of each unit can be. $f^{-1} (10)$ is asking you what is happening on the inverse graph of f(x) at x=10.
3) I was given the inverse function of F. Do I get F using the same method I would to get inverse of F only go the other way around? and is finding the F function how I should be finding f(1)?
If f is the inverse of g, then g is the inverse of f. This is an elementary property of inverse functions.
If anybody helps, I really appreciate it. And I'm not specifically looking for answers, I would like an explanation of how to do the problems. I like knowing how and why instead of what the
answer is. Thanks again!
f(x) is the cost, x is the number of DVDs. i told you that saying $f^{-1}(10) = x$ is the same as saying $f(x) = 10$. now can you state what it represents?
How did you find the domain of number one? I thought the domain of ln was [0, infinity)
actually, zero is not included in the domain
thus, the domain of $\ln (\ln x)$ is all real $x$ so that $\ln x > 0$. $\ln x = 0$ for $x = 1$, thus we want $x > 1$
And how are you doing those math symbols? That's pretty cool.
see here.
The inverse of a function is a reflection over the line y=x ?
that's true. but that's not the relationship i was going for. Molly already said it, "if f is the inverse of g, then g is the inverse of f...."
Does taking logs of two variables increase correlation between the two?
This may be a naive question, but if I have random variables X and Y and take logs of both, would corr(log X, log Y) be greater than corr(X, Y)? Thank you in advance for
your answer.
Tags: st.statistics
short answer: not necessarily. for example:
let X be a positive random variable with a pdf supported on some non-degenerate interval, [0, 1] say. let Y = 1 + X.
then X and Y are perfectly correlated. but U = logX and V = log Y = log (1 + X) = log(1 + $e^U$) are not linearly related, so their correlation is less than 1.
[X can also be discrete, as long as it assumes at least 3 different values with positive probability.]
this example can be tweaked so that X and Y start out not perfectly correlated, but still corr(U,V) < corr(X,Y).
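a quick numerical check of this construction (Python with numpy; the particular distribution for X is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.01, 1, 100_000)    # positive random variable
Y = 1 + X                            # perfectly correlated with X

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print(corr(X, Y))                    # 1.0
print(corr(np.log(X), np.log(Y)))    # strictly less than 1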
Summary: Graphs and Combinatorics 5, 95-106 (1989)
© Springer-Verlag 1989
Legitimate Colorings of Projective Planes
N. Alon 1 and Z. Füredi 2
1 Department of Mathematics, Sackler Faculty of Exact Sciences, Tel Aviv University, Ramat Aviv, Tel Aviv, Israel
2 Mathematical Institute of the Hungarian Academy of Sciences, Budapest, P.O.B. 127, H-1364, Hungary
Abstract. For a projective plane P_n of order n, let χ(P_n) denote the minimum number k, so that there is a coloring of the points of P_n in k colors such that no two distinct lines contain precisely the same number of points of each color. Answering a question of A. Rosa, we show that for all sufficiently large n, 5 < χ(P_n) < 8 for every projective plane P_n of order n.
1. Introduction
Let P = Pn = (P, ~) be a projective plane of order n, with a set of points P and a
set of lines ~. As is well known, P has n2 + n + 1 points and n2 + n + 1 lines with
n + 1 points on every line. A g-colorin9 of P is a function f from P to the set
{1, 2..... X}, which may also be viewed as the (ordered)x-partition (P1, P2 ..... Px)
of P defined by Pi = f-l(i). Let C be a X-coloring of P, corresponding to the | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/942/1390763.html","timestamp":"2014-04-20T14:19:20Z","content_type":null,"content_length":"8298","record_id":"<urn:uuid:1fec5b8c-518e-42d0-80b1-4a5b5f55d965>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00013-ip-10-147-4-33.ec2.internal.warc.gz"} |
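To make the definition of a legitimate coloring concrete, here is a small sketch of my own (not from the paper): it tests whether a given coloring of the Fano plane, the projective plane of order 2 built from the difference set {0, 1, 3} mod 7, is legitimate in the sense above.

```python
from collections import Counter

# Fano plane: points 0..6, lines {i, i+1, i+3} (mod 7).
LINES = [frozenset((i + d) % 7 for d in (0, 1, 3)) for i in range(7)]

def is_legitimate(coloring):
    """True if no two distinct lines contain precisely the same
    number of points of each color."""
    profiles = [tuple(sorted(Counter(coloring[p] for p in line).items()))
                for line in LINES]
    return len(set(profiles)) == len(profiles)

coloring = {0: 1, 1: 1, 2: 2, 3: 2, 4: 3, 5: 3, 6: 3}  # a hypothetical 3-coloring
print(is_legitimate(coloring))  # False: lines {0,4,5} and {1,5,6} have the same color counts
```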
What's a Group?
Copyright © University of Cambridge. All rights reserved.
We received a good international response to this classic problem: Patrick, from Woodbridge School and Aurimas from Chatham Grammar School for Boys in England; Neil from Kensal Park in Canada;
Jungsun and Junho from Nanjing International School in China.
Each adopted a slightly different way of viewing the problem and it might be interesting to compare approaches: for this reason we include all correct responses in full!
Parts E, F and G caused the most trouble with a few incorrect solutions concerning identities and inverses; overall, however, a full answer was constructed by all respondents collectively. Well done!
Solutions are as follows:
Patrick adopted a classic minimalist approach of a mathematician by realising that a single counterexample was sufficient to show that something was not a group. He uses the notation != to mean 'not equal to' and == to mean 'is equivalent to'. Of course, this hides any thought which went into creating these counterexamples!
Part a) The set of natural numbers with subtraction is not a group, since $3-7 =-4$ which is not in the set, so the property of closure is not satisfied.
Part b) The set of positive rationals with division is not a group, since taking $a=1, b=2, c=3$ we have $(1/2)/3 = 1/6 != 1/(2/3) = 3/2$, so the property of associativity is not satisfied.
Part c) The set of natural numbers with multiplication is not a group, since there is no inverse of 2: The identity is $1$, so $2*x = x*2 = 1$, where $x$ is the inverse. $2x = 1$ implies $x = 1/2$
which is not in the set of natural numbers.
Part d) The set of positive even integers with multiplication is not a group, since the identity does not exist: there is no even number by which $4$ can be multiplied to give $4$, because $4x = 4 \Rightarrow x = 1$, which is not a member of the even natural numbers.
Part e) $m*n = m+n+1$; find the identity and inverse of $m$. The identity is $e$ such that $m*e == e*m == m$; by definition $m*e == m+e+1$, so we need $m+e+1 == m$, i.e. $e+1 == 0$, giving $e == -1$. For the inverse $m'$ we need $m*m' == m+m'+1 == -1$, giving $m' == -(m+2)$.
Aurimas followed a similar style to Patrick:
Part a:The given set with an operation of subtraction does not satisfy the first condition( did not even consider the other ones, as Closure is not satisfied). Let $a = 3$ and $b = 5$, but then $3-5=
-2$ which is not a natural number, and is not in the group, therefore it is not a group
Part b: This set does not satisfy ASSOCIATIVITY. Let $a = 2$, $b = 4$, $c = 6$; with multiplication $(a*b)*c = a*(b*c) = 48$, but when division comes in this condition fails, e.g. $(a/b)/c = 1/12$ is not equal to $a/(b/c) = 3$, and therefore it is not a group.
Neil realised for part G that an standard arithmetical equation could be solved to find identities and inverses:
Part g: To find the identity, we set up an equation: $xy+x+y=x$. Subtracting $x$ from both sides, we get $xy+y=0$, i.e. $y(x+1)=0$, so $y=0$. Hence, the identity is $0$. To find $x$'s inverse, we set up another equation: $xy+x+y=0$. Then, $y(x+1)=-x$, so $y=-x/(x+1)$. Therefore, $x$'s inverse is $-x/(x+1)$.
Jungsun adopted a more expansive approach in which ways to alter the structure of the groups as given so that the various axioms might be satisfied were also considered along with counterexamples.
Jungsun also looked at which axioms did work and why that was the case. We really liked this: if something doesn't quite work mathematicians often work very hard to understand how something MIGHT be
made to work. Mathematics is as much about exploration of mathematical structures as it is about answering specific questions. Well done!
PART a) For the first property, closure, for all positive integers $a$ and $b$ in the group, the element of $a-b$ should be also in the group. However, if $a$ is smaller than $b$, the value of $a-b$
is negative and doesn't satisfy the property. On the other hand, for all integers $a$ and $b$, the element of $a+b$ is always an integer which satisfies the property.
For the second property, associativity, for all positive integers, the element $(a - b) - c$ should equal $a - (b - c)$. However, if $a=1, b=3$, and $c=7$, then $(a-b)-c = (1-3)-7 = -9$ while $a-(b-c) = 1-(3-7) = 5$, which violates the property. But for all integers, $(a + b) + c = a + (b + c)$ definitely works.
For the third property, identity, for all positive integers $a$ and $e$, $a - e = e - a = a$ should work. To satisfy it, $e$ should be $0$, but since it is not a positive integer, the property is not
fulfilled. But for all integers, since $0$ is included in an integer group, $a + e = e + a = a$ works.
For the fourth property, inverses, for all positive integers $a$ and $a'$, $a - a' = a' - a = e$ should work. However, since the third property is not satisfied, there is no $e$, so the fourth property cannot even be checked. For all integers, however, $e$ exists and $a + a' = a' + a = 0$ works when $a' = -a$.
PART b) For the first property, closure, for all positive rational numbers $a$ and $b$ in the group, the element $a/b$ should also be in the group, and in fact it is: the quotient of two positive rationals (e.g. $1/3$ for $a=1$, $b=3$) is again a positive rational, so division does satisfy closure. Likewise, for all positive rational numbers $a$ and $b$, the element $a*b$ is always a positive rational number, which satisfies the property.
For the second property, associativity, for all positive rational numbers, the element $(a/b)/c$ should equal $a/(b/c)$. However, if $a = 1, b = 2$, and $c = 10$, then $(a/b)/c = (1/2)/10 = 0.05$ while $a/(b/c) = 1/(2/10) = 5$, which violates the property. But for all positive rational numbers, $(a*b)*c = a*(b*c)$ definitely works.
For the third property, identity, for all positive rational numbers $a$ and $e$, $a/e = e/a = a$ should work. However, there is no way to satisfy it: $a/e = a$ forces $e = 1$, but then $e/a = 1/a$, which is not $a$ in general. But for all positive rational numbers, $a*e = e*a = a$ works when $e=1$.
For the fourth property, inverses, multiplication works: $a*a' = 1$ when $a' = 1/a$, which is again a positive rational. (For division the question does not arise, since there is no identity element.)
PART c) For the first property, $a*b$ works for all positive integers.
For the second property, $(a*b)*c = a*(b*c)$ also works for all positive integers.
For the third property, identity, for all positive integers $a$ and $e$, $a*e =e*a = a$ works when $e=1$.
However, for the fourth property, inverses, for all positive integers $a$ and $a'$, $a*a' = a'*a = 1$ should work. It would require $a' = 1/a$, but $1/a$ is not a positive integer unless $a = 1$.
So the set of positive integers with the operation of multiplication doesn't form a group.
PART d) For the first property, $a*b$ works for all positive even integers. For the second property, $(a*b)*c = a*(b*c)$ also works for all positive even integers.
For the third property, identity, for all positive even integers $a$ and $e$, $a*e = e*a = a$ works when $e=1$, but since $1$ is not a positive even integer, it doesn't work.
Since the third property doesn't work, the fourth one doesn't work as well.
So the set of positive even integers with the operation of multiplication doesn't form a group.
PART f) For all integers, $m*n = m+(-1)^m n$ works. For the identity element, $m*e = e*m = m+(-1)^m e = m$, so $e = 0$. For the inverse element, $m*m' = m'*m = e$, which is $0$, so $m+(-1)^m m'$ should equal $0$. When $m$ is an odd integer, $m'=m$, while when $m$ is an even integer, $m'=-m$. In brief, the identity element is $0$; when $m$ is odd the inverse of $m$ is $m$ itself, and when $m$ is even it is $-m$.
PART g) For all real numbers excluding the number $-1$, $x*y=xy+x+y$ works. For the identity element, $x*e=e*x=xe+x+e=x$. So $xe+e=0$. To satisfy $e(x+1)=0$ for all real numbers $x$, e should be $0$.
For the inverse element of element $x$, $x*x'=x'*x=e$ which is $0$. So $xx'+x+x'=0$ and $x'(x+1)=-x$. As a result, $x'=(-x)/(x+1)$. In brief, the identity element is $0$ and the inverse element of
element $x$ is $(-x)/(x+1)$.
Junho laid out the solution to parts e, f and g very clearly:
This problem was also published several years ago: the solutions received the first time around are shown here:
Good solutions to 'What's A Group' came from Curt from Reigate College and Andrei from Tudor Vianu National College, Bucharest, Romania.
The set of natural numbers with the operation of subtraction
does not form a group because the following properties of a group are not present:
Closure: If $a$ and $b$ are natural numbers and $a< b$, then $(a - b)$ is not a natural number.
Associativity: $(a - b) - c \neq a - (b - c)$, e.g. $(20-12)-5\neq 20 - (12-5)$.
Identity: under addition the identity would be $e = 0$, since $a + 0 = 0 + a = a$.
Inverses: under addition we would need $a + (-a) = (-a) + a = 0$, but if $a$ is a natural number, $(-a)$ is not a natural number.
Rational numbers with multiplication
A rational number, by definition, is expressible as $m/n$ where $m, n$ are integers and $n\neq 0$. With the operation of multiplication, the following occurs: $m/n * t/y = mt/ny$. Once again, both
the numerator and denominator are integers (also $n, y \neq 0$), hence closure is achieved.
When one investigates division, the following may occur. Let t=0 (it is a numerator so this integer is allowed) $$(m/n)/(t/y) = m/n * y/t = (my)/(nt).$$ But now the denominator is equal to 0. Any
real number divided by 0 is not defined within the set of rational numbers. Therefore, condition (1) is contravened; closure is not achieved.
Also division is not associative e.g. $(100/50)/5 \neq 100/(50/5).$
Positive integers with multiplication
I check all the properties of the groups for the set of positive integers with the operation of multiplication:
Closure: If $a$ and $b$ are positive integers, then $a * b$ is also a positive integer. True.
Associativity: $(a * b) * c = a * (b * c)$ for all $a, b, c$ in the set. True.
Identity: 1 is a positive integer, and $a * 1 = 1 * a = a$. True (So the identity e = 1).\\ Inverses: $a * (1/a) = (1/a) * a = 1$. But $1/a$ is not always an integer.
So, the set of positive integers with the operation of multiplication does not form a group as it does not contain inverse elements.
The set of all positive even integers with the operation of multiplication
would not form a group because no identity exists in this set: for any real number, the identity element of multiplication is 1, but 1 is not an even integer, and therefore not a member of the set of all even integers. This contravenes condition (3); and with no identity, inverses cannot even be defined.
The set of integers with the operation $*$ defined such that $m*n = m + n + 1$
is a group. The identity $e$, is defined as being an element such that $m*e= m$ or (as all the operations that make up $*$ are not affected by order) $e*m = m$. Therefore $m*e = m = m + e + 1$ which
implies $e + 1 = 0$ and so $e = -1$.
The inverse element, $m^{-1}$, has the property that $m*m^{-1} = e = -1 = m + m^{-1} + 1$ an so $m^{-1}$, the inverse of $m$, is given by $m^{-1}= -(m+2).$
The set of integers with the operation $*$ defined by $m*n = m + (-1)^m n $
is a group with the following properties:
Closure: If $m$ and $n$ are integers, then $m * n$ is also an integer. True: $m*n=m + n$ when $m$ is even and $m - n$ when $m$ is odd.
Associativity: $(m * n) * p = m * (n * p)$. True.
Identity: $m *e = m + (-1)^m e = m$ for all integer $m$ implies $e = 0$ and also $0 * m = 0 + (-1)^0 m = m$, so the identity element is 0.
Inverses. One must work separately for even and odd elements. If $m$ is even ($m =2p$ where $p$ is an integer) then $m * m^{-1} = 0$ implies $ m + m^{-1} = 0$ and so $m^{-1} = -m.$ Similarly $-m * m
= 0$ which verifies that the inverse is $-m$. If $m$ is odd then the inverse of $m$ is $m$.
The inverse of $m$ is given for both cases by $m^{-1} = (-1)^{m+1}m$.
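These identity and inverse formulas are easy to spot-check by machine. Here is a small sketch of my own that brute-forces the axioms over a finite range of integers (a check, of course, not a proof):

```python
def sign(m):
    return -1 if m % 2 else 1       # (-1)^m, safe for negative m in Python

def op(m, n):                       # m * n = m + (-1)^m n
    return m + sign(m) * n

R = range(-20, 21)
assert all(op(op(m, n), p) == op(m, op(n, p)) for m in R for n in R for p in R)
assert all(op(m, 0) == m == op(0, m) for m in R)                        # identity e = 0
assert all(op(m, -sign(m) * m) == 0 == op(-sign(m) * m, m) for m in R)  # inverse (-1)^(m+1) m
print("group axioms hold on the sampled range")
```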
The set of all real numbers excluding only the number -1 together with the operation $x*y = xy + x + y$
is a group.
The identity $e$ is such that $x*e = x = xe + x + e$, and so $xe = -e$ for every $x$. Therefore $e=0$ is the identity.
If $x*x^{-1} = xx^{-1} + x + x^{-1} = 0$ then $x^{-1}(x+1) = -x$ and so the inverse of $x$ is given by $x^{-1} = -x/(x+1)$ (hence $x\neq -1$, or else $x^{-1}$ would be undefined, which would contravene the properties of the group).
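This last operation can be spot-checked the same way (again my own sketch, with a tolerance in place of exact equality); the comment notes a tidy way to see why the group exists at all:

```python
import random

def op(x, y):
    # x*y + x + y = (x + 1)(y + 1) - 1, so * is ordinary multiplication
    # transported along the shift x -> x + 1; that is why R \ {-1} is a group.
    return x * y + x + y

def inv(x):
    return -x / (x + 1.0)           # the inverse derived above; needs x != -1

random.seed(1)
xs = [x for x in (random.uniform(-5.0, 5.0) for _ in range(300))
      if abs(x + 1.0) > 0.1]        # stay away from the excluded point -1

for x in xs:
    assert abs(op(x, 0.0) - x) < 1e-9 and abs(op(0.0, x) - x) < 1e-9  # identity 0
    assert abs(op(x, inv(x))) < 1e-9                                  # x * x^{-1} = 0
print("identity and inverses check out numerically")
```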
Physics 230A - Quantum Field Theory I
Spring 2005, Tue and Thu, 9:30 - 11:00 a.m., 430 Birge Hall
Discussions: Fri, 2:10 - 3:00 p.m. (originally in 385 Le Conte Hall; effective immediately, relocated to the Oppenheimer Room, 4th floor Birge)
(Sometimes, by announcement, the discussion will be moved to Thu, 4:10 p.m., Oppenheimer room.)
Instructor: Petr Hořava (email: horava@socrates.berkeley.edu)
Office: 441 Birge (usually on Tue and Thu); 50A-5107 LBNL (usually on Mon, Wed, Fri)
Homework assignments will be posted here every Thursday by noon (or later :-), and they will be due in class in seven days. They will be graded on a crude scale, + or - (+ for reasonable effort, -
for not turning the homework in or for no visible effort). Unless stated otherwise, the problems are from A. Zee's book.
Week 1: No official homework; instead, an Opening Quiz has been handed out.
Week 2: Problems I.3.1, I.3.2, I.4.1, I.5.1 (due Thu, Feb 3).
Week 3: Problems I.7.1, I.7.2, I.7.3, I.7.4, I.8.1, I.8.3, I.8.5 (due Thu, Feb 10).
Week 4: Problems I.9.1, I.9.3, I.9.4, I.10.2, I.10.3, I.10.4 (due Thu, Feb 17).
Week 5: Problems II.1.1, II.1.2, II.1.3, II.1.8, II.1.11, II.2.1, II.4.1 (due Thu, Feb 24).
Week 6: From the perspective of Zee's book, this week we were hopping around and effectively sampling different chapters. As a result, the homework problems are from several different parts of the
book. Here they are: II.3.2, II.3.4, II.5.1, III.5.1, III.5.2, V.5.1 (due Thu, March 3).
Week 7: Just two problems: V.6.1, V.6.2 (due Thu, March 10).
Week 8: Problems IV.1.2, IV.4.3, IV.4.4, IV.5.1, IV.5.2, IV.5.4 (due Thu, March 17).
Week 9: This week's problems can be found here. In addition to the three problems there, you are also required to read Chapter IV.6 of Zee (on the Higgs mechanism). The three problems are due Thu,
March 31. Any questions about the Higgs mechanism will be addressed in discussion in April.
Week 10: Spring break.
Week 11: This week's discussion (on April 1) has been cancelled, and there is no homework this week. Next week's discussion will be held on Thu, April 7, 4:10pm in the Oppenheimer room; there is no
discussion on Fri, April 8.
Week 12: Problems VI.1.1, VI.1.2, VI.1.4, VI.6.2, VI.6.3, VII.1.1, VII.1.4 (due Thu, April 14).
Week 13: Problems VI.1.7, VI.7.1, VI.8.3, VII.4.1, VII.4.4 (this problem is related to the material to be covered in class on Tue, April 19) -- (due Thu, April 21).
Week 14: Problems VII.4.5 and VII.4.6 (due Thu, April 28); in addition, as a part of this week's assignment, read as much as you can of J. Maldacena's TASI 2003 lectures on the AdS/CFT
correspondence, hep-th/0309246; our discussion session on Fri, April 29 will be (primarily) on AdS/CFT.
Week 15: Problems IV.3.3, IV.3.4, III.1.3, III.3.1, III.3.2, III.3.4. This is the final official homework assignment of the semester (due Thu, May 5).
Week 16: I was originally planning to post an official final take-home exam here. However, I have decided that there will be no official take-home exam. I am happy with how hard the students have
worked throughout the semester, on all those homework assignments. I feel that I have enough information to assign grades solely on the homework performance (plus the students' activity in class and
in discussions). At this point, the spectrum of grades will range from the best grade of A+, to the worst grade of A-. If somebody feels that they might fall into the A- category and are unhappy
about it, they can still convince me that they deserve an A+, by solving the otherwise voluntary problem below. (If you intend to do so, please let me know by May 11.)
So, instead of a mandatory final exam, I offer the following problem, to those who wish to test their understanding of the material of the course:
Voluntary problem. We all know that in four spacetime dimensions, the coupling constant of the scalar field theory with the (phi)^4 self-interaction runs, and the theory is driven to strong
coupling at large energies. Consider now the scalar field theory in 5+1 spacetime dimensions, with the (phi)^3 self-interaction. Show -- in the lowest order in the perturbation theory in the
small coupling constant -- that the coupling constant of this (phi)^3 theory also runs, but the theory is now asymptotically free. [Hint: This is not as simple as it may sound -- you have to take
into account the wave-function renormalization of the scalar field.]
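(Not part of the assignment, but a quick classical dimension count shows why six dimensions is the natural setting here: with $\hbar = c = 1$ the kinetic term fixes $[\phi] = (d-2)/2$, so the coupling in $\int d^d x\, \frac{g}{3!}\phi^3$ carries $[g] = d - 3(d-2)/2 = (6-d)/2$, which vanishes precisely at $d = 6$. There $g$ is classically marginal and can run logarithmically, just as the (phi)^4 coupling does in four dimensions.)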
One final piece of information: Those of you who would like to further practice their understanding of the material covered in class during Week 16 could consider problems IV.7.1, IV.7.4, III.7.1,
VI.8.1 and VI.8.7 (this somewhat tersely defined problem is explained in more detail in the body of the chapter, and in the solutions at the end of the book).
Week 17: Our final class is scheduled for Tue, May 10, 9:40-11am, in 430 Birge as usual. For the rest of the week of May 9-13, I recommend that you attend as many talks of our math/physics workshop
on matrix models as you can. Thanks for a great semester, I really enjoyed it! I hope to see you in the Fall semester in my Phys-250 class on AdS/CFT correspondence!
The course is designed as a logical continuation of the material covered in 229A, with three major themes: the path integral method, renormalization in quantum field theory, and quantization of
Yang-Mills theories. My intention is to keep some balance between the techniques of and conceptual insight into (both perturbative and non-perturbative) quantum field theory in general.
In this course, we will primarily use two textbooks:
A. Zee, Quantum Field Theory in a Nutshell
M.E. Peskin & D.V. Schroeder, An Introduction to Quantum Field Theory. | {"url":"http://www-theory.lbl.gov/~horava/230A-s05.html","timestamp":"2014-04-21T07:40:29Z","content_type":null,"content_length":"7569","record_id":"<urn:uuid:4043130e-e863-4f44-ae65-afedaeb09d2c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Pass the GED Math Test
July 15, 2011
The math section of the GED is the part of the exam that the majority of people see as the most difficult. Developing a good knowledge of the mathematical concepts that the exam is assessing you for
is, obviously, the best way to gain a high score on the math section of the GED. Still, there are certain things you should be aware of that, if your familiarity with the mathematical concepts is
still not perfect, might help you successfully pass the math section of the GED. Even if you are already very confident using all of the math skills you will need on the test, being aware of the tips
below just might help you get an even higher score on the math component of the GED, which would raise your all round score and be to your benefit if there is a different component of the test that
you are weak on.
Become Acquainted with the Calculator
For one half of the test you will have the ability to use a calculator. It is crucial you know how to use every one of the helpful capabilities of this calculator to get the full use from it. If you
don’t know how to work with fractions on the calculator, for instance, you’re depriving yourself of an advantage that the test administrators are making available to you.
The calculator is a scientific calculator, in particular a Casio fx-260, and has much more capabilities than a standard calculator. You do not need to know how to use all of these characteristics,
but you ought to understand the way to make use of the calculator for fractions, exponents, square roots and trigonometric functions, in addition to, of course, the basic functions of addition,
subtraction, multiplication and division.
The best way to get familiar with the calculator is to use it while you are getting ready for the GED. Whenever you review a subject as you are studying, for instance reducing fractions,
ensure you can conduct procedures on your calculator as well as manually. In addition, any time you take a practice GED math test, use the calculator on the part of the test that permits calculator
Understand When to Make use of Formulas
Throughout the test you will have access to a sheet of paper that details all of the formulas that will be helpful to you in answering GED math questions. Take care not to use this as a crutch,
though. The page doesn’t tell you which kinds of problems to use every formula on, which is crucial. To know this you need to study the mathematical concepts and by the time you fully understand them
all, you will probably have the formulas memorized anyways.
Use Time Wisely
You will have 45 minutes to complete each part of the GED math test, which is 90 minutes total. To make certain that you have sufficient time to respond to each problem as well as you can, it is
essential to use good time management skills. When you begin to work on every new problem, quickly look over what it is expecting you to do. If it is something that seems especially difficult and you
have no idea how to deal with it, skip it, being sure to make note of which questions you skipped. This will ensure that there is enough time to respond to all the questions that are not difficult or
slightly difficult for you.
If you have adequate time after responding to the less challenging questions, return to those you passed over and do the best you can do to answer them. At the very least you can just guess. If it is
a multiple choice and not a bubble-in question, one method is to look at all of the possibilities and try each one to see if you can work in reverse from the answer to the question with success.
For more suggestions and explanations of the principles you need to understand to pass the GED math test visit http://www.math4ged.com. Math4GED has in depth explanations for every concept that you
could be tested on the GED math test.
Algebra/Solving equations
From Wikibooks, open books for an open world
Quadratic equations
Up to now you have only dealt with equations and expressions involving just x; in this section we'll move onto solving things which have $x^2$ in them.
All quadratic equations can be arranged in the form $ax^2+bx+c=0\ (a \neq 0)$, where $a, b, c$ are all constants. Now let's look at some examples:
Examples: Rearrange the following equations in the form $ax^2+bx+c=0$:
$(1)\qquad x(x-3)=3-5x$
Solution for (1):
$x^2 - 3x = 3 - 5x$
$x^2 + 2x = 3$
$x^2 + 2x - 3 = 0$
Note that in the first step you distributed the $x$ on the left side of the equation. The second step was obtained by adding $5x$ to both sides of the equation, and the third by subtracting $3$ from both sides.
$(2)\qquad 2x+1=(x^2+2)\sqrt{3}$
Solution for (2):
$\begin{matrix} 2x+1&=&x^2\sqrt{3}+2\sqrt{3}\\ -x^2\sqrt{3}+2x+1-2\sqrt{3}&=&0\\ x^2\sqrt{3}-2x-1+2\sqrt{3}&=&0 \end{matrix}$
Note that in the last step, both sides are multiplied by $-1$ to make the term $-x^2\sqrt{3}$ positive, so that the equation is easier to solve.
Factorization is the most common way to solve quadratic equations. Let us consider again the first example above: $x(x-3)=3-5x$. We have already simplified the equation into
Now, we want to factorize the equation - that is to say, get it into a form such as:
$(x+\text{something})(x+\text{something else})=0$
Look at the number term c. In this example, it is -3. Now, if we are lucky, the numbers "something" and "something else" will turn out to be nice whole numbers, so let's think of two numbers that
will multiply together to give -3. Either 3 and -1, or -3 and 1. But we also need to get the x term correct (here, b=2). In fact, we need our two factors of c to add together to make b. And (3)+(-1)=
2. So, we have found our 'somethings': they are 3 and -1. Let's fill them in.
Just to check, we can multiply out the brackets to check we have what we started with:
$\begin{matrix} (x+3)(x-1)=0\\ x^2 +3x -1x -3=0\\ x^2 +2x -3=0 \end{matrix}$
Now, we know that in an equation the left side is always equal to the right side. And in this case the right side of the equation is 0, so from that we can conclude the term $(x+3)(x-1)$ must equal
to zero as well. And that means that either $(x+3)$ or $(x-1)$ must equal zero. (Not convinced? Remember $(x+3)$ and $(x-1)$ are just numbers. Can you find two non-zero numbers which multiply to make zero?)
Let's write that algebraically:
$\begin{matrix} x^2 +2x -3=0\\ x+3=0 \qquad \mbox{or} \qquad x-1=0\\ x=-3 \qquad \mbox{or} \qquad x=1 \end{matrix}$
Thus, there are two different solutions to the same equation! We say that this quadratic equation has two distinct and real roots. (As the discriminant below will show, the roots of a quadratic may instead coincide or be complex.)
With practice, you will often be able to write down the equation in factorised form almost immediately. Here is another example, in this case the x easily factorises out:
$\begin{matrix} 2x^2&=9x\\ 2x^2-9x&=0\\ x(2x-9)&=0\\ x=0&\qquad \mbox{or} \qquad x=\frac{9}{2} \end{matrix}$
Completing the square
Sometimes the roots (solutions) of a quadratic equation cannot be easily obtained by factorisation. In such cases, we have to solve the equation by completing the square, or using the quadratic
formula (see below).
In order to complete the square, we need to rewrite the given equation in the form $(x+a)^2=b$. Now here is an example:
$\begin{matrix} x^2+8x+9&=&0\\ x^2+8x&=&-9\\ x^2+8x+4^2&=&4^2-9\\ (x+4)^2&=&7\\ &x+4=\sqrt{7}& \quad \mbox{or} \quad x+4=-\sqrt{7}\\ &x=-4+\sqrt{7}& \quad \mbox{or} \quad x=-4-\sqrt{7}\\ &x=-1.35
& \quad \mbox{or} \quad x=-6.65 \end{matrix}$
In general, we get
$x^2+kx+\left({k \over 2}\right)^2=\left(x+{k \over 2}\right)^2$
Note that when we reach the stage of taking the square root of both sides of the equation, we might have a negative left-hand side. In this case, the roots will be complex. If you have not yet
learned about complex numbers, it is possible to simply state that the equation "has no real roots".
Quadratic Formula
The quadratic formula is completing the square carried out once and for all on the general equation; it allows the two roots of any quadratic equation to be obtained by simple substitution, and it is very quick to work out on a calculator.
$ax^2 + bx = -c \Rightarrow x^2 + \frac {b}{a}x = \frac {-c}{a}$
Complete the square:
$x^2 + \frac {b}{a}x + \left(\frac {b}{2a}\right)^2 = \frac {-c}{a} + \left(\frac {b}{2a}\right)^2$
$\left(x + \frac {b}{2a} \right )^2 = \frac {-c}{a} + \left(\frac {b}{2a}\right)^2$
$x + \frac {b}{2a} = \pm \sqrt {\frac {b^2}{4a^2} + \frac {-c}{a}}$
$x + \frac {b}{2a} = \pm \sqrt {\frac {b^2}{4a^2} + \frac {-4ac}{4a^2}}$
$x = \frac {-b}{2a} \pm \sqrt {\frac {b^2-4ac}{4a^2}}$
$x = \frac {-b}{2a} \pm \frac {\sqrt {b^2-4ac}}{2a}$
$x = \frac {-b \pm \sqrt {b^2-4ac}}{2a}$
which is the desired form of the quadratic formula.
Hence, given that a quadratic is in the form $ax^2+bx+c=0$, the two roots are:
$x={-b \pm \sqrt{b^2-4ac} \over 2a}$
The quantity $b^2-4ac$ in the equation, known as the discriminant, is an indication of the solubility and nature of the roots (a code sketch illustrating the three cases follows the list):
• discriminant is positive -- soluble over R, real roots
• discriminant is zero -- soluble over R, real repeated (single) roots
• discriminant is negative -- insoluble over R (yet soluble over C), no real roots
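As a quick illustration, the formula and the three discriminant cases translate directly into code (a minimal sketch; the function name is mine):

```python
import cmath
import math

def solve_quadratic(a, b, c):
    """Roots of ax^2 + bx + c = 0 (a != 0) via the quadratic formula."""
    if a == 0:
        raise ValueError("not a quadratic: a must be nonzero")
    disc = b * b - 4 * a * c
    if disc > 0:                   # two distinct real roots
        r = math.sqrt(disc)
        return ((-b + r) / (2 * a), (-b - r) / (2 * a))
    if disc == 0:                  # one repeated real root
        return (-b / (2 * a),)
    r = cmath.sqrt(disc)           # negative discriminant: complex conjugate pair
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

print(solve_quadratic(1, 2, -3))   # (1.0, -3.0), the factorisation example
print(solve_quadratic(1, 8, 9))    # (-1.354..., -6.645...), the completing-the-square example
```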
Vieta's Theorem
If the quadratic equation $ax^2+bx+c=0\ (a \neq 0)$ has two real roots $x_1$ and $x_2$, then
$\left\{ \begin{matrix} x_1+x_2&=&-\cfrac{b}{a}\\ x_1x_2&=&\cfrac{c}{a} \end{matrix} \right.$
This is because $x_1=\frac{-b+\sqrt{b^2-4ac}}{2a}$ and $x_2=\frac{-b-\sqrt{b^2-4ac}}{2a}$. By simply adding or multiplying the two roots we will get the above two equations. This is called Vieta's Theorem.
Using Vieta's Theorem we can find the second root of a given quadratic equation without solving the equation.
Example: Given that one of the real roots of the equation $4x^2-13x+10=0$ is 2, find the other root without solving the equation.
$\begin{matrix} x_1 \cdot x_2 &=& \cfrac c a \\ \Rightarrow x_1 \cdot 2 &=& \cfrac{10}{4}\\ x_1&=&\cfrac{5}{4} \end{matrix}$
We can also determine the signs of two roots by applying the following rules:
1. the equation has two positive roots if $\Delta\ge0,\ \frac{c}{a}>0,\ \mbox{and}\ \frac{b}{a}<0$;
2. the equation has two negative roots if $\Delta\ge0,\ \frac{c}{a}>0,\ \mbox{and}\ \frac{b}{a}>0$;
3. the equation has two roots with different signs if $\ \frac{c}{a}<0$
($\Delta$ represents the discriminant of the equation.)
Another problem involving Vieta's Theorem:
Example: For the equation $2x^2+ax+1=0$, given that the sum of squares of roots is $7\frac{1}{4}$, find the value of $a$.
$\begin{matrix} (x_1+x_2)^2-2x_1x_2&=&x_1^2+x_2^2\\ \left(-\cfrac{a}{2}\right)^2-2\cdot\cfrac{1}{2}&=&7\cfrac{1}{4}\\ \cfrac{a^2}{4}&=&8\cfrac{1}{4}\\ a^2&=&33\\ a&=&\pm\sqrt{33} \end{matrix}$
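A quick numerical check of this answer (my own sketch): with $a=\sqrt{33}$, the roots of $2x^2+\sqrt{33}x+1=0$ should indeed have squares summing to $7\frac{1}{4}$.

```python
import math

a = math.sqrt(33)
disc = a * a - 4 * 2 * 1            # a^2 - 8 = 25, so the roots are real
x1 = (-a + math.sqrt(disc)) / 4     # denominator is 2 * 2 = 4
x2 = (-a - math.sqrt(disc)) / 4
print(x1 ** 2 + x2 ** 2)            # 7.25
```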
Solving simultaneous linear and nonlinear equations
In previous chapters you have already learned how to solve simultaneous linear equations. Now we will learn how to solve a system of simultaneous linear and non-linear equations with two unknowns. It
is usually done by substitution method.
Example: Solve the following simultaneous equations:
$\left\{ \begin{matrix} 2x^2+y^2-5xy&=8 & \mbox{------ (1)}\\ y-x&=2 & \mbox{------ (2)} \end{matrix} \right.$
$\begin{matrix} \mbox{Rearrange (2):} & y=x+2 & \mbox{------ (3)}\\ \mbox{Substitute(3) into (1):} & 2x^2+(x+2)^2-5x(x+2)=8\\ & 2x^2+x^2+4x+4-5x^2-10x-8=0\\ & x^2+3x+2=0\\ & (x+2)(x+1)=0\\ & x=-2 \
quad \mbox{or} \quad -1\\ \mbox{Substitute x=-2 back into (3):} & y=-2+2=0\\ \mbox{Substitute x=-1 back into (3):} & y=-1+2=1 \end{matrix}$
∴ x=-1 and y=1, or x=-2 and y=0. | {"url":"http://en.wikibooks.org/wiki/Algebra/Solving_equations","timestamp":"2014-04-18T03:08:06Z","content_type":null,"content_length":"41367","record_id":"<urn:uuid:9c88b973-3903-49dd-97d8-9ddd70f3e4a2>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00081-ip-10-147-4-33.ec2.internal.warc.gz"} |
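The same substitution method is easy to mirror in code (a small self-contained sketch of mine, reusing the quadratic formula):

```python
import math

# Solve 2x^2 + y^2 - 5xy = 8 with y - x = 2. Substituting y = x + 2 gives
# 2x^2 + (x+2)^2 - 5x(x+2) - 8 = -2x^2 - 6x - 4 = 0, i.e. x^2 + 3x + 2 = 0.
A, B, C = 1, 3, 2
r = math.sqrt(B * B - 4 * A * C)
for x in ((-B + r) / (2 * A), (-B - r) / (2 * A)):
    y = x + 2
    assert abs(2 * x * x + y * y - 5 * x * y - 8) < 1e-9  # verify in the original equation
    print((x, y))                                         # (-1.0, 1.0) and (-2.0, 0.0)
```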
Maywood, CA Geometry Tutor
Find a Maywood, CA Geometry Tutor
...My degree in Mathematics is from UCLA, which was a very rigorous course of study. I know how to break math problems down into simple steps. I can analyze what your son or daughter needs to
succeed in math.
14 Subjects: including geometry, Spanish, ESL/ESOL, reading
...Students I worked with have scored higher on their finals and other placement tests. I am very flexible and available weekdays and weekends. I will be a great help for students who require
science classes in their majors or for those who are looking to score high on their entry exams.
11 Subjects: including geometry, chemistry, algebra 1, algebra 2
...I moved to California 7 years ago, but I have been tutoring for 12 years and I have taught almost every course from 7th grade Math to Calculus over the past 9 years in Canada and the United
States. I love tutoring because it gives me a chance to focus on one person at a time and most people just...
11 Subjects: including geometry, physics, algebra 1, GED
...SAT Writing test has one 25 minute essay and 2 multiple choice section that tests grammatical errors in sentences and passages. In the essay section, I focus on making sure that the student is
able to write a coherent and thoughtful essay in 25 minutes. Unless the student is ESL, I've never had...
38 Subjects: including geometry, reading, English, writing
...Don't think " Oh well, it well get better next year." It won't! Fuzzy understanding at this level will be a disaster in future courses. Get help now.
24 Subjects: including geometry, chemistry, English, calculus
Prove whether the set property is always true, always false, or sometimes true or fal
Q: If B is a proper subset of C, then C - A does not equal the empty set. Is this statement always true, always false, or sometimes true and sometimes false? Explain.
I get that if B is a proper subset of C, then there must be at least one element within the set C that doesn't belong in set B. And for C - B to take place, I explained that x must be a member of
C and can't be a member of A. But I really don't know how to tie these relations together. Am I getting hot or cold?
Re: Prove whether the set property is always true, always false, or sometimes true or
And for C - B to take place, I explained that x must be a member of C and can't be a member of A.
You mean, C - A.
Consider the situations when A = C and A = B.
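To spell the hint out with one concrete choice of sets (my own example):

```python
C = {1, 2}
B = {1}        # B is a proper subset of C in both scenarios below

A = C          # scenario 1: A = C
print(C - A)   # set() -- empty, so the claim fails here

A = B          # scenario 2: A = B
print(C - A)   # {2} -- non-empty, so the claim holds here
```

So the property is sometimes true and sometimes false, depending on what A is.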
Algebra 1 Tutors
Elkridge, MD 21075
Providing tutoring in Math, Physics, and Excel
...I focus on concepts, and not just doing homework problems. Let me help you or your son/daughter obtain that foundation that they will need. I have a BS in mechanical engineering, took Algebra 1 & 2, geometry, trigonometry, Calc 1, 2, and 3 AP in high school, earning...
Offering 10 subjects including algebra 1 | {"url":"http://www.wyzant.com/Baltimore_MD_Algebra_1_tutors.aspx","timestamp":"2014-04-24T00:11:32Z","content_type":null,"content_length":"61623","record_id":"<urn:uuid:d2b941d8-2996-4e0a-bb37-54dfa5ee250c>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00339-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATH-221 Real Nbr Sys/Meth for Elem Sch
This course provides prospective elementary school teachers with background needed for teaching elementary mathematics. Both content and methodology relevant to school mathematics are considered.
Topics covered include the real number system and its sub-systems. Pedagogical issues addressed include the nature of mathematics and of mathematics learning and the role of problem solving and the
impact of technology in the elementary school mathematics curriculum. Prerequisites: Education 102. This course meets the mathematics core requirement.
neuroscientist says PSAT score related to simple arithmetic skills
I've never taken a SAT either. I'm guessing it has to do with the fact that people who use the quantity part of their brain are looking for numbers to plug into equations while people using the
fact-driven part of their brain are trying to solve the problem.
I think it's the opposite. The people who accessed the left hemisphere area he mentioned were simply recalling the previously learned fact that 7-3=4, while those who accessed the right brain area
that processes matters of quantity were the ones who were actually trying to figure out the difference between the two quantities, 7 and 3 from scratch. The latter is much more time and energy
It sounds to me like the moral is: the people who do better on the PSAT would be those who don't have to re-invent the wheel each time a wheel is called for. They just pull one off the shelf. | {"url":"http://www.physicsforums.com/showthread.php?p=4222577","timestamp":"2014-04-19T07:32:28Z","content_type":null,"content_length":"36678","record_id":"<urn:uuid:db19f4a6-c7a9-42ea-87c1-30addf5012b4>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00223-ip-10-147-4-33.ec2.internal.warc.gz"} |
PIRSA - Perimeter Institute Recorded Seminar Archive
A real ensemble interpretation of quantum mechanics
Abstract: A new ensemble interpretation of quantum mechanics is proposed according to which the ensemble associated to a quantum state really exists: it is the ensemble of all the systems in the same
quantum state in the universe. Individual systems within the ensemble have microscopic states, described by beables. The probabilities of quantum theory turn out to be just ordinary relative
frequencies probabilities in these ensembles. Laws for the evolution of the beables of individual systems are given such that their ensemble relative frequencies evolve in a way that reproduces the
predictions of quantum mechanics. These laws are highly non-local and involve a new kind of interaction between the members of an ensemble that define a quantum state. These include a stochastic
process by which individual systems copy the beables of other systems in the ensembles of which they are a member. The probabilities for these copy processes do not depend on where the systems are in
space, but do depend on the distribution of beables in the ensemble. Macroscopic systems then are distinguished by being large and complex enough that they have no copies in the universe. They then
cannot evolve by the copy law, and hence do not evolve stochastically according to quantum dynamics. This implies novel departures from quantum mechanics for systems in quantum states that can be
expected to have few copies in the universe. At the same time, we are able to argue that the centre of masses of large macroscopic systems do satisfy Newton's laws.
Date: 03/05/2011 - 4:00 pm | {"url":"http://pirsa.org/11050022","timestamp":"2014-04-20T16:14:08Z","content_type":null,"content_length":"9587","record_id":"<urn:uuid:b6ba8c28-4f66-4716-acf1-9f87f3a38e6a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
Practical Group Representation Theory
Practical Group Representation Theory
Hi Math Forum,
I'm having trouble with group representation theory - I've read a few books about it (Michael Artin's Algebra and J-P Serre's Linear Representations of Finite Groups) but they don't seem to explain how to actually solve a representation problem - just definitions and properties.
Does anyone know how to find a representation? Is there a clear method of finding one?
seem to be a simple question encountered
the original question is
In a shop, the selling price of three items are $49.3 $54.9 and $82.2
(a) By rounding up the selling price of each item to the nearest dollar, estimate the total selling price of the three items.
It's easy. No problem. Total = 49 + 55 + 82 = $186
(b) A person has $190. Will he have enough money to buy the three items? Use the result of (a) to explain your answer.
That is where the problem comes in. At first glance the answer should be $190 - $186 = $4, and so "YES, enough".
But in fact, some of the prices in (a) were rounded down, so it is possible that the money is not enough, because the items will not actually be sold at the rounded prices. So why does using the result of (a) to explain the answer of (b) make sense?
Thank you.
Re: seem to be a simple question encountered
The question said round UP each price.
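Spelling that out (my arithmetic, following the reply above): rounding up gives 50 + 55 + 83 = $188. Since each rounded-up price is at least the actual price, the true total 49.3 + 54.9 + 82.2 = $186.4 can never exceed $188, and 188 is at most 190. That is why part (a), done by rounding up rather than to the nearest dollar, is exactly what justifies the "yes" in part (b).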
Re: [usa-tesla] Re: Ohms Law Value at Series Resonance?
People who can do documentation are a vital link in the development of inventions. Since so many of our best inventors have no funding at all, we run into this situation too often. I've had the
same problem as everyone else with this particular work -- looking at the pure text obviously isn't doing it justice.
If Mr. Norris happens to be anywhere around the places i travel, i'd be willing to stop by and take some pictures, write up some stuff, and get it out there. PESwiki would also be interested.
I live in Loveland, Colorado and regularly go all up and down the Front Range from Cheyenne to Denver, and by extension can materialize anywhere else in Colorado.
Later this month i'm planning on traveling to Albany New York, which means i'd be able to work in the Mohawk and Hudson Valleys, New York City, and New Jersey. Then am planning to spend a few
days in Philadelphia working on the possible Tesla Days conference before returning here.
Most of my cross-country travel is on Amtrak so any place along a rail line could become a stop. Since this appears to be a sort of emergency, i'd just ask that somehow a place to stay & meals be
provided while i'm on the project.
-----Original Message-----
From: ED
Sent: Apr 26, 2012 6:43 PM
To: usa-tesla@yahoogroups.com
Subject: Re: [usa-tesla] Re: Ohms Law Value at Series Resonance?
None of this makes any sense at all without a diagram showing the actual configuration, the measurement points, and the measurements and predicted values for each point. Try that and we
might have some sensible comments.
Harvey D Norris wrote:
--- In usa-tesla@yahoogroups.com, "McGalliard, Frederick B" <frederick.b.mcgalliard@...> wrote:
> Harvey. You are strongly overstating the dif between a freshman EE class, and a grad student level evaluation of a range of real applications. The freshman uses simple coil and
capacitor models and does his lab demo with components that fall in the range where all the little idiosyncrasies do not apply. In fact, as all skilled and experienced EEs, and even
some physicists, know, inductors and capacitors typically have a well behaved nearly ideal range of behavior,
If we took a single coil and then air-core coupled it with another coil by mutual inductance, the inductive reactance of the first coil would be reduced. If we then used that lowered reactance and gave it an identical capacitive reactance, the current would never be able to reach the ohms-law value expected for the single coil. The Q factor of that coil could not reach the X(L)/R ratio. If it did, all of the apparent VI input energy would have been used up (because in these ideal conditions VI = I^2R), and no energy would be left over for the secondary to record any current; if it did, there would be more power out than what went in. If the secondary were made more receptive by it also having a C value in its loop, this would further drive the primary's inductive reactance down again by a smaller margin. If the circuit were retuned again, the same thing would apply and the single inductor would deviate even more from its ideal behavior. However, for just the single inductor without any other receptors in space around it, we are still confronted with the electric field between
the windings, or the internal capacity of the coil. If the series resonance were ideal, ALL of the available electric field created by the series resonant rise of voltage would be in
the capacitor, and none would be left over to manifest itself in the internal capacity of the coil.
I will clarify then the measurements made in http://www.youtube.com/wa
First the total current was measured for two 14 gauge coil spools in isolation and in series @ 2.6 ohms and given an opposing capacitive reactance within 1% of the needed value.
Stopping the video at 1:06 shows those notes where it is indicated that
16.05 volts enables 5.11 A
Only 82.8% of the 6.17 A expected for a true 2.6 ohm load develops. The resonance has not come very close to its ohms-law value at all. This to me is not operating in an ideal range of behavior. When I showed the circuit to my friend, who nit-picks and has an electronics associate degree, he protested that I was not counting the resistance of all the
connecting wires, so I replaced all the capacitive alligator clips with tight 14 gauge wire connections. At 5:20 in the video most of these can be seen, but there would have to be
some 170 ft of 14 gauge wire involved for his protest to be valid. Then he said the circuit wasn't perfectly balanced and the books can't be wrong. This too is invalid because the
ratio X(L)/R is not large, thus we do not have a narrow bandwidth of resonance.
Next the cap bank was shorted to find the impedance of just the inductive side. The variac supplying this voltage, at the low end of its 150 volt range, then showed 18.74 volts enabling 1.67 A, for Z = 11.22 ohms. Subtracting the squares to find the square of X(L) (Z^2 - R^2 = X(L)^2), with the actual 2.6 ohms resistance, gives X(L) = 10.9 ohms.
Lastly the inductive side was shorted to determine X(C).
Notice that the variac supply then rose to its highest value where 19.54 volts enabled 1.78 A, which gives X(C)= 10.97 ohms, within 1 % of the needed value. My electronics friend also
noted the the wireless amperage meter was very accurate in comparison to meters he brought over, and it was very convenient to have both amperage and voltage displays on the same
screen. My actual repeat of these observations on the video was unduly long due to inadequate preparation. I hope I have made my point here. If I had used actual alternator
frequencies (~465Hz) for the demo, the discrepancies between ideal and real behavior would have been vast, as I had mentioned only ~30% of the expected amperage developed in that
Internal capacity must become more predominant at higher frequencies.
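For readers following along, the arithmetic quoted in the post can be reproduced directly (a sketch using only the figures given above):

```python
import math

R = 2.6                        # series resistance of the two coil spools (ohms)

# Resonant run: 16.05 V drives 5.11 A.
ideal = 16.05 / R              # 6.17 A, the ohms-law current for a pure 2.6 ohm load
print(ideal, 5.11 / ideal)     # only ~82.8% of the ideal current develops

# Cap bank shorted: 18.74 V drives 1.67 A.
Z = 18.74 / 1.67               # 11.22 ohms, impedance of the inductive side
XL = math.sqrt(Z * Z - R * R)  # 10.9 ohms
print(Z, XL)

# Inductive side shorted: 19.54 V drives 1.78 A.
XC = 19.54 / 1.78              # 10.97 ohms, within 1% of XL
print(XC)
```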
Sincerely HDN
-- Michael Riversong
Tesla Academy
Fort Collins, Colorado
Performance Assessment for the Workplace: VOLUME II

Generalizability Theory and Military Performance Measurements: I. Individual Performance

Richard J. Shavelson

INTRODUCTION

This paper sketches a statistical theory of the multifaceted sources of error in a behavioral measurement. The theory, generalizability (G) theory (Cronbach et al., 1972), models traditional measurements such as aptitude and achievement tests. It provides estimates of the stability of a measurement ("test-retest" reliability in classical test theory), the consistency of responses to parallel forms of a test ("equivalent-forms" reliability), and the consistency of responses to test items ("internal-consistency" reliability). Each type of classical reliability coefficient defines measurement error somewhat differently. One of G theory's major achievements is that it simultaneously estimates the magnitude of the errors influencing all three classical reliabilities. Hence, we speak of G theory as a theory of the multifaceted sources of error. Performance measurements may contain the same sources of error as traditional pencil-and-paper measurements: instability of responses from one occasion to the next, nonequivalence of supposedly parallel forms of a performance measurement, and heterogeneous subtask responses. And more. Two additional, pernicious sources of error are inaccuracies due to scoring, where observers typically score performance in real time, and inaccuracies

[Author's note: The author gratefully acknowledges helpful and provocative comments provided by Lee Cronbach and the graduate students in his seminar on generalizability theory. The author alone is responsible for the contents of this paper.]
due to unstandardized testing conditions, where performance testing is typically carried out under widely varying laboratory and field conditions.[1] G theory's ability to estimate the magnitude of each of these sources of error, individually and in combinations, enables this theory to model human performance measurement better than any other. The next section provides an example of how generalizability theory can be applied to military job performance measurements, using hypothetical data. The third section presents G theory formally, but with a minimum of technical detail. Key features of the theory are illustrated with concrete numerical examples. The fourth section presents applications of the theory. These applications were chosen to highlight the theory's flexibility in modeling a wide range of measurements. The fifth section concludes the paper by discussing some limitations of the theory.

APPLICATION OF GENERALIZABILITY THEORY TO THE MEASUREMENT OF MILITARY PERFORMANCE

Background

Military decision makers, ideally, seek perfectly reliable measures of individuals' performance in their military occupational specialties.[2] Even with imperfect measures, the decision maker typically treats as interchangeable measures of an individual's performance on one or another representative sample of military occupational specialty tasks (and subtasks) that were carried out at any one of many test stations, on any of a wide range of occasions, as scored by any of a large number of observers. Because he wants to know what the person's performance is like, rather than what he did on one particular moment of observation, he is forced to generalize from a limited sample of behavior to an extremely large universe: the individual's job performance across time, tasks, observers, and settings. This inference is sizable. Generalizability theory provides the statistical apparatus for answering the question: Just how dependable is this measurement-based inference? To estimate dependability, an individual's performance needs to be observed on a sample of tasks/subtasks, on different occasions, at different stations, with different observers. A generalizability study (G study), then, might randomly sample five E-2s,[3] who would perform a set of tasks (and subtasks) on two different occasions, at two different stations, with four

[1] By design, traditional pencil-and-paper tests control for scoring errors by using a multiple-choice format with one correct answer, and testing conditions are standardized by controlling day, time of day, instructions, etc.
[2] "Military occupational specialty" is used generically and applies to Air Force specialties and Navy ratings as well as to Army and Marine Corps military occupational specialties.
[3] Large samples should be used. For illustrative purposes, small samples are more instructive.
observers scoring their performance. An individual would be observed under all possible combinations of these conditions, or a total of 16 times (2 occasions × 2 stations × 4 observers), on the set of tasks/subtasks. If performance is consistent across tasks, occasions, stations, and observers (i.e., if these characteristics of the measurement do not introduce systematic or unsystematic variation in the measurement), the measurement is dependable and the decision maker's ideal has been met. More realistically, however, if the individual's score depends on the particular sample of tasks to which he was assigned, on the particular occasion or station at which the measurement was taken, and/or on the particular observer scoring the performance, the measurement is less than ideally dependable. In this case, interest attaches to determining how to minimize the impact of different sources of measurement error.

Performance Measurement: Operate and Maintain Caliber .38 Revolver

To make this general discussion concrete, an example is in order. One of the Army's military occupational specialty-specific performance measures involves operating and maintaining a caliber .38 revolver. The soldier is told that this task covers the ability to load, reduce a stoppage in, unload, and clean the caliber .38 revolver, and that this will be timed. The score sheet for this measurement is presented in Table 1. Note that there are two measurements taken: time and accuracy. In the G study, suppose that each of five soldiers performed the revolver test four times: on two different occasions (e.g., week 1 and week 2) at two different test stations.[4] The soldiers' performance on each of the three tasks and subtasks (see Table 1) was independently scored by four observers. Also, each task as a whole is independently timed. Hypothetical results of this study are presented in Table 2 for the time measure. Note that time is recorded for each of three tasks and not for individual subtasks (Table 1); hence, subtasks are not shown in Table 2.

Classical Theory Approach

With all the information provided in Table 2, how might classical reliability be calculated? With identical performance measurements taken on

[4] There is good reason to worry about an order effect. This is why "tuning" subjects before they are tested is strongly recommended (e.g., Shavelson, 1985). "Tuning" is familiarizing subjects with the task before they are tested. (If a subject can "fake" the task in a performance test, this means that she can perform it.) Nevertheless, soldiers would be counterbalanced such that half would start at station 1 and half at station 2. Finally, as will be seen, an alternative design with occasions nested within stations might be used.
OCR for page 207
Performance Assessment for the Workplace: VOLUME II TABLE 1 Caliber .38 Revolver Operation and Maintenance Task Score Task Subtask Go No Go Load the weapona (1) Held the revolver forward and down — —
(2) Pressed thumb latch and pushed cylinder out — — (3) Inserted a cartridge into each chamber of the cylinder — — (4) Closed the cylinder — — (5) Performed steps 1-4 in sequence — — Time to load the
weapon _______________ Reduce a stoppageb (6) Recocked weapon — — (7) Attempted to fire weapon — — (8) Performed steps 6-7 in sequence — — Time to reduce stoppage _______________ Unload and clear the
weaponc (9) Held the revolver with muzzle pointed down — — (10) Pressed thumb latch and pushed cylinder out — — (11) Ejected cartridges — — (12) Inspected cylinder to ensure each chamber is clear
(13) Performed steps 6-9 in sequence — — Time to unload and clear the weapon _______________ NOTES: Instructions to soldier: aThis task covers your ability to load the revolver; we will time you.
Begin loading the weapon. bYou must now apply immediate action to reduce a stoppage. Assume that the revolver fails to fire. The hammer is cocked. Begin. cYou must now begin unloading the weapon.
OCR for page 207
Performance Assessment for the Workplace: VOLUME II TABLE 2 Caliber .38 Revolver Operation and Maintenance Task: Time to Complete Tasks Observer Station Occasion Task 1 2 3 4 1 84 85 86 87 82 84 85
85 1 91 92 92 94 83 82 84 85 75 76 78 78 76 76 77 77 75 84 75 76 1 2 83 81 83 81 77 78 76 77 69 70 70 70 94 95 96 97 91 92 93 94 3 99 99 99 99 93 94 94 95 83 83 84 85 * * * 2 80 81 81 82 78 78 81 80
OCR for page 207
Performance Assessment for the Workplace: VOLUME II two occasions, a test-retest reliability can be calculated. By recognizing that tasks are analogous to items on traditional tests, an internal
consistency reliability coefficient can be calculated. A test-retest coefficient is calculated by correlating the soldiers' scores at occasion 1 and occasion 2, after summing over all other information in Table 2. The correlation between scores at the two points in time is .97. If soldiers' performance times are averaged over two occasions to provide a performance time measure, the reliability is .99, following the Spearman-Brown prophecy formula. An internal-consistency coefficient is calculated by averaging, for each task, soldiers' performance times across stations,
occasions, and observers. The soldiers' average task performance times would then be intercorrelated: r(task 1,task 2), r(task 1,task 3), and r(task 2,task 3). The average of the three correlations
would provide the reliability for a single task, and the Spearman-Brown formula could be used to determine the reliability for performance times averaged over the three tasks. The reliability of performance-time measures obtained on a single task is .99, and the reliability of scores averaged across the three tasks is .99.
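For concreteness, here is a minimal sketch of the two classical computations; the score arrays are illustrative stand-ins rather than the exact Table 2 values:

```python
import numpy as np

# Illustrative per-soldier mean times (hypothetical, not the Table 2 data):
# rows = soldiers, columns = occasions.
occ = np.array([[84.0, 83.0],
                [91.0, 90.0],
                [99.0, 97.0],
                [76.0, 77.0],
                [70.0, 69.5]])

# Test-retest reliability: correlate occasion-1 and occasion-2 scores.
r_tt = np.corrcoef(occ[:, 0], occ[:, 1])[0, 1]

def spearman_brown(r_single, k):
    """Reliability of a score averaged over k parallel measurements."""
    return k * r_single / (1.0 + (k - 1.0) * r_single)

print(f"test-retest r = {r_tt:.2f}, "
      f"2-occasion average = {spearman_brown(r_tt, 2):.2f}")

# Internal consistency: average the task-pair correlations, then step up.
tasks = np.array([[84.0, 77.0, 94.0],   # rows = soldiers, columns = tasks
                  [91.0, 83.0, 99.0],
                  [75.0, 69.0, 83.0],
                  [83.0, 76.0, 93.0],
                  [80.0, 78.0, 91.0]])
r_bar = np.mean([np.corrcoef(tasks[:, a], tasks[:, b])[0, 1]
                 for a, b in [(0, 1), (0, 2), (1, 2)]])
print(f"single-task r = {r_bar:.2f}, "
      f"3-task average = {spearman_brown(r_bar, 3):.2f}")
```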
Generalizability Theory Approach

Two limitations of classical theory are readily apparent. The first limitation is that a lot of information in Table 2 is ignored (i.e., "averaged over"). This information might contain measurement error that classical theory assumes
away. This could lead to false confidence in the dependability of the performance measure. The second limitation is that separate reliabilities are provided; which is the “right one”? G theory
overcomes both limitations. The theory uses all of the information obtained in the G study, and it provides a coefficient that includes a definition of error arising from each of the sources of error
in the measurement. Finally, G theory estimates each source of variation in the measurement separately so that improvements can be made by pinpointing which characteristics of the performance
measurement gave rise to the greatest error. Generalizability theory uses the analysis of variance (ANOVA) to accomplish this task. A measurement study (called a generalizability study) is designed
to sample potential sources of measurement error (e.g., raters, occasions, tasks) so that their effects on soldiers' performance can be examined. Thus soldiers and each source of error can be
considered factors in an ANOVA. The ANOVA, then, can be used to estimate the effects of soldiers (systematic, “true-score” variation), each source of error, and their interactions. More specifically,
the ANOVA is used to estimate the variance components associated with each effect in the design (“main effects” and “interactions”). As Rubin (1974:1050) noted, G theory concentrates on mixed models
of variance designs, that is, designs in which factors are crossed or nested and fixed or random. Emphasis is given to the estimation of variance
components and ratios of variance components, rather than the estimation and testing of effects for fixed factors as would be appropriate for designs based on randomized experiments.

Variance Components

The statistical machinery for analyzing the results of a G study is the analysis of variance. The ANOVA partitions the multiple sources of variation into separate components ("factors" in
ANOVA terminology) corresponding to their individual main effects (soldiers, stations, occasions, tasks, and judges) and their combinations or interactions. The total variation in performance times
(shown in Table 2) is partitioned into no less than 31 separate components—five individual components and all their possible combinations (Cartesian products)—accounting for the total variation in
the performance-time data (see Table 3). Of these 31 sources of variation, one accounts for performance consistency: the soldier (or P for person) effect represents systematic differences in the speed of
performance among the five soldiers (variance component for soldiers in Table 3). By averaging the time measure across observers, tasks, occasions, and stations, we find that soldier 5 performed the
task the fastest and soldier 3 performed the task the slowest. The other three soldiers fell in between. This variation in mean performance can be used to determine systematic differences among
soldiers, called true-score variance in classical test theory and universe-score variance in generalizability theory. This universe-score variance—variance component for P = 14.10 (Table 3)—is the
signal sought through the noise created by error. It is the "stuff" that the military decision maker would like to know as inexpensively and as feasibly as possible. The 30 other sources of variation
represent potential measurement error. The first four sources of variation are attributable to each source of error considered singly (“main effects” in ANOVA terminology). The station effect
(variance component for station in Table 3) shows whether mean performance times, averaged over all other factors, systematically vary as to the location at which the measurement was taken.
Apparently performance time did not differ according to station (variance component for station = 0). This is not surprising; unlike many other performance measurements, the revolver task appears
self-contained. The occasion effect shows whether performance times, averaged over all other factors, change from one occasion to the next. Relative to other variance components, performance appears
stable over occasions. The task effect shows whether performance times differed over tasks 1-3. Since task 2 contained fewer subtasks (three) than tasks 1 and 3 (five each), performance time on task 2, averaged over all other sources of variation, should be shorter. The task effect reflects this characteristic of the performance measurement (variance component for task = 20). And variation across judges shows whether observers are using the same criterion when timing performance.

TABLE 3 Generalizability Study for a Soldier (P) × Station (S) × Occasion (O) × Task (T) × Judge (J) Design

Source of Variation | df | Mean Squares | Variance Components
Soldiers (P) | 4 | 1020.80 | 14.10
Stations (S) | 1 | 1.00 | 0.00
Occasions (O) | 1 | 1273.00 | 7.40
Tasks (T) | 2 | 1659.80 | 20.00
Judges (J) | 3 | 349.80 | 2.45
PS | 4 | 1.00 | 0.00
PO | 4 | 239.00 | 9.55
PT | 8 | 9.80 | 0.00
PJ | 12 | 106.80 | 8.75
SO | 1 | 1.00 | 0.00
ST | 2 | 1.00 | 0.00
SJ | 3 | 1.00 | 0.00
OT | 2 | 59.80 | 1.25
OJ | 3 | 97.80 | 3.20
TJ | 6 | 1.80 | 0.00
PSO | 4 | 1.00 | 0.00
PST | 8 | 1.00 | 0.00
PSJ | 12 | 1.00 | 0.00
POT | 8 | 9.80 | 1.00
POJ | 12 | 1.80 | 0.00
PTJ | 24 | 1.80 | 0.00
SOT | 2 | 1.00 | 0.00
SOJ | 3 | 1.00 | 0.00
STJ | 6 | 1.00 | 0.00
OTJ | 6 | 1.80 | 0.00
PSOT | 8 | 1.00 | 0.00
PSOJ | 12 | 1.00 | 0.00
PSTJ | 24 | 1.80 | 0.00
SOTJ | 6 | 1.00 | 0.00
PSOTJ (residual) | 24 | 1.00 | 1.00

From a measurement point of view, main-effect sources of error influence absolute decisions about the speed of performance (regardless of how other soldiers performed; called "absolute decisions"). The soldiers' performance times will depend on
whether they are observed by a “fast” or “slow” timer, at a “fast” or “slow” station, and so on. The remaining sources of variation in Table 3 reflect combinations or “statistical interactions” among
the factors. Interactions between persons and other sources of error variation represent unique, unpredictable effects; the particular performance times assigned to soldiers have one or more
components of unpredictability (error) in them. As a consequence, different tasks, observers, or occasions might rank order soldiers differently and unpredictably.⁵ The soldier × judge effect
(variance component = 8.75), for example, indicates that observers did not agree on the times they assigned to each soldier. If observer 1, for example, were used in the performance measurement,
soldier 1 might be timed as faster than soldier 4. If observer 4 were used, the rank ordering would be reversed. The soldier × task interaction indicates that soldiers who performed quickly on task 1
also performed quickly on the other tasks, compared to their peers. The rank ordering of soldiers apparently does not depend on the task they performed. This is why the internal consistency
coefficient, based on classical theory, was so high (.99). The soldier × occasion × judge interaction indicates judges disagreed on performance times they assigned each soldier, and the nature of
this disagreement changed from one occasion to the next (negligible, Table 3). The most complex interaction, soldiers × stations × occasions × tasks × observers, reflects the effect of an extremely
complex combination of error sources and other unmeasured and random error sources. It is the residual that accounts for the remaining variation in all performance times. The remainder of the
interactions do not involve persons. As a consequence, they do not affect the rank ordering of soldiers. However, they do affect the absolute performance-time score received by each soldier. For
example, a sizable occasion × judge interaction would indicate that the performance times received by soldiers depend both on who observes them and on what occasion that observation occurs. A sizable
task × judge interaction would indicate that the performance times received by soldiers depend on the particular task and observer. In doing task 1, for example, the soldiers would want judge 3
because she assigns the fastest times on this task while, in performing task 3, they might want judge 1 because he assigns the fastest times on that task.

⁵ Technically, an interaction could also occur when soldiers have identical rank orders across, say, occasions and the distance between soldiers' performance times on each occasion is different (an ordinal interaction). An interaction with reversals in rank order (a disordinal interaction) is more dramatic and, for simplicity, is used to describe interpretations of interactions in this paper.
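To make the machinery concrete, here is a minimal sketch (simulated data, not the Table 3 values) of how the mean squares from a crossed design are solved for variance components; a two-facet persons × occasions × judges design keeps the algebra short:

```python
import numpy as np

# Two-facet crossed random design: persons x occasions x judges, one score
# per cell. The data are simulated purely for illustration.
rng = np.random.default_rng(1)
n_p, n_o, n_j = 5, 2, 4
X = (80.0
     + 4.0 * rng.standard_normal((n_p, 1, 1))       # person (universe) effect
     + 1.5 * rng.standard_normal((n_p, 1, n_j))     # person x judge effect
     + 1.0 * rng.standard_normal((n_p, n_o, n_j)))  # residual

g = X.mean()
mp, mo, mj = X.mean(axis=(1, 2)), X.mean(axis=(0, 2)), X.mean(axis=(0, 1))
mpo, mpj, moj = X.mean(axis=2), X.mean(axis=1), X.mean(axis=0)

# Mean squares for the person-related effects and the residual.
MSp  = n_o * n_j * ((mp - g)**2).sum() / (n_p - 1)
MSpo = n_j * ((mpo - mp[:, None] - mo[None, :] + g)**2).sum() / ((n_p-1)*(n_o-1))
MSpj = n_o * ((mpj - mp[:, None] - mj[None, :] + g)**2).sum() / ((n_p-1)*(n_j-1))
resid = (X - mpo[:, :, None] - mpj[:, None, :] - moj[None, :, :]
         + mp[:, None, None] + mo[None, :, None] + mj[None, None, :] - g)
MSres = (resid**2).sum() / ((n_p-1)*(n_o-1)*(n_j-1))

# Solve the expected-mean-square equations of the random model.
s2_res = MSres
s2_po = (MSpo - MSres) / n_j
s2_pj = (MSpj - MSres) / n_o
s2_p  = (MSp - MSpo - MSpj + MSres) / (n_o * n_j)   # universe-score variance
print({k: round(v, 2) for k, v in
       [("P", s2_p), ("PO", s2_po), ("PJ", s2_pj), ("residual", s2_res)]})
```

The five-facet design of Table 3 works the same way, with one expected-mean-square equation per source of variation.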
Improvement of Performance Measurement

Just as the Spearman-Brown prophecy formula can be used to determine the number of items needed on a test to
achieve a certain level of reliability, the magnitudes of the sources of error variation can also be used to determine the number of occasions, observers, and so on that are needed to obtain some
desired level of generalizability (reliability). For example, the effects involving judges (soldier × judge, judge × task, judge × task × occasion, etc.) can be used to determine whether several
judges are needed and whether different judges can be used to score the performance of different soldiers, or whether the same judges must rate all soldiers due to disagreements among them. The
analysis of the performance-time data in Table 3 suggests, based on the pattern of the variance component magnitudes, that several judges are needed and that the same set of judges should time all
soldiers (e.g., variance components for PJ and OJ).
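Anticipating the coefficient defined in the next section, here is a small sketch of this kind of decision-study reasoning, holding stations and tasks at one (n′S = n′T = 1) and using the Table 3 estimates:

```python
# Relative error shrinks as each facet is sampled more heavily. With
# n_S = n_T = 1, only the occasion and judge sample sizes vary here.
s2_P, s2_PO, s2_PJ, s2_POT, s2_res = 14.10, 9.55, 8.75, 1.00, 1.00
for n_O in (1, 2, 4):
    for n_J in (1, 2, 4):
        err = s2_PO/n_O + s2_PJ/n_J + s2_POT/n_O + s2_res/(n_O * n_J)
        print(f"n_O={n_O} n_J={n_J}  E(rho^2) = {s2_P/(s2_P + err):.2f}")
```

Adding judges and occasions together pays off far more than adding either alone, because the large PO and PJ components are divided separately.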
Generalizability of the Performance Measurement

Generalizability theory provides a summary index representing the consistency or dependability of a measurement. This coefficient, the "generalizability coefficient," is analogous to the reliability coefficient in classical theory. The coefficient for relative decisions reflects the accuracy with which soldiers have been rank ordered by the performance measurement, and is defined as

Eρ² = σ²(P) / [σ²(P) + σ²(Rel)], with σ²(Rel) = σ²(PS)/n′(S) + σ²(PO)/n′(O) + σ²(PT)/n′(T) + σ²(PJ)/n′(J) + … + σ²(PSOTJ)/[n′(S)n′(O)n′(T)n′(J)],

where n′ is the number of times each source of error is sampled in an application of the measurement. For the data in Table 3, with n′ = 1 station, occasion, task, and judge:

Eρ² = 14.10 / (14.10 + 9.55 + 8.75 + 1.00 + 1.00) = 14.10 / 34.40 ≈ .41.

The G coefficient for absolute decisions is defined as

Φ = σ²(P) / [σ²(P) + σ²(Abs)], where σ²(Abs) adds to σ²(Rel) every variance component not involving P, each divided by the product of the n′ for its facets,

where n′ is the number of times each source of error is sampled in an application of the measurement. For the data in Table 3, with n′ = 1 station, occasion, task, and judge:

Φ = 14.10 / (14.10 + 20.30 + 34.30) = 14.10 / 68.70 ≈ .21.
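A minimal sketch of the two computations, driven directly by the Table 3 estimates (only the nonzero components are listed; the ≈ .41 and ≈ .21 values above fall out of the same sums):

```python
# Table 3 variance-component estimates (nonzero entries only).
comps = {"P": 14.10, "O": 7.40, "T": 20.00, "J": 2.45,
         "PO": 9.55, "PJ": 8.75, "OT": 1.25, "OJ": 3.20,
         "POT": 1.00, "PSOTJ": 1.00}          # PSOTJ is the residual
n = {"S": 1, "O": 1, "T": 1, "J": 1}          # facet sample sizes (n')

def divisor(effect):
    # Each component is divided by the product of the n' of its facets.
    d = 1
    for facet in "SOTJ":
        if facet in effect:
            d *= n[facet]
    return d

rel = sum(v / divisor(e) for e, v in comps.items() if "P" in e and e != "P")
ab  = sum(v / divisor(e) for e, v in comps.items() if e != "P")
print(f"relative E(rho^2) = {comps['P'] / (comps['P'] + rel):.2f}")  # ~0.41
print(f"absolute Phi      = {comps['P'] / (comps['P'] + ab):.2f}")   # ~0.21
```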
Regardless of whether relative or absolute decisions are to be made on the basis of the performance measurement, the dependability of the measure
based on the G theory analysis is considerably different than the analysis based on classical theory. In these examples, it is especially important to sample occasions and judges extensively for
relative decisions and to sample tasks extensively as well for absolute measurements.

Summary: Revolver Test With Accuracy Scores

Recall that both time and accuracy were recorded by four observers
judging soldiers' performance in the caliber .38 revolver performance test. By way of reviewing the application of G theory to performance measurements, hypothetical data on accuracy is presented.
This is not merely a repeat of what has gone before. The accuracy data call for a somewhat different analysis than the performance-time data.

Design of the Revolver Test Using Accuracy Scores

In the
generalizability study, each of five soldiers performed the revolver test four times: on two different occasions (O) at two different test stations. The soldiers' (P) performance on each of the three
tasks (T) and subtasks (S) (see Table 1) was independently judged by four observers (J). Hypothetical accuracy scores for this G-study design are presented in Table 4. The data in Table 4 have been
collapsed over stations. This seemed justifiable. Because of the nature of the revolver task, stations did not introduce significant measurement error. Further, to simplify the analysis, only two of
the three tasks were selected: loading and unloading/cleaning the revolver. Including the stoppage removal task would have created an “unbalanced” design, with five subtasks for tasks 1 and 3 each
and only three subtasks for task 2. (See the later discussion of unbalanced designs.) The data in Table 4 represent a soldiers × occasion × task × subtask:task × observer (P × O × T × S:T × J)
design. Notice that each of the two tasks—loading and unloading—contain somewhat different subtasks. So identical subtasks do not appear with each task and we say that subtasks are nested within
tasks (cf. a nested analysis of variance design). The consequence of nesting can be seen in Table 5, where not all possible combinations of P, O, T, S:T, and J appear in the source table as was the
case in Table 3. This is because all terms that include interactions of T and S:T together cannot be estimated due to the nesting (see the later discussion of nesting).
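The consequence of the nesting can be made concrete with a short enumeration sketch: because subtasks are locked inside tasks, any term pairing S:T with a separate T effect collapses, which is why fewer sources appear than the 31 of the fully crossed design:

```python
from itertools import combinations

# Estimable effects in a P x O x T x (S:T) x J design: all combinations of
# the facets, except those holding both "S:T" and "T", since the nested
# facet already carries T with it.
facets = ["P", "O", "T", "S:T", "J"]
effects = []
for r in range(1, len(facets) + 1):
    for combo in combinations(facets, r):
        if "S:T" in combo and "T" in combo:
            continue
        effects.append("".join(f"({c})" if c == "S:T" else c for c in combo))
print(len(effects))   # 23 estimable sources instead of 31
print(effects)
```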
averaged over the two occasions and ignoring the effect of platoon and company, the reliability is .64. Clearly, this reliability coefficient is
influenced by leniency of different observers, the difficulty of the terrain or terrains on which the missions were conducted, the differences between missions, the time of day (day or night), the
day that the performance was observed, and so forth. However, the importance of these possible sources of measurement error cannot be estimated using classical theory, even if the measurement facets
had been systematically identified. Furthermore, performance might be influenced by the policies and leadership skills within particular companies or platoons. Classical reliability is mute on how to
treat these hierarchical data.

Generalizability Theory Approach

The generalizability analysis proceeded along the lines suggested by symmetry: (1) choose the facets of measurement and compute mean squares; (2) estimate variance components; (3) specify the facet (or combination of facets) that is the focus of measurement, and specify the sources of error; and (4) examine alternative D-study designs. Steps 1 and 2 are shown in Table 15 for the Company (C) × Platoon:Company (P:C) × Crew:Platoon:Company (Cr:P:C) × Occasion (O) partially nested design.

Interpretation of Variance Components

In theory, a
variance component cannot be negative, yet a negative estimate occurred (as indicated in Table 15).

TABLE 15 Variance Components for the Study of Tank-Crew Performance Measurement(a)

Source of Variation | Mean Squares | Estimated Variance Component
Companies (C) | 55461 | 0(b)
Platoons:C (P:C) | 78636 | 1607.19
Crews:P:C (Cr:P:C) | 45383 | 15967.50
Occasions (O) | 244505 | 3573.21
C × O | 83711 | 3538.79
P:C × O | 30629 | 3436.17
Cr:P:C × O | 31448 | 13448.20

(a) The design is crews nested in platoons nested in companies, crossed with occasions.
(b) Negative variance component set to 0.

With sample data, a negative variance component can arise either due to sampling error or misspecification of the measurement
model. If the former, the most widely accepted practice is to set the variance component to 0, as was done in Table 15. If the latter, the model should be respecified and variance components
estimated with the new model. The rationale for setting the company variance component to 0 was the following. First, the difference in the mean performance of the three companies was small: 770.90,
763.33, and 692.93. Variation among company means accounted for only 0.3 percent of the total variation in the data. The best estimate of the variance due to companies, then, was 0. (See the
concluding section for additional discussion on estimating variance components.) The largest variance component in Table 15 is for crews: the universe-score variance. Crew performance differs
systematically, and the measurement procedure reflects this variation. The next largest component is associated with the residual, indicating that error is introduced due to inconsistency in
tank-crew performance from one occasion to the next, and other unidentified sources of error (e.g., inconsistency due to time of day, observer, terrain, and the like). The remaining variance
components are roughly one-fourth the size of the residual, with the exception of the component for companies. Since the variance component for companies is 0 and the variance component for platoons
is the smallest one remaining, neither sufficiently influences variation in performance enough to have an important influence if they are considered part of the universe-score variance.
Generalizability Coefficients. Since decision makers are interested in the generalizability of unit performance, one possible method for calculating the G coefficient for crews, with the occasion interactions involving platoons and crews treated as error, is:

Eρ² = σ²(Cr:P:C) / { σ²(Cr:P:C) + [σ²(P:C × O) + σ²(Cr:P:C × O)] / n′(O) }

The generalizability of tank crew performance, averaged over the two observation occasions, is .65. If, however, the decision maker is interested in the generalizability of the score of a single tank crew selected randomly and observed on a single occasion, the coefficient drops to .48 due to the large residual variance component. The principle of symmetry states that the universe-score variance is comprised of all components that give rise to systematic variation among crews. In this case, variation due to companies and platoons, as well as variation due to crews, must be considered universe-score variation. Characteristics of companies and platoons, such as leadership ability, contribute to systematic variation among crews. Following symmetry, the G coefficient for crews, averaged over two occasions, is:

Eρ²* = [σ²(C) + σ²(P:C) + σ²(Cr:P:C)] / { [σ²(C) + σ²(P:C) + σ²(Cr:P:C)] + [σ²(C × O) + σ²(P:C × O) + σ²(Cr:P:C × O)] / n′(O) } ≈ .63
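The coefficients can be checked mechanically from the Table 15 estimates; a sketch (the single-occasion value prints as .49 where the text reports .48, a rounding difference):

```python
# Table 15 estimated variance components.
s2 = {"C": 0.0, "P:C": 1607.19, "Cr:P:C": 15967.50, "O": 3573.21,
      "CxO": 3538.79, "P:CxO": 3436.17, "Cr:P:CxO": 13448.20}
n_O = 2

# Crews as the object of measurement; occasion interactions are error.
univ = s2["Cr:P:C"]
err2 = (s2["P:CxO"] + s2["Cr:P:CxO"]) / n_O
err1 = s2["P:CxO"] + s2["Cr:P:CxO"]
print(f"crews, 2 occasions: {univ / (univ + err2):.2f}")   # ~0.65
print(f"crews, 1 occasion:  {univ / (univ + err1):.2f}")   # ~0.49

# Following symmetry: company and platoon variation joins the universe score.
univ_s = s2["C"] + s2["P:C"] + s2["Cr:P:C"]
err_s  = (s2["CxO"] + s2["P:CxO"] + s2["Cr:P:CxO"]) / n_O
print(f"crews under symmetry, 2 occasions: {univ_s / (univ_s + err_s):.2f}")
```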
We write Eρ²* to distinguish this coefficient from the one above. Surprisingly, by increasing universe-score variance, the G-coefficient decreased, for two reasons. The increase in universe-score variance by incorporating systematic variation due to companies was negligible: σ²(C) = 0. And the additional error introduced (σ²(C × O)) by considering variation due to companies and platoons as universe-score variance, while not large relative to other sources of variation (e.g., σ²(Cr:P:C × O)), was large relative to the systematic variability of companies and platoons. Finally, if the decision maker is interested in the dependability of platoon performance, the generalizability of the measurement was estimated (aggregating over crews within platoons and occasions) as follows:

Eρ²(platoons) = σ²(P:C) / { σ²(P:C) + σ²(Cr:P:C)/n′(Cr) + σ²(P:C × O)/n′(O) + σ²(Cr:P:C × O)/[n′(Cr)n′(O)] }

Notice here that crews is considered a source of error; variability in crews introduces uncertainty in estimating the performance of the entire platoon—the average of the performance of a platoon's individual crews. The low generalizability coefficient, then, reflects the fact that there was greater variability among crews within a platoon than among platoons.

CONCLUDING COMMENTS ON GENERALIZABILITY THEORY: ISSUES AND LIMITATIONS

In the preceding sections, I argued that generalizability theory was the most appropriate behavioral measurement theory for treating military
performance measures and showed how the theory could be used to model and improve performance measures. Even the best of theories have limitations in their applications, and generalizability theory
is no exception. In concluding, I address the following topics: negative estimated variance components; assumption of constant universe scores; and dichotomous data (for a more extensive treatment, see Shavelson and Webb, 1981; Shavelson et al., 1985).

Small Samples and Negative Estimated Variance Components

Two major contributions of generalizability theory are its emphasis on multiple sources of measurement error and its deemphasis of the role
played by summary reliability or generalizability coefficients. Estimated variance components are the basis for indexing the relative contribution of each source of error and the undependability of a
measurement. Yet Cronbach et al. (1972) warned that variance-component estimates are unstable with usual sample sizes of, for example, a couple of occasions and observers. While variance-component
estimation poses a problem for G theory, it also afflicts all sampling theories. One virtue of G theory is that it brings estimation problems to the fore and puts them up for examination.

Small Samples and Variability of Estimated Variance Components

The problem of fallible estimates can be illustrated by expressing an expected mean square as a sum of population variances. In a two-facet, crossed (p × i × j), random model design, the estimated universe-score variance is σ̂²(p) = [MS(p) − MS(pi) − MS(pj) + MS(pij,e)] / [n(i)n(j)], so the variance of the estimated variance component for persons—of the estimated universe-score variance—is

Var[σ̂²(p)] = (2 / [n(i)n(j)]²) × { [EMS(p)]² / (n(p) − 1) + [EMS(pi)]² / [(n(p) − 1)(n(i) − 1)] + [EMS(pj)]² / [(n(p) − 1)(n(j) − 1)] + [EMS(pij,e)]² / [(n(p) − 1)(n(i) − 1)(n(j) − 1)] }

With all of the components entering the variance of the estimated universe-score variance, the fallibility of such an estimate is quite apparent, especially if n(i) and n(j) are quite modest. In contrast, the variance of the estimated residual variance has only one variance component,

Var[σ̂²(pij,e)] = 2 [EMS(pij,e)]² / [(n(p) − 1)(n(i) − 1)(n(j) − 1)].
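A quick numeric illustration of the two expressions, with expected mean squares that are illustrative stand-ins (loosely scaled like the Table 3 values):

```python
# Under normality, var(MS) = 2*EMS^2/df and the mean squares are independent,
# which gives the two formulas above. Illustrative p x i x j design:
n_p, n_i, n_j = 5, 2, 4
df  = {"p": n_p - 1, "pi": (n_p-1)*(n_i-1), "pj": (n_p-1)*(n_j-1),
       "res": (n_p-1)*(n_i-1)*(n_j-1)}
ems = {"p": 1020.8, "pi": 9.8, "pj": 106.8, "res": 1.0}  # stand-in values

var_s2_p   = sum(2 * ems[k]**2 / df[k] for k in df) / (n_i * n_j)**2
var_s2_res = 2 * ems["res"]**2 / df["res"]
print(f"var of estimated universe-score component: {var_s2_p:,.0f}")
print(f"var of estimated residual component:       {var_s2_res:.3f}")
```

Even with four terms pooled, the universe-score estimate inherits the large (EMS_p)² term divided by only n(p) − 1 degrees of freedom, so its sampling variance dwarfs that of the residual.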
In a crossed design, then, the number of components and hence the variance of the estimator increase from the highest-order interaction component
to the main effect components. Consequently, sample estimates of the universe-score variance—estimates of crucial importance to the dependability of a measurement—may reasonably be expected to be
less stable than estimates of components of error variance.

Negative Estimates of Variance Components

Negative estimates of variance components can arise because of sampling errors or because of
model misspecification (Hill, 1970; see also previous discussion). With respect to sampling error, the one-way ANOVA illustrates how negative estimates can arise. The expected mean squares are

E(MS_Within) = σ²(within)  and  E(MS_Between) = σ²(within) + n σ²(between),

where E(MS_Within) is the expected value of the mean square within groups and E(MS_Between) is the expected value of the mean square between groups. Estimation of the variance components is accomplished by equating the observed mean squares with their expected values and solving the linear equations. If MS_Within is larger than MS_Between, the estimate of σ²(between) will be negative. Realizing this problem in G
theory, Cronbach et al. (1972:57) suggested that a plausible solution is to substitute zero for the negative estimate, and carry this zero forward as the estimate of the component when it enters any
equation higher in the table of mean squares. Notice that by setting negative estimates to 0, the researcher is implicitly saying that a reduced model provides an adequate representation of the data,
thereby admitting that the original model was misspecified. Although solutions such as Cronbach et al.'s are reasonable, the sampling distribution of the (once negative) variance component as well as
those variance components whose calculation includes this component is more complicated and the modified estimates are biased. Brennan (e.g., 1983) provides an alternative algorithm that sets all
negative variance components to 0. Each variance component, then, “is expressed as a function of mean squares and sample sizes, and these do not change when some other estimated variance component is
negative” (Brennan, 1983:47). Brennan's procedure produces unbiased estimated-variance components, except for negative components set to 0.
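A compact sketch of how the two conventions can diverge, using hypothetical mean squares for a p × i × j design:

```python
# Hypothetical mean squares with MS_pi < MS_res, so sigma2_pi goes negative.
n_p, n_i, n_j = 5, 2, 4
MS = {"p": 300.0, "pi": 4.0, "pj": 40.0, "res": 10.0}

s2_pi_raw = (MS["pi"] - MS["res"]) / n_j          # -1.5
s2_pj = (MS["pj"] - MS["res"]) / n_i

# Cronbach et al.: replace the negative estimate by 0 and carry that 0
# forward into the equation for sigma2_p.
s2_pi = max(s2_pi_raw, 0.0)
s2_p_cronbach = (MS["p"] - MS["res"] - n_j*s2_pi - n_i*s2_pj) / (n_i * n_j)

# Brennan: each component stays a fixed function of mean squares and sample
# sizes, so sigma2_p is untouched; only negatives themselves are zeroed.
s2_p_brennan = (MS["p"] - MS["pi"] - MS["pj"] + MS["res"]) / (n_i * n_j)
print(s2_pi_raw, s2_p_cronbach, s2_p_brennan)     # -1.5 32.5 33.25
```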
Bayesian methods provide a solution to the problem of negative variance-component estimates (e.g., Box and Tiao, 1973; Davis, 1974; Fyans, 1977;
Shavelson and Webb, 1981). Consider a design with two sources of variation: within groups and between groups. The Bayesian approach includes the constraint that MS(between groups) is greater than or
equal to MS (within groups) so that the between-groups variance component cannot be negative. Unfortunately, the computational complexities involved and the distributional-form assumptions make these
procedures all but inaccessible to practitioners. An attractive alternative that produces nonnegative estimates of variance components is maximum likelihood (Dempster et al., 1981). Maximum
likelihood estimators are functions of every sufficient statistic and are consistent and asymptotically normal and efficient (Harville, 1977). Although these estimates are derived under the
assumption of a normal distribution, estimators so derived may be suitable even with an unspecified distribution (Harville, 1977). Maximum likelihood estimates have not been used extensively in
practice because they are not readily available in popular statistical packages. However, researchers at the University of California, Los Angeles (Marcoulides, Shavelson, and Webb), are examining a restricted maximum likelihood approach that, in simulations so far, appears to offer considerable promise in dealing with the negative variance component problem.

Assumption of Constant Universe Scores

Nearly all behavioral measurement theories assume that the behavior being studied remains constant over observations; this is the steady-state assumption made by both classical theory and G
theory. Assessment of stability is much more complex when the behavior changes over time. Among those investigating time-dependent phenomena are Bock (1975), Bryk and colleagues (Bryk and Weisberg,
1977; Bryk et al., 1980), and Rogosa and colleagues (Rogosa, 1980; Rogosa et al., 1982, 1984). Rogosa et al. (1984) consider generalizability theory as one method for assessing the stability of behavior
over time. Their approach is to formulate two basic questions about stability of behavior: (1) Is the behavior of an individual consistent over time? (2) Are individual differences among individuals
consistent over time? For individual behavior, consistency is defined as absolutely invariant behavior over time. They characterized inconsistency in behavior in several ways: unsystematic scatter
around a flat line, a linear trend (with and without unsystematic scatter), and a nonlinear trend (with or without scatter). Changing behavior over time has important implications in generalizability
theory for the estimation of universe scores. When behavior changes systematically over time, the universe-score estimate will be time dependent.
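A small simulation makes the ambiguity concrete: purely random scatter and smooth individual growth at different rates can both produce a sizable person × occasion mean square, so the component alone cannot distinguish them (illustrative code, not an analysis from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
n_p, n_o = 6, 5
base = rng.normal(50.0, 5.0, size=(n_p, 1))

noisy  = base + rng.normal(0.0, 3.0, size=(n_p, n_o))   # unsystematic scatter
slopes = rng.normal(0.0, 1.5, size=(n_p, 1))
growth = base + slopes * np.arange(n_o)                  # systematic trends

def pxo_mean_square(X):
    inter = (X - X.mean(axis=1, keepdims=True)
               - X.mean(axis=0, keepdims=True) + X.mean())
    return (inter**2).sum() / ((n_p - 1) * (n_o - 1))

print(f"P x O mean square, random scatter:    {pxo_mean_square(noisy):.1f}")
print(f"P x O mean square, systematic growth: {pxo_mean_square(growth):.1f}")
```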
The second, and more common, question about stability is the consistency of individual differences among individuals. Perfect consistency occurs
whenever the trends for different individuals are parallel, whether the individuals' trends are flat, linear, or nonlinear. A generalizability analysis with occasions as a facet is described by
Rogosa et al. (1984) as one method for assessing the consistency of individual differences over time. The variance component that reflects the stability of individual differences over time is the
interaction between individuals and occasions. A small component for the interaction (compared to the variance component for universe scores) suggests that individuals are rank-ordered similarly
across occasions; that is, their trends are parallel. It says nothing about whether individual behavior is changing over time. As described above, the behavior of all individuals could be changing
over time in the same way (a nonzero main effect for occasions). A relatively large value of the component for the individuals × occasion interaction (compared to the universe-score variance
component) shows that individuals are ranked differently across occasions. This could be the result of unsystematic fluctuations in individual behavior over time, the usual interpretation made in G
theory under the steady-state assumption. But it could also reflect differences in systematic trends over time for different individuals. The behavior of some individuals might systematically improve
over time, while that of others might not. Furthermore, the systematic changes could be linear or nonlinear. Clearly, it is necessary to specify the process by which individual military performance
changes in order to model this change. Rogosa et al. provide excellent steps in that direction by describing analytic methods for assessing the consistency of behavior of individuals and the
consistency of differences among individuals. At the least, their exposition is valuable for clarifying the limited ability of G theory to distinguish between real changes in behavior over time and
random fluctuations over time that should be considered error. Although the analytic models for investigating time-dependent changes in behavior are important, they do not alleviate the investigator's responsibility to define the appropriate time interval for observation. In studying the dependability of a measurement, it is necessary to restrict the time interval so that the observations of
behavior can reasonably be expected to represent the same phenomenon. There are other developments in the field that examine changing behavior over time, such as models of change based on Markov
processes (e.g., Plewis, 1981). However, since these developments do not follow our philosophy of isolating multiple sources of measurement error, and do not provide much information about how
measurement error might be characterized or estimated, they are not discussed here.
Dichotomous Data

Analysis of variance approaches to reliability, including G theory, assume that the scores being analyzed represent continuous
random variables. When the scores are dichotomous, as they were in the earlier example with observers' “go-no go” scores for soldiers' performance on the revolver task, analysis of variance methods
produce inaccurate estimates of variance components and reliability (Cronbach et al., 1972; Brennan, 1980). In analyses of achievement test data with dichotomously scored items, L. Muthén (1983) found that the ANOVA approach for estimating variance components tended to overestimate error components and underestimate reliability. She found that a covariance structure analysis model (see B. Muthén, 1978, 1983; Jöreskog, 1974), specifically designed to treat dichotomous data as a manifestation of an underlying continuum (B. Muthén, 1983), produced estimates of variance components and
generalizability coefficients that were closer to the true values than those from the ANOVA.

Concluding Comment

Used wisely, none of the foregoing limitations invalidates G theory. They simply point to the care needed in designing and interpreting the results of G studies. In spite of its limitations, generalizability theory does what those seeking to determine the dependability of performance measures want a theory of behavioral measurement to do. G theory:

- models the sources of error likely to enter into a performance measurement,
- models the ways in which these errors are sampled,
- provides information on where the major source of measurement error lies,
- provides estimates of how the measurement would improve under alternative plans for sampling and thereby controlling sources of error variance, and
- indicates when the measurement problem cannot be overcome by sampling, so that alternative revisions of the measurement (e.g., modifications in administration, training of observers, or both) might be considered.

REFERENCES

Bell, J.F. 1985 Generalizability theory: the software problem. Journal of Educational Statistics 10:19-30.
Bock, D. 1975 Multivariate Statistical Methods in Behavioral Research. New York: McGraw-Hill.
Box, G.E.P., and G.C. Tiao 1973 Bayesian Inference in Statistical Analysis. Reading, Mass.: Addison-Wesley.
Brennan, R.L. 1980 Applications of generalizability theory. In R.A. Berk, ed., Criterion-Referenced Measurement: The State of the Art. Baltimore, Md.: The Johns Hopkins University Press.
Brennan, R.L. 1983 Elements of Generalizability Theory. Iowa City, Iowa: American College Testing Publications.
Bryk, A.S., and H.I. Weisberg 1977 Use of the nonequivalent control group design when subjects are growing. Psychological Bulletin 84:950-962.
Bryk, A.S., J.F. Strenio, and H.I. Weisberg 1980 A method for estimating treatment effects when individuals are growing. Journal of Educational Statistics 5:5-34.
Cardinet, J., and L. Allal 1983 Estimation of generalizability parameters. Pp. 17-48 in L.J. Fyans, Jr., ed., Generalizability Theory: Inferences and Practical Applications. San Francisco: Jossey-Bass.
Cardinet, J., and Y. Tourneur 1974 The Facets of Differentiation and Generalization in Test Theory. Paper presented at the 18th congress of the International Association of Applied Psychology, Montreal, July-August.
Cardinet, J., and Y. Tourneur 1977 Le Calcul de Marges d'Erreurs dans la Theorie de la Generalizabilite. Neuchatel, Switzerland: Institut Romand de Recherches et de Documentation Pedagogiques.
Cardinet, J., Y. Tourneur, and L. Allal 1976a The generalizability of surveys of educational outcomes. Pp. 185-198 in D.N.M. DeGruijter and L.J. Th. van der Kamp, eds., Advances in Psychological and Educational Measurement. New York: Wiley.
Cardinet, J., Y. Tourneur, and L. Allal 1976b The symmetry of generalizability theory: applications to educational measurement. Journal of Educational Measurement 13:119-135.
Cardinet, J., Y. Tourneur, and L. Allal 1981 Extension of generalizability theory and its applications in educational measurement. Journal of Educational Measurement 18:183-204.
Cronbach, L.J. 1976 Research on Classrooms and Schools: Formulation of Questions, Design, and Analysis. Occasional paper, Stanford Evaluation Consortium. Stanford University (July).
Cronbach, L.J., G.C. Gleser, A.N. Nanda, and N. Rajaratnam 1972 The Dependability of Behavioral Measurements: Theory of Generalizability for Scores and Profiles. New York: Wiley.
Davis, C.E. 1974 Bayesian Inference in Two-way Analysis of Variance Models: An Approach to Generalizability. Unpublished doctoral dissertation. University of Iowa.
Dempster, A.P., D.B. Rubin, and R.K. Tsutakawa 1981 Estimation in covariance components models. Journal of the American Statistical Association 76:341-353.
Erlich, O., and R.J. Shavelson 1976 The Application of Generalizability Theory to the Study of Teaching. Technical Report 76-9-1, Beginning Teacher Evaluation Study. Far West Laboratory, San Francisco.
Fyans, L.J. 1977 A New Multi-Level Approach for Cross-Cultural Psychological Research. Unpublished doctoral dissertation. University of Illinois at Urbana-Champaign.
Hartley, H.O., J.N.K. Rao, and L. LaMotte 1978 A simple synthesis-based method of variance component estimation. Biometrics 34:233-242.
Harville, D.A. 1977 Maximum likelihood approaches to variance component estimation and to related problems. Journal of the American Statistical Association 72:320-340.
Hill, B.M. 1970 Some contrasts between Bayesian and classical inference in the analysis of variance and in the testing of models. Pp. 29-36 in D.L. Meyer and R.O. Collier, Jr., eds., Bayesian Statistics. Itasca, Ill.: F.E. Peacock.
Jöreskog, K.G. 1974 Analyzing psychological data by structural analysis of covariance matrices. In D.H. Krantz, R.C. Atkinson, R.D. Luce, and P. Suppes, eds., Contemporary Developments in Mathematical Psychology, Vol. II. San Francisco: W.H. Freeman & Company.
Kahan, J.P., N.M. Webb, R.J. Shavelson, and R.M. Stolzenberg 1985 Individual Characteristics and Unit Performance: A Review of Research and Methods. R-3194-MIL. Santa Monica, Calif.: The Rand Corporation.
Muthén, B. 1978 Contributions to factor analysis of dichotomous variables. Psychometrika 43:551-560.
Muthén, B. 1983 Latent variable structural equation modeling with categorical data. Journal of Econometrics 22:43-65.
Muthén, L. 1983 The Estimation of Variance Components for Dichotomous Dependent Variables: Applications to Test Theory. Unpublished doctoral dissertation, University of California, Los Angeles.
Office of the Assistant Secretary of Defense (Manpower, Reserve Affairs, and Logistics) 1983 Second Annual Report to the Congress on Joint-Service Efforts to Link Standards for Enlistment to On-the-Job Performance. A report to the House Committee on Appropriations. U.S. Department of Defense, Washington, D.C.
Plewis, I. 1981 Using longitudinal data to model teachers' ratings of classroom behavior as a dynamic process. Journal of Educational Statistics 6:237-255.
Rogosa, D. 1980 Comparisons of some procedures for analyzing longitudinal panel data. Journal of Economics and Business 32:136-151.
Rogosa, D., D. Brandt, and M. Zimowski 1982 A growth curve approach to the measurement of change. Psychological Bulletin 90:726-748.
Rogosa, D., R. Floden, and J.B. Willett 1984 Assessing the stability of teacher behavior. Journal of Educational Psychology 76:1000-1027.
Rubin, D.B., reviewer 1974 The dependability of behavioral measurements: theory of generalizability for scores and profiles. Journal of the American Statistical Association 69:1050.
Shavelson, R.J. 1985 Evaluation of Nonformal Education Programs: The Applicability and Utility of the Criterion-Sampling Approach. Oxford, England: Pergamon Press.
Shavelson, R.J., and N.M. Webb 1981 Generalizability theory: 1973-1980. British Journal of Mathematical and Statistical Psychology 34:133-166.
Shavelson, R.J., N.M. Webb, and L. Burstein 1985 The measurement of teaching. In M.C. Wittrock, ed., Handbook of Research on Teaching, 3rd ed. New York: Macmillan.
Tourneur, Y. 1978 Les Objectifs du Domaine Cognitif. 2me Partie: Theorie des Tests. Ministere de l'Education Nationale et de la Culture Francaise, Universite de l'Etat a Mons, Faculte des Sciences Psycho-Pedagogiques.
Tourneur, Y., and J. Cardinet 1979 Analyse de Variance et Theorie de la Generalizabilite: Guide pour la Realisation des Calculs. Doc. 790.803/CT/9. Universite de l'Etat a Mons, France.
U.S. Department of Labor 1972 Handbook for Analyzing Jobs. Washington, D.C.: U.S. Department of Labor.
Webb, N.M., and R.J. Shavelson 1981 Multivariate generalizability of general educational development ratings. Journal of Educational Measurement 18:13-22.
Webb, N.M., R.J. Shavelson, J. Shea, and E. Morello 1981 Generalizability of general educational development ratings of jobs in the U.S. Journal of Applied Psychology 66:186-191.
Webb, N.M., R.J. Shavelson, and E. Maddahian 1983 Multivariate generalizability theory. Pp. 67-82 in L.J. Fyans, Jr., ed., Generalizability Theory: Inferences and Practical Applications. San Francisco: Jossey-Bass.
Wittman, W.W. 1985 Multivariate reliability theory: principles of symmetry and successful validation strategies. Pp. 1-104 in R.B. Cattell and J.R. Nesselroade, eds., Handbook of Multivariate Experimental Psychology, 2nd ed. New York: Plenum Press. | {"url":"http://www.nap.edu/openbook.php?record_id=1898&page=207","timestamp":"2014-04-16T05:06:05Z","content_type":null,"content_length":"101161","record_id":"<urn:uuid:44e28c55-4333-4608-9564-cb2d70058613>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
Inverter Design Shines in Photovoltaic Systems
A proposed photovoltaic current-source grid-connected inverter has small volume, low total harmonic distortion, high power factor and simple control, and also simplifies photovoltaic system design.
The electric utility grid-connected photovoltaic (PV) system is an important technology for future renewable energy applications. This requires the design of a high-efficiency grid-connected inverter
that delivers power to the grid with low total harmonic distortion (THD) and high power factor (PF).
There are two basic types of grid-connected inverters: voltage-source inverters (VSI) and current-source inverters (CSI). A VSI grid-connected system requires the system's output voltage to be
boosted and regulated, which greatly increases its complexity and cost.
Compared with a VSI system, the output current of a CSI system is not influenced by grid voltage (U[GRID]), so its grid current (I[GRID]) has low THD and high PF. Also, when the input voltage to a
CSI system is lower than the peak value of U[GRID], it can successfully interface with the grid. Consequently, the input voltage to a grid-connected inverter is not restricted by U[GRID]. Therefore,
the current-source grid-connected inverter is ideal for a PV generation system.
The immittance converter theory, which is a variation of the impedance-admittance converter, has been analyzed in detail in several papers. A novel topology is being proposed for a current-source
grid-connected inverter based on the immittance converter theory. Compared with the traditional current-source inverter that employs power-frequency inductors and transformers, the proposed topology
uses high-frequency inductors and transformers, resulting in a small-volume, low-cost system with low THD and high PF.
The new topology employs a disturbance observer derived by monitoring the PV cell output voltage and cycle-by-cycle current to determine the output power. By analyzing the disturbance, the injection
direction can easily be obtained. By estimating the output power, the disturbance injection direction can be determined, which can achieve the maximum power point tracking (MPPT). This method is the
traditional MPPT solution, which provides a quick response. However, its disadvantages are more components and higher costs.
A concept that will be explored here is the injection of a disturbance (δ) that causes the system's duty cycle (D[CYCLE]) to vary. The MPPT can be determined by tracking and programming the D[CYCLE]
variation caused by injection of the input δ. The direction of the D[CYCLE] variation needs to be known, as it will affect the inverter's next switching cycle. This disturbance observer uses a new
concept for dc MPPTs, obtained by monitoring the inverter output current as an input parameter. This simplifies the control algorithm and cuts down the voltage sense in the disturbance observer,
providing significant cost savings.
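A minimal perturb-and-observe sketch of the duty-cycle disturbance idea described above; the PV power curve is a toy stand-in, not the article's hardware model:

```python
# Nudge the duty cycle, watch output power, keep the direction that helped.
def pv_power(duty):
    """Toy single-peak PV curve with its maximum near duty = 0.6."""
    return max(0.0, 100.0 - 400.0 * (duty - 0.6) ** 2)

duty, step = 0.30, 0.01
last_power = pv_power(duty)
for _ in range(60):
    duty += step
    power = pv_power(duty)
    if power < last_power:      # power fell: reverse the perturbation
        step = -step
    last_power = power
print(f"settled near duty = {duty:.2f}, power = {last_power:.1f} W")
```

Once at the peak, the duty cycle oscillates within one step of the maximum power point, which is the classic trade-off of perturb-and-observe tracking.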
System Topology
Fig. 1 is the circuit diagram for the current-source grid-connected inverter. The proposed system consists of a high-frequency full-bridge inverter, immittance converter, center-tapped transformer,
high-frequency bridge rectifier, power frequency inverter and low-pass filter. For the purposes of this discussion, certain nodes in the circuit are highlighted as test points (TP) and given letter
designations. For example, test point A is TP[A] (the test point letter designations are circled in Fig. 1 for easy reference).
The immittance converter has two inductors, L1 and L2, and a capacitor, C2, which provide the voltage-source to current-source conversion. With inductances L1 = L2 = L, the chain (ABCD) transfer function of this L-C-L network is

[u[1]]   [ 1 − ω²LC    jωL(2 − ω²LC) ] [u[2]]
[i[1]] = [ jωC         1 − ω²LC      ] [i[2]]        (Eq. 1)

where ω is the angular frequency. When the carrier frequency of the high-frequency inverter is equal to the resonant frequency of the immittance converter, that is ω = 1/√(LC), Eq. 1 becomes

u[1] = jZ[0]·i[2]  and  i[1] = j·u[2]/Z[0]        (Eq. 2)

where Z[0] = √(L/C) is the characteristic impedance of the immittance converter. From Eq. 2, the input voltage (u[1]) of the immittance converter is proportional to the output current (i[2]) of the immittance converter. Therefore, the immittance converter effectively converts a voltage source into a current source.
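A quick numerical check of this resonance property (component values are arbitrary round numbers, not taken from the article):

```python
import numpy as np

L, C = 100e-6, 0.1e-6                     # 100 uH, 0.1 uF, illustrative only
w0 = 1.0 / np.sqrt(L * C)                 # resonant angular frequency
Z0 = np.sqrt(L / C)                       # characteristic impedance

def abcd(w):
    series_L = np.array([[1.0, 1j * w * L], [0.0, 1.0]])
    shunt_C  = np.array([[1.0, 0.0], [1j * w * C, 1.0]])
    return series_L @ shunt_C @ series_L  # L - C - L "T" network

M = abcd(w0)
print(np.round(M, 9))                     # ~[[0, j*Z0], [j/Z0, 0]]
print(Z0, abs(M[0, 1]))                   # |B| equals Z0, so u1 = j*Z0*i2
```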
A sinusoidal pulse-width modulator (SPWM) controls this high-frequency inverter. The immittance converter produces a high-frequency current with a sinusoidal envelope. The center-tapped transformer,
high-frequency rectifier bridge, power-frequency inverter and low-pass filter deliver the sinusoidal current to the grid.
From the aforementioned analysis, the carrier frequency of the high-frequency inverter is equal to the resonant frequency of the immittance converter. Furthermore, to avoid core saturation, the
positive-drive pulse width must be equal to the negative-drive pulse width during every resonant period. | {"url":"http://powerelectronics.com/power-electronics-systems/inverter-design-shines-photovoltaic-systems","timestamp":"2014-04-21T01:01:59Z","content_type":null,"content_length":"80495","record_id":"<urn:uuid:eaafd107-0763-4176-9007-047b3a0b89d8>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00077-ip-10-147-4-33.ec2.internal.warc.gz"} |
IIT JEE Physics Syllabus 2014
Kind attention, IIT JEE aspirants: IIT JEE is now under a new format. The AIEEE, conducted by CBSE, is now known as JEE Main, and IIT JEE is known as JEE Advanced.

The JEE Advanced 2014 exam will be held on 25th May, 2014.
JEE Advanced 2014 Physics Syllabus
IIT JEE 2013 Physics Syllabus :
The syllabus contains two sections – A and B. Section A pertains to the theory part, carrying 80% weightage, while Section B contains the practical component ( Experimental Skills ), carrying 20% weightage.

Section – A
Unit 1 : Physics and Measurement
Physics, technology and society, S I Units, Fundamental and derived Units. Least count, accuracy and precision of measuring instruments, Errors in measurement, Dimensions of Physical quantities,
dimensional analysis and its applications.
Unit 2 : Kinematics
Frame of reference. Motion in a straight line : Position – time graph, speed and velocity. Uniform and nonuniform motion, average speed and instantaneous velocity. Uniformly accelerated motion,
velocity – time, position – time graphs, relations for uniformly accelerated motion. Scalars and Vectors, Vector addition and Subtraction, Zero Vector, Scalar and Vector products, Unit Vector,
Resolution of a Vector. Relative Velocity, Motion in a plane, Projectile Motion, Uniform Circular Motion.
Unit 3 : Laws of Motion
Force and Inertia, Newton’s First Law of motion; Momentum, Newton’s Second Law of motion; Impulse; Newton’s Third Law of motion. Law of conservation of linear momentum and its applications,
Equilibrium of concurrent forces. Static and Kinetic friction, laws of friction, rolling friction. Dynamics of uniform circular motion : Centripetal force and its applications.
Unit 4 : Work, Energy and Power
Work done by a constant force and a variable force; kinetic and potential energies, work-energy theorem, power. Potential energy of a spring, conservation of mechanical energy, conservative and non
conservative forces; Elastic and inelastic collisions in one and two dimensions.
Unit 5 : Rotational Motion
Centre of mass of a two – particle system, Centre of mass of a rigid body; Basic concepts of rotational motion; moment of a force, torque, angular momentum, conservation of angular momentum and its
applications; moment of inertia, radius of gyration. Values of moments of inertia for simple geometrical objects, parallel and perpendicular axes theorems and their applications. Rigid body rotation,
equations of rotational motion.
Unit 6 : Gravitation
The universal law of gravitation. Acceleration due to gravity and its variation with altitude and depth. Kepler’s laws of planetary motion. Gravitational potential energy; gravitational potential.
Escape velocity. Orbital velocity of a satellite. Geo – stationary satellites.
Unit 7 : Properties of Solids and Liquids
Elastic behaviour, Stress – strain relationship, Hooke’s Law, Young’s modulus, bulk modulus, modulus of rigidity. Pressure due to a fluid column; Pascal’s law and its applications. Viscosity, Stokes’
law, terminal velocity, streamline and turbulent flow, Reynolds number. Bernoulli’s principle and its applications. Surface energy and surface tension, angle of contact, application of surface
tension – drops, bubbles and capillary rise. Heat, temperature, thermal expansion; specific heat capacity, calorimetry; change of state, latent heat. Heat transfer – conduction, convection and
radiation, Newton’s law of cooling.
Unit 8 : Thermodynamics
Thermal equilibrium, zeroth law of thermodynamics, concept of temperature. Heat, work and internal energy. First law of thermodynamics. Second law of thermodynamics : reversible and irreversible
processes. Carnot engine and its efficiency.
Unit 9 : Kinetic Theory of Gases
Equation of state of a perfect gas, work done on compressing a gas. Kinetic theory of gases – assumptions, concept of pressure. Kinetic energy and temperature : rms speed of gas molecules; Degrees of freedom, Law of equipartition of energy, applications to specific heat capacities of gases; Mean free path, Avogadro's number.
Unit 10 : Oscillations and Waves
Periodic motion – period, frequency, displacement as a function of time. Periodic functions. Simple harmonic motion ( S.H.M. ) and its equation; phase; oscillations of a spring – restoring force and
force constant; energy in S.H.M. – kinetic and potential energies; Simple pendulum – derivation of expression for its time period; Free, forced and damped oscillations, resonance. Wave motion.
Longitudinal and transverse waves, speed of a wave. Displacement relation for a progressive wave. Principle of superposition of waves, reflection of waves, Standing waves in strings and organ pipes,
fundamental mode and harmonics, Beats, Doppler effect in sound
Unit 11 : Electrostatics
Electric charges : Conservation of charge, Coulomb’s law – forces between two point charges, forces between multiple charges; superposition principle and continuous charge distribution. Electric
field : Electric field due to a point charge, Electric field lines, Electric dipole, Electric field due to a dipole, Torque on a dipole in a uniform electric field. Electric flux, Gauss’s law and its
applications to find field due to infinitely long uniformly charged straight wire, uniformly charged infinite plane sheet and uniformly charged thin spherical shell. Electric potential and its
calculation for a point charge, electric dipole and system of charges; Equipotential surfaces, Electrical potential energy of a system of two point charges in an electrostatic field. Conductors and
insulators, Dielectrics and electric polarization, capacitor, combination of capacitors in series and in parallel, capacitance of a parallel plate capacitor with and without dielectric medium between
the plates, Energy stored in a capacitor.
Unit 12 : Current Electricity
Electric current, Drift velocity, Ohm’s law, Electrical resistance, Resistances of different materials, V – I characteristics of Ohmic and nonohmic conductors, Electrical energy and power, Electrical
resistivity, Colour code for resistors; Series and parallel combinations of resistors; Temperature dependence of resistance. Electric Cell and its Internal resistance, potential difference and emf of
a cell, combination of cells in series and in parallel. Kirchhoff’s laws and their applications. Wheatstone bridge, Metre bridge. Potentiometer – principle and its applications.
Unit 13 : Magnetic Effects of Current and Magnetism
Biot – Savart law and its application to current carrying circular loop. Ampere’s law and its applications to infinitely long current carrying straight wire and solenoid. Force on a moving charge in
uniform magnetic and electric fields. Cyclotron. Force on a current – carrying conductor in a uniform magnetic field. Force between two parallel current – carrying conductors – definition of ampere.
Torque experienced by a current loop in uniform magnetic field; Moving coil galvanometer, its current sensitivity and conversion to ammeter and voltmeter. Current loop as a magnetic dipole and its
magnetic dipole moment. Bar magnet as an equivalent solenoid, magnetic field lines; Earth’s magnetic field and magnetic elements. Para – , dia – and ferro – magnetic substances.Magnetic
susceptibility and permeability, Hysteresis, Electromagnets and permanent magnets.
Unit 14 : Electromagnetic Induction and Alternating Currents
Electromagnetic induction; Faraday’s law, induced emf and current; Lenz’s Law, Eddy currents. Self and mutual inductance. Alternating currents, peak and rms value of alternating current / voltage;
reactance and impedance; LCR series circuit, resonance; Quality factor, power in AC circuits, wattless current. AC generator and transformer.
Unit 15 : Electromagnetic Waves
Electromagnetic waves and their characteristics. Transverse nature of electromagnetic waves. Electromagnetic spectrum ( radio waves, microwaves, infrared, visible, ultraviolet, X-rays, gamma rays ).
Applications of e.m. waves.
Unit 16 : Optics
Reflection and refraction of light at plane and spherical surfaces, mirror formula, Total internal reflection and its applications, Deviation and Dispersion of light by a prism, Lens Formula,
Magnification, Power of a Lens, Combination of thin lenses in contact, Microscope and Astronomical Telescope ( reflecting and refracting ) and their magnifying powers. Wave optics : wavefront and
Huygens’ principle, Laws of reflection and refraction using Huygen’s principle. Interference, Young’s double slit experiment and expression for fringe width. Diffraction due to a singe slit, width of
central maximum. Resolving power of microscopes and astronomical telescopes, Polarisation, plane polarized light; Brewster’s law, uses of plane polarized light and Polaroids.
Unit 17 : Dual Nature of Matter and Radiation
Dual nature of radiation. Photoelectric effect, Hertz and Lenard’s observations; Einstein’s photoelectric equation; particle nature of light. Matter waves – wave nature of particle, de Broglie
relation. Davisson – Germer experiment.
Unit 18 : Atoms and Nuclei
Alpha – particle scattering experiment; Rutherford’s model of atom; Bohr model, energy levels, hydrogen spectrum. Composition and size of nucleus, atomic masses, isotopes, isobars; isotones.
Radioactivity – alpha, beta and gamma particles / rays and their properties; radioactive decay law. Mass – energy relation, mass defect; binding energy per nucleon and its variation with mass number,
nuclear fission and fusion.
Unit 19 : Electronic Devices
Semiconductors; semiconductor diode : I – V characteristics in forward and reverse bias; diode as a rectifier; I – V characteristics of LED, photodiode, solar cell and Zener diode; Zener diode as a
voltage regulator. Junction transistor, transistor action, characteristics of a transistor; transistor as an amplifier ( common emitter configuration ) and oscillator. Logic gates ( OR, AND, NOT,
NAND and NOR ). Transistor as a switch.
Unit 20 : Communication Systems
Propagation of electromagnetic waves in the atmosphere; Sky and space wave propagation, Need for modulation, Amplitude and Frequency Modulation, Bandwidth of signals, Bandwidth of Transmission
medium, Basic Elements of a Communication System ( Block Diagram only ).
Section – B
Unit 21 : Experimental Skills
Familiarity with the basic approach and observations of the experiments and activities :
1. Vernier callipers – its use to measure internal and external diameter and depth of a vessel.
2. Screw gauge – its use to determine thickness / diameter of thin sheet / wire.
3. Simple Pendulum – dissipation of energy by plotting a graph between square of amplitude and time.
4. Metre Scale – mass of a given object by principle of moments.
5. Young’s modulus of elasticity of the material of a metallic wire.
6. Surface tension of water by capillary rise and effect of detergents.
7. Co – efficient of Viscosity of a given viscous liquid by measuring terminal velocity of a given spherical body.
8. Plotting a cooling curve for the relationship between the temperature of a hot body and time.
9. Speed of sound in air at room temperature using a resonance tube.
10. Specific heat capacity of a given ( i ) solid and ( ii ) liquid by method of mixtures.
11. Resistivity of the material of a given wire using metre bridge.
12. Resistance of a given wire using Ohm’s law.
13. Potentiometer :
• Comparison of emf of two primary cells.
• Determination of internal resistance of a cell.
14. Resistance and figure of merit of a galvanometer by half deflection method.
15. Focal length of :
• Convex mirror
• Concave mirror, and
• Convex lens using parallax method.
16. Plot of angle of deviation vs angle of incidence for a triangular prism.
17. Refractive index of a glass slab using a travelling microscope.
18. Characteristic curves of a p – n junction diode in forward and reverse bias.
19. Characteristic curves of a Zener diode and finding reverse break down voltage.
20. Characteristic curves of a transistor and finding current gain and voltage gain.
21. Identification of Diode, LED, Transistor, IC, Resistor, Capacitor from mixed collection of such items.
22. Using multimeter to :
• Identify base of a transistor
• Distinguish between npn and pnp type transistor
• See the unidirectional flow of current in case of a diode and an LED.
• Check the correctness or otherwise of a given electronic component ( diode, transistor or IC ).
"A New Polynomial-Time Algorithm for Linear Programming", Combinatorica, vol. 4, N. Karmarkar, 1984, pp. 373-395. .
"An Extension of Karmarkar's Algorithm for Linear Programming Using Dual Variables", Technical Report No. 648, Cornell University College of Engineering, Todd et al., Jan. 1985. .
"Efficient Implementation of a Class of Preconditioned Conjugate Gradient Methods", SIAM J. Sci. Stat. Comput., vol. 2, No. 1, S. C. Eisenstat, Mar. 1981. .
"Some Computational Experience and a Modification of the Karmarkar Algorithm", ISME Working Paper 85-105, Pennsylvania State University, Cavalier et al., Feb. 1985. .
"A Variation on Karmarkar's Algorithm for Solving Linear Programming Problems", IBM T. J. Watson Research Center, Earl R. Barnes. .
"On Projected Newton Barrier Methods for Linear Programming and an Equivalence to Karmarkar's Projective Method", Technical Report SOL 85-11, Systems Optimization Laboratory, Stanford University,
Gill et al., Jul. 1985.. | {"url":"http://patents.com/us-4885686.html","timestamp":"2014-04-18T06:14:23Z","content_type":null,"content_length":"56399","record_id":"<urn:uuid:9f0c3c8d-dc1a-4944-a576-e844880947b8>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
Staten Island Geometry Tutor
...Many students who received help in SAT and academics were able to reach their goals in school as well as compete for entrance into the top-sought universities in the nation including: Yale NYU
Princeton Rutgers Emory Columbia Univ. of Chicago Highlights: -Perfect 800 SAT Math -Perfect 800 P...
35 Subjects: including geometry, English, chemistry, SAT math
...Prior to becoming a teacher, I was a Wall Street Vice President for some major financial services organizations. I am willing to travel to your home or to a library for tutoring. I am also
comfortable doing online tutoring and could combine that with in person tutoring.I have been teaching High School Algebra for 10 years.
20 Subjects: including geometry, algebra 1, GRE, finance
...I have passed four of the Casualty Actuarial Society exams which rely heavily on probability and statistics and are extremely difficult. I have tutored in both probability and statistics before
and have also taught these topics at a community college many times. Differential Equations is one of my favorite subjects to teach.
28 Subjects: including geometry, physics, GRE, calculus
...I strive to imbed a deeper understanding of most subjects than is usually covered in a standard curriculum so that any kind of problem can be tackled, both within courses and outside of them.
My areas of expertise are primarily math and science, having studied both extensively as an astrophysics...
38 Subjects: including geometry, Spanish, algebra 1, GRE
...You got past the initial stages of Algebra 1 and 2 and now you are ready for your first advanced topic - trig! Some of it depends on geometry and spatial recognition concepts. These are tricks
which will make a hard subject a little easier.
14 Subjects: including geometry, chemistry, calculus, physics | {"url":"http://www.purplemath.com/staten_island_ny_geometry_tutors.php","timestamp":"2014-04-19T02:28:18Z","content_type":null,"content_length":"24145","record_id":"<urn:uuid:d17bf107-3df4-49a9-add6-b3e9dd724111>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00392-ip-10-147-4-33.ec2.internal.warc.gz"} |
Carnival of Math: Tenth Edition!
Welcome to the Tenth Carnival of Mathematics
After reading other Carnivals of Math and hosting this one, I'm wondering if anyone feels there might be a need to have two
Carnivals of Mathematics:
A Carnival of Mathematics for middle and secondary grades and
A Carnival of Advanced Math for undergraduate and graduate level as well as for the research mathematicians out there.
Another approach might be to have both a Carnival of Math Education (a category into which my blog might naturally fall) and A Carnival of Mathematics.
I may be way off here and outvoted by the vast majority of readers who may simply prefer to pick and choose the posts that interest them, but there does seem to be a clear demarcation between these
categories (in the non-algebraic sense of course!). The Carnival may need to reach a critical mass before this would be practical but I'd be interested in your comments here.
With the above in mind, I will begin with math blogs that focus on middle and secondary grades...
jd2718 has a wonderful variation on the Four Fours Puzzle. He has an uncanny knack for taking a good problem and adding enough complexity to it to boggle the mind! This problem is still open-ended and waiting for more ideas...
Alane over at Math Notes covers divisibility tests for 7 and 11 and provides easily understood explanations for why these rules work. Her other post introduces students to perhaps their first mathematical proof, the classic "irrationality of √2" by contradiction. To assess their understanding of indirect proof one might modify the problem to "Show that √3 is irrational."
In Patty Paper Trisection, Denise, at Let's Play Math, challenges her readers to prove that Math Trek's origami trisection referred to in Carnival #9 really works. Denise has an engaging writing style that invites her readers to challenge themselves. She sees math problems as puzzles, a view shared by many who have a passion for our subject. This post is designed for students and teachers in grades 7-12 as well as homeschoolers.
Murray Bourne, over at his blog, offers us a practical application of semi-log graph paper in plotting the dramatic increase in the ranking of YouTube in just a year and a half. The vertical scale is equally spaced, marked in powers of 10 -- logs base 10 to the rescue! Students will eat this up! He also is promoting a fascinating change in standard math notation (thanks, Murray, for promoting the name of my blog!)
Mark D from the Universe of Discourse shares a recurrence form for binomial coefficients that is far more efficient than the traditional factorial definition. He suggests that this ancient relation (published nearly 1300 years ago) has not gotten the recognition it deserves. Since the last student project in my BC Calculus class focused on efficient formulas for approximating pi (Ramanujan's formula in particular), your post fit right into the discussion.
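I don't know exactly which recurrence Mark D presents, but a quick Python sketch of the standard multiplicative recurrence illustrates the idea; it avoids the huge intermediate factorials of n!/(k!(n-k)!):

def binom(n, k):
    # After i steps the running value equals C(n-k+i, i), itself a binomial
    # coefficient, so the exact integer division below never truncates and
    # nothing grows anywhere near as fast as n! does.
    result = 1
    for i in range(1, k + 1):
        result = result * (n - k + i) // i
    return result

print(binom(10, 4))   # 210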
Vlorbik on Math Ed has a fascinating post on Textbooks and Notations.
He contends, and I concur, that current texts over-stress natural language (as in spelling out the meanings of symbols in English) for set-theoretic formulas, conditional probability in particular.
My philosophy has always been to introduce concepts and formulas in colloquial language to which students can relate, then move on to the formal symbolism of mathematics as early as possible.
Students need to appreciate the efficiency of symbolic notation and how it provides a universal language for mathematical discourse, not subject to interpretation! Once again, great justification for
the name of my blog! I knew there was a method to my madness (aka, dumb luck!).
On the technical research side of mathematics we have a couple for you to digest...
The Unapologetic Mathematician writes about the importance of category theory for undergrad math majors. Categories have become significant in contemporary mathematics. For background, read the Wikipedia article on category theory.
Michi at Michi's Blog presents a technical piece in the area of homological algebra (If only I could recall anything Professor Dyer was trying to teach me in algebraic topology 40 years ago!). The posting deals with combinatorics and coding of a very important tool for his current research - looking at extended algebraic structures in group cohomology.
Michi also recommended Terry Tao's blog. I particularly enjoyed his Advice on Mathematical Careers.
And now for something completely different...
A monthly feature, Who's Counting?, on ABC News.com is authored by the internationally recognized mathematician and author John Allen Paulos. His specialties are statistics and logic but he is also
well-known for the popularizations of mathematics he has written (Innumeracy, etc.). He is a Professor of Math at Temple University and he knows how to make math interesting and meaningful. Read
through some of his articles from the past 2 years. There's considerable food for thought in these articles and enough material there for projects for Statistics/AP Stat classes for every month of
the school year! Not to mention that it makes for fascinating reading. He is a gifted writer who weaves a beautiful web.
I want to personally thank all of our contributors who were considerate about replying by June 13th! Further, those who responded to my gmail account provided some fascinating insights about their
passion for mathematics. I felt right at home...
There are so many outstanding math bloggers out there. I can never do justice to all of them. This Carnival is just the tip of the iceberg. One that I've recently discovered is Mathematics Weblog.
The author has concisely summarized all of the Carnivals to date and his discussion of math humor is worth the read (he reviews books like Comic Sections and Mathematics Made Difficult). If the books
are as humorous as their titles, they're worth looking at!
I haven't yet mentioned any of my recent postings on this Carnival as I wanted to celebrate others' blogs, not mine. However, I'm working on a way to introduce and develop recursive functions and
linear recurrence relations for grades 7-12. It will be entitled --
"Take any number, Add Three, Divide the Result by -1. Now Repeat this!" I hope you will look for it and share your comments as we approach our summer break (for me, a more permanent break!). Also,
all of those 'beautiful' mortgage formulas I've been alluding to in the series of posts on applications of exponential functions are now displayed as screenshots from the TI-84. Those posts have
received many visits and I'm not sure if it's more for the math or more for mortgage advice (believe me, you don't want advice from me on that!).
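For the curious, here is what that add-three-then-divide-by-negative-one teaser does; a quick Python sketch (my own illustration, not from the upcoming post):

def f(x):
    return (x + 3) / -1   # "add three, divide the result by -1"

x = 7
for _ in range(4):
    x = f(x)
    print(x)   # -10.0, 7.0, -10.0, 7.0

Since f(f(x)) = -(-(x + 3) + 3) = x, every starting value cycles with period 2.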
Update: Submitted late but I decided to add it on 6-15:
An interesting brain teaser for the frontal lobes on SharpBrains. To solve it you need to analyze balance-scale relationships among 3 quantities (spades, clubs and diamonds). Some might try this
mentally using logic, others may want to set up algebraic equations. Have fun!
Would you believe, another couple of late additions that I discovered in my web travels--
Best of the Web - Math Blogs
(Of course my blog didn't make the cut!!)
Not Even Wrong - A Random Collection of Stuff (a nice summary of some technical math blogs from a Columbia math professor I believe)
There's no end to this so I had better stop...
Stay tuned for our next Carnival on June 29th over at Grey Matters.
11 comments:
Thanks for a great carnival! There are always so many interesting posts. I'm off to post a link...
(Speaking of links, though, the link for Michi's post needs fixed, and did you really mean to link to Terence Tao's comment rather than to the original article?)
Thanks, Denise!
I need to double check those links!!
They hopefully are now corrected. Once again, my apologies to Michi and Terry.
In his blog, Michi expressed that my suggestion for separating the Carnival is premature. I tend to agree. I was just trying to elicit some thoughts about this. The number of submissions we
currently receive is affected by many factors such as the newness of the Carnival (thus some are not yet aware of this medium). I do believe, however, some potential contributors of K-12 math/math
ed might be intimidated by the high level of mathematics that appears in some links. Perhaps, down the road a piece, a split might be worth pursuing...
Dave, thanks for a great carnival. Personally, I like that it includes both Math Ed and higher mathematics!
thanks for the kind words of support! i guess i'm being outnumbered here - so far commenters seem to enjoy the Carnival just the way it is. That's fine with me!
Great Carnival, Dave! Your suggestion about splitting up the Carnival is especially interesting.
For the next Carnival, please send your submissions to me by 11:59 PM PDT on Wednesday, July 27th, at greymattersblog@gmail.com.
My two cents: keeping the carnival together is the better way to go. I see value to grad students reading elementary stuff, and to grade school teachers getting a sense of what college folks
discuss, and having them all get a look at some computer science or economics. When we are too separate, we can get weird(er?)
Scott, did you really mean July 27th? Or June?
Denise, I apologize for the mistake. I meant June 27th! Please send your submissions for the 11th Carnival of Mathematics to greymattersblog@gmail.com by 11:59 PDT on JUNE 27th. no July!
Dielectric breakdown
1. The problem statement, all variables and given/known data
I am asked to find the maximum voltage in a cylindrical capacitor. The capacitor consists of an inner wire and an outer cylindrical shell. The wire has radius [tex]r_{1}[/tex] and the cylinder has
inner radius [tex]r_{2}.[/tex] The space between the wires is filled with a dielectric having dielectric constant [tex]\kappa.[/tex]
2. Relevant equations
This is in CGS units (actual calculations have been converted to SI)
So I know that the electric field E in a dielectric is [tex]E_{\text{no dielectric}}/\kappa[/tex]. So then if my cylindrical capacitor has E field = [tex]\frac{2\lambda}{r}[/tex], then my E field inside the
dielectric material would be [tex]\frac{2\lambda}{r\kappa}[/tex]. So then if I am given a value for the dielectric strength of the dielectric (say [tex]A[/tex], which would happen at the inner radius
of the cylindrical shell, which is [tex]r_{2}[/tex]), would I do
[tex]A = \frac{2\lambda}{r\kappa}[/tex], and then I can find the charge density, which is
[tex]\frac{Ar\kappa}{2}[/tex]. And, since the potential between the wire and the shell would be [tex]2\lambda\ln\frac{r_{2}}{r_{1}}[/tex], would I just substitute the new value for lambda I got to
get the potential? For some reason this was marked wrong?
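A numerical sketch of the setup, added here for illustration only (the values are made up, and the variable names are mine). Two things worth re-checking in the work above: in CGS the field E(r) = 2λ/(κr) is largest at the wire surface r1, so the dielectric-strength limit is normally imposed there rather than at r2; and the potential in the presence of the dielectric also picks up the factor 1/κ:

import math

A, kappa, r1, r2 = 2.0e5, 3.0, 0.05, 1.0       # illustrative numbers only (CGS)

lam = A * kappa * r1 / 2                       # impose A = 2*lam/(kappa*r1) at the wire
V_max = (2 * lam / kappa) * math.log(r2 / r1)  # V = integral of E dr from r1 to r2
print(lam, V_max)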
Complex analysis: Find domain of definition and range
Find the domain of definition of each function:
a. f(z)=3z^2+5z+i+1
b. g(z)=1/z
c. h(z)=(z+i)/(z^2+1)
d. q(z)=(2z^2+3)/(|z-1|)
e. F(z)=e^(3z)
f. G(z)=e^z+e^(-z)
Describe the range of each function:
g. f(z)=z+5 for Re z>0
h. g(z)=z^2 for z in the first quadrant, Re z >=0, Im z >=0
i. h(z)= 1/z for 0<|z|<=1
j. p(z)=-2z^3 for z in the quarter-disk |z| < 1, 0<Arg z<pi/2
the domains here will be pretty much the same as if z was a real number.
Describe the range of each function:
g. f(z)=z+5 for Re z>0
h. g(z)=z^2 for z in the first quadrant, Re z >=0, Im z >=0
Hint: when the domain is not restricted, polynomials are onto the whole complex plane (consequence of the Fundamental theorem of Algebra)
i. h(z)= 1/z for 0<|z|<=1
consider, regarding the range: what happens when z approaches zero? what happens if it is on the unit disk?
j. p(z)=-2z^3 for z in the quarter-disk |z| < 1, 0<Arg z<pi/2
see my comment for (g) and (h)
now, what can you come up with?
jhevon said "the domains here will be pretty much the same as if z was a real number."
Pretty much. One important difference is that $z^2+1$ is never 0 in the real numbers, but in the complex numbers it is. So in the complex numbers you have to watch out for fractions with $z^2+1$ in the denominator.
jhevon said "the domains here will be pretty much the same as if z was a real number."
Pretty much. One important difference is that $z^2+ 1$ is never 0 in the real numbers but is in the complex numbers it is. So in the complex numbers you have to watch out for fractions with $z^2+
1$ in the denominator.
indeed. i didn't notice part (c), i should have been more explicit. but i did mention the fundamental theorem of algebra, so we know that all polynomials have zeros, and dividing by zero is
always a no-no. so we have to watch out if there is any polynomial in the denominator of a rational function.
Actually non-constant polynomials, but this trivial point was not why I wanted to respond. In fact any non-constant entire function (analytic everywhere) has range the whole complex plane (with an exception of at most a single point)! This is such a deep, elegant and amazing result from complex analysis! (Picard's Little Theorem)
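A concrete footnote to the exchange above (standard algebra, added for clarity rather than part of the original thread): for part (c), $z^2+1 = (z-i)(z+i)$, so $h(z) = \frac{z+i}{z^2+1} = \frac{1}{z-i}$ wherever it is defined, and the domain is all complex $z$ with $z \neq \pm i$.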
How to calculate expected value and variance from probability distribution table
How to calculate expected value and variance from probability distribution table
A marketing manager gave different proposals for the allocation of money to TV advertising and other marketing methods.
He then estimated the probabilities for the different outcomes of the proposed budget as follows:

Allocation of advertising budget
Budget level | Old (90% TV, 10% other) | New (60% TV, 40% other)
High         |           0.3           |           0.2
Median       |           0.2           |           0.1
Low          |           0.1           |           0.1

To read the table: the probability that the high budget level is used together with the old advertising allocation is 0.3, the low budget level with the old allocation is 0.1, and the high budget level with the new allocation is 0.2.
The problem: the high, median and low budget levels correspond to 3, 2.5 and 2 million dollars respectively.
a) Let the random variable X denote the proportion of the budget spent on TV advertising. How do I find the expected value E(X) and the variance of X?
The answer for variance is 0.0216
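One way to reach that figure, sketched in Python (this assumes X = 0.9 under the old allocation and X = 0.6 under the new one, with the marginal probabilities read off the table):

dist = {0.9: 0.3 + 0.2 + 0.1,   # old allocation: P(X = 0.9) = 0.6
        0.6: 0.2 + 0.1 + 0.1}   # new allocation: P(X = 0.6) = 0.4

E  = sum(x * p for x, p in dist.items())       # E(X)   = 0.78
E2 = sum(x * x * p for x, p in dist.items())   # E(X^2) = 0.63
print(E, E2 - E ** 2)                          # Var(X) = 0.63 - 0.78^2 = 0.0216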
FOM: why formal power series do not work
Kanovei kanovei at wminf2.math.uni-wuppertal.de
Wed Nov 12 13:58:36 EST 1997
>it occurred to me that polynomials
>with real coefficients, more precisely formal power series (since
>1/(1+x) must be infinite: 1 - x + x^2 - x^3 + ...) which are allowed to
>begin at a negative power of x (since 1/x must be x^{-1}), ought to be
>able to serve as the canonical model Jon is looking for.
Try to define 2^z for z = x^{-1}.
You get a series with infinitely many negative powers.
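[To spell out the obstruction, an elaboration added here rather than
part of the original message: 2^z = e^{z ln 2} = sum_{n>=0} (ln 2)^n z^n / n!,
so substituting z = x^{-1} gives sum_{n>=0} (ln 2)^n x^{-n} / n!.
This has infinitely many negative powers of x, whereas the proposed
model only admits series beginning at some fixed negative power.]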
You may admit such -- now you
have the next problem, how to linearly order
your infinitesimals.
The point is that one needs not only to define
infinitesimals but then to work with them.
(Isn't there a distinction between foundations and
the *other* math?)
Euler etc. worked with this stuff quite a bit
and what they sometimes did is puzzling even
from the modern NSA standpoint (see my paper
on the Euler sin factorization in RMS, 1988, no 4).
Therefore one needs infinitesimals obeying
some rules. NSA provides the complete solution
(Transfer). There is no such simple way
(as the *polynomial* model) to get an enlargement
fully satisfying Transfer.
Kirby Math Questions Of the Week #3
09-25-2012, 04:56 PM
Kirby Math Questions Of the Week #3
I'll go one at a time here
lim x -> 0
(e^(2x) - 1) / (e^(x) - 1)
yeah I have no idea how to do this without stuff I shouldn't know
09-25-2012, 05:05 PM
09-25-2012, 05:06 PM
09-25-2012, 05:11 PM
also wrong :I
09-25-2012, 05:12 PM
it's a simple thing when you learn it, l'hopital's rule. if you have a limit where, when you plug in the value, it's 0/0 or infinity/infinity, take the individual derivatives of the top and bottom
until you get something not undefined
09-25-2012, 05:13 PM
09-25-2012, 05:14 PM
09-25-2012, 05:15 PM
I'm not supposed to know that yet, God. If I do that on a test, that isn't the right work for the answer. It's like those kids in elementary school who could multiply large numbers in their head, but
still needed to write down the organized process.
09-25-2012, 05:15 PM
I thought you meant (e^(2x) - 1) / e^(x)) * -1 for some reason
09-25-2012, 05:35 PM
use the squeeze theorem. take something like 10000000000x^2 + 2 and -100000000000x^2 + 2. they're going to be strictly above and below your function respectively on, say, (-1, 1), and both their limits at 0 are 2. thus your function also tends to 2 at 0 since it's continuous.
09-25-2012, 05:37 PM
i mean, that's more a proof than a method to find the answer. but you could always just find the answer for yourself with l'hopital and make yourself appropriate functions to justify your answer
with the squeeze theorem on the test
09-25-2012, 09:40 PM
that's really simple calculus........
It's the top over the bottom, coefficient-wise, when something = impossible numbers.
So 2x/x, or 2.
How do you not know this yet?
09-25-2012, 09:52 PM
great theorem bro
09-26-2012, 07:11 AM
exponents aren't coefficients.
oh also. just factor the top expression. (e^x + 1)(e^x - 1). this is probably how you're expected to do it. i always do something more convoluted than necessary with stuff like this.
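For anyone who wants to sanity-check the factored answer numerically, a throwaway Python sketch (my addition, not from the thread):

import math

# after factoring, (e^(2x) - 1)/(e^x - 1) = e^x + 1 for x != 0, so the limit is 2
f = lambda x: (math.exp(2 * x) - 1) / (math.exp(x) - 1)
for x in (0.1, 0.01, 0.001):
    print(x, f(x))   # 2.105..., 2.010..., 2.001..., approaching 2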
09-26-2012, 09:36 AM
God your avatar reminds me a lot of my math teacher:
09-26-2012, 02:35 PM
Kirby the Racist
09-26-2012, 02:59 PM
Yeah I can never spot the difference between the Japanese and Chinese races.
09-26-2012, 03:05 PM
Flaming Flamingo
09-26-2012, 03:06 PM
09-26-2012, 05:21 PM
09-26-2012, 05:26 PM
The Japanese wear business suits, and the Chinese have pony tails
09-26-2012, 05:28 PM
09-26-2012, 05:30 PM
Is it just me and my asian fetish, or does the one in the middle look very well drawn?
what is 14.492 rounded to the nearest tenth?
An Imperative Core Calculus for Java and Java with Effects (Abstract)
In order to study rigorously object-oriented languages such as Java or C#, a common practice is to define lightweight fragments, or calculi, which are sufficiently small to facilitate formal proofs
of key properties. However many of the current proposals for calculi lack important language features. In this paper we propose Middleweight Java, MJ, as a contender for a minimal imperative core
calculus for Java. Whilst compact, MJ models features such as object identity, field assignment, constructor methods and block structure. We define the syntax, type system and operational semantics
of MJ, and give a proof of type safety. In order to demonstrate the usefulness of MJ to reason about operational features, we consider a recent proposal of Greenhouse and Boyland to extend Java with
an effects system. This effects system is intended to delimit the scope of computational effects within a Java program. We define an extension of MJ with a similar effects system and instrument the
operational semantics. We then prove the correctness of the effects system; a question left open by Greenhouse and Boyland. We also consider the question of effect inference for our extended
calculus, detail an algorithm for inferring effects information and give a proof of correctness. | {"url":"http://www.cl.cam.ac.uk/~amp12/papers/impccj/impccj.html","timestamp":"2014-04-18T08:07:57Z","content_type":null,"content_length":"1869","record_id":"<urn:uuid:913e1b15-4f41-4b03-bd9e-269ce623e7af>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00563-ip-10-147-4-33.ec2.internal.warc.gz"} |
Issue 3.05 | May 1995
Geek Page - Wavelet Image Compression
Beating the bandwidth bottleneck.
By Peter Schröder
The military needs to send real-time video to hand-held receivers using only a 4800-baud satellite link. Sixteen Gbytes of remote sensing imagery need to be distributed on a single CD-ROM. These are
examples of the bandwidth bottleneck that is holding back many multimedia applications. But a previously obscure mathematical tool known as wavelet analysis may eliminate the problem.
According to a growing number of proponents, wavelets allow unprecedented image-compression ratios at landmark speeds; they promise to surpass alternative techniques such as JPEG or fractal
compression. However, the technique is only now moving from the mathematics community into industry.
The concept behind image compression is the same no matter what method is used. Any image can be described by listing the color of each pixel. But that's a waste of space. If a group of neighboring
pixels are the same color, it is more efficient to use a single description for the region. Take it one step further. What about a group of pixels that are almost the same color? By replacing them
with their average, we distort the image only slightly, and the description will be much more compact. This is known as lossy compression and is acceptable for most imaging applications.
What does vary among compression methods is how the regions of similarly colored pixels are detected and described. The technique used by wavelet compression represents an image in terms of special
mathematical functions called wavelets. For example, an image that is black on the left and white on the right could be succinctly represented by the mathematical function color(x,y) = 0 if x < 0.5, 1 if x > 0.5. Of course, most images are much more complex, and require more elaborate functions to describe. This is where wavelets come into their own. Because wavelets can be flexibly shaped and molded to describe regions in terms of averages, they are the perfect building blocks to describe an image.
What makes wavelets better than older compression methods such as JPEG is their ability to adapt to the size and location of regions in the image. While JPEG works in terms of eight-by-eight squares,
wavelets can describe regions of varying size, shape, and location.
How do we identify the regions that can be coalesced into a single description without significantly distorting the image? By the fast wavelet transform.
The fast wavelet transform takes an image and computes its wavelet coefficients. These numbers, combined with the wavelet function, can later be used to reconstruct the image. The coefficients are
computed in different ways depending on the particular wavelet function we want to use. There are many: some smooth, others fractal-like, but they all boil down to fancy versions of averaging and differencing.
With the simplest wavelet, known as the Haar function, we take pairs of neighboring pixels and compute their average and their difference. So, for a row of pixels (14, 8, 4, 6), the averages would be
(11, 5) and the differences (6, -2). The averages provide a copy of the original image at half-resolution, the differences provide the wavelet coefficients at that level of resolution. Put the
differences aside and continue with the half-resolution image. Again, take averages and differences, put the differences aside, and continue with the yet smaller average image. Eventually, we obtain
a single overall average and all the differences - the wavelet coefficients - at the various levels of resolution.
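As a concrete illustration, the scheme just described fits in a few lines of Python (a minimal sketch, not the code any real product uses); running it on the (14, 8, 4, 6) example reproduces the numbers above:

def haar_forward(row):
    # Repeatedly replace the row with pairwise averages, setting the pairwise
    # differences aside as the wavelet coefficients for that resolution level.
    coeffs = []
    while len(row) > 1:
        pairs = list(zip(row[::2], row[1::2]))
        coeffs.insert(0, [a - b for a, b in pairs])
        row = [(a + b) / 2 for a, b in pairs]
    return row[0], coeffs   # overall average plus per-level differences

print(haar_forward([14, 8, 4, 6]))   # (8.0, [[6.0], [6, -2]])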
So far, nothing has been compressed. However, when two neighboring pixels have the same value, their difference, and therefore their associated wavelet coefficient, will be zero. There is no need to
store this zero. We can also throw away coefficients that are close to zero without significantly distorting the image. We can then encode the remaining coefficients and transmit them along with the
overall average value.
On the other end, the image can be reconstructed by performing the inverse of the original transform: we start with the overall average, add in the differences for that level of resolution, and then
repeat the process until we have expanded the image back to its original size. Some detail will be lost because of the discarded coefficients, but the important regions - such as object edges, where
color differences and coefficients are large - will have been preserved.
Of course, a slightly more complex technique will be used in real-world applications. The wavelet function, for example, will use a much fancier version of averaging and differencing that operates on
more than two pixels at a time. But the basic ideas remain the same.
Several considerations are important when selecting a compression algorithm. The most obvious is space saved, and wavelets give consistently better results than other methods. Just as important is
the time required for encoding and decoding. While all compression algorithms will perform better if they have more time, wavelets with encoding times only about twice the decoding time achieve
superb results. Compare this with fractal compression, which decompresses quickly but requires Herculean effort to compress it. This makes wavelets particularly advantageous for applications such as
live video, in which we need to provide compression on the fly.
These advantages are attracting an increasing number of companies that need cutting-edge compression. Magnavox, for example, is incorporating wavelet compression into a number of its video products.
The next step will be for this mathematical tool to prove itself in the entrenched world of standards committees.
Peter Schröder (ps@math.scarolina.edu), whose license plate reads "WAVELET," is a postdoc at the University of South Carolina. Wim Sweldens also contributed to this article.
Consider the circuit shown in the figure below. Use the following variables as necessary: R1 = 6.00 Ω, R2 = 1.70 Ω, and V = 11.00 V.

(a) Calculate the equivalent resistance of the R1 and 5.00 Ω resistors connected in parallel.
(b) Using the result of part (a), calculate the combined resistance of the R1, 5.00 Ω, and 4.00 Ω resistors.
(c) Calculate the equivalent resistance of the combined resistance found in part (b) and the parallel 3.00 Ω resistor.
(d) Combine the equivalent resistance found in part (c) with the R2 resistor.
(e) Calculate the total current in the circuit.
(f) What is the voltage drop across the R2 resistor?
(g) Subtracting the result of part (f) from the battery voltage, find the voltage across the 3.00 Ω resistor.
(h) Calculate the current in the 3.00 Ω resistor.
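The figure itself is not reproduced here, but parts (a) through (d) spell out the topology, so the chain of reductions can be checked with a short Python sketch (an unofficial walkthrough; values in the comments are rounded):

R1, R2, V = 6.00, 1.70, 11.00

def parallel(a, b):
    return a * b / (a + b)

ra = parallel(R1, 5.00)   # (a) about 2.727 ohms
rb = ra + 4.00            # (b) about 6.727 ohms
rc = parallel(rb, 3.00)   # (c) about 2.075 ohms
rd = rc + R2              # (d) about 3.775 ohms
i_total = V / rd          # (e) about 2.914 A
v_r2 = i_total * R2       # (f) about 4.954 V
v_3 = V - v_r2            # (g) about 6.046 V
i_3 = v_3 / 3.00          # (h) about 2.015 A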
New Processor Instructions for Accelerating Encryption and Authentication Algorithms
High Performance Implementation of AES in Counter Mode
Significant performance optimization for encrypting (and decrypting) can be achieved if software using the AES instructions is designed to process multiple data blocks in parallel. This "software
pipelining" technique is applicable for parallelizable modes of operation such as Electronic Code Book (ECB), CTR, and decryption with the Cipher Block Chaining (CBC-Decryption) mode.
In such modes, different data blocks can be encrypted (or decrypted) independently of each other, and the hardware that supports the AES round instructions is pipelined. This allows independent AES
instructions to be dispatched, theoretically every one to two CPU clock cycles, if data can be provided sufficiently fast. As a result, the AES throughput can be significantly enhanced for parallel
modes of operation, if the software implementation itself is pipelined. Instead of completing the encryption of one data block and then continuing to the subsequent block, it is preferable to write
software sequences that compute one AES round on multiple blocks, using one round key, and only then continue to compute the subsequent round for multiple blocks. This technique speeds up any
parallelizable mode of operation, in particular the CTR mode. Listing 3 shows a code snippet encrypting eight blocks in parallel as part of the CTR mode (where the counters are encrypted).
mov rdx, OFFSET keyex_addr
; load Round key
movdqu xmm0, XMMWORD PTR [rdx]
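; (assumed context: xmm1-xmm8 already hold the eight counter blocks; the pxor
;  instructions below xor in the round-0 whitening key)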
pxor xmm1, xmm0
pxor xmm2, xmm0
pxor xmm3, xmm0
pxor xmm4, xmm0
pxor xmm5, xmm0
pxor xmm6, xmm0
pxor xmm7, xmm0
pxor xmm8, xmm0
mov ecx, 9
main_loop:
; load next Round key
add rdx, 0x10
movdqu xmm0, XMMWORD PTR [rdx]
aesenc xmm1, xmm0
aesenc xmm2, xmm0
aesenc xmm3, xmm0
aesenc xmm4, xmm0
aesenc xmm5, xmm0
aesenc xmm6, xmm0
aesenc xmm7, xmm0
aesenc xmm8, xmm0
loop main_loop
add rdx, 0x10
movdqu xmm0, XMMWORD PTR [rdx]
aesenclast xmm1, xmm0
aesenclast xmm2, xmm0
aesenclast xmm3, xmm0
aesenclast xmm4, xmm0
aesenclast xmm5, xmm0
aesenclast xmm6, xmm0
aesenclast xmm7, xmm0
aesenclast xmm8, xmm0
; storing the encrypted blocks
movdqu XMMWORD PTR [dest], xmm1
movdqu XMMWORD PTR [dest+0x10], xmm2
movdqu XMMWORD PTR [dest+0x20], xmm3
movdqu XMMWORD PTR [dest+0x30], xmm4
movdqu XMMWORD PTR [dest+0x40], xmm5
movdqu XMMWORD PTR [dest+0x50], xmm6
movdqu XMMWORD PTR [dest+0x60], xmm7
movdqu XMMWORD PTR [dest+0x70], xmm8
High Performance Implementation of Galois Counter Mode
We now examine how GCM can be efficiently computed by using the PCLMULQDQ instruction, in combination with some improved algorithms.
The most compute-intensive part of GCM is the computation of the Galois hash, which is multiplication in the finite field GF(2^128), defined by the reduction modulo g = x^128 + x^7 + x^2 + x + 1. The
multiplication in this field is carried out in two steps: the first step is the carry-less multiplication of two 128-bit elements, and the second step is the reduction of the 256-bit carry-less
product modulo g = x^128 + x^7 + x^2 + x + 1. We explain these steps in the rest of this section.
Computing a 256-bit Carry-less Product with the PCLMULQDQ Instruction
The following algorithm steps can be viewed as one iteration of a carry-less schoolbook multiplication:
1. Carry-less multiply the following operand pairs: A0 with B0, A1 with B1, A0 with B1, and A1 with B0. Let the results of the above four multiplications be A0 * B0 = [C1 : C0], A1 * B1 = [D1 : D0], A0 * B1 = [E1 : E0], A1 * B0 = [F1 : F0].
2. Construct the 256-bit output of the multiplication [A1 : A0] * [B1 : B0] as follows in Equation 5:

[A1 : A0] * [B1 : B0] = [D1 : D0 ⊕ E1 ⊕ F1 : C1 ⊕ E0 ⊕ F0 : C0]   (Equation 5)
One can also trade off one multiplication for additional XOR operations. This 2-step alternative approach can be viewed as a "carry-less Karatsuba" multiplication [9]:
1. Carry-less multiply the following operand pairs: A1 with B1, A0 with B0, and A0 ⊕ A1 with B0 ⊕ B1. Let the results of the above three multiplications be [C1 : C0], [D1 : D0], and [E1 : E0], respectively.
2. Construct the 256-bit output of the multiplication [A1 : A0] * [B1 : B0] as follows in Equation 6:

[A1 : A0] * [B1 : B0] = [C1 : C0 ⊕ C1 ⊕ D1 ⊕ E1 : D1 ⊕ C0 ⊕ D0 ⊕ E0 : D0]   (Equation 6)
Both methods can be used for the first step of the computation of the Galois hash. | {"url":"http://www.drdobbs.com/security/new-processor-instructions-for-accelerat/219400209?pgno=2","timestamp":"2014-04-19T00:27:56Z","content_type":null,"content_length":"95803","record_id":"<urn:uuid:42fbe151-ae1a-4781-b983-54d2f9f44f28>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00201-ip-10-147-4-33.ec2.internal.warc.gz"} |
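To see why the two constructions agree, here is a small Python model (a hypothetical illustration only; the function names are ours, and real implementations use the PCLMULQDQ instruction itself rather than the bit loop below):

import random

def clmul64(a, b):
    # Carry-less (GF(2)[x]) product of two 64-bit operands, up to 127 bits;
    # this is what a single PCLMULQDQ invocation computes.
    r = 0
    for i in range(64):
        if (b >> i) & 1:
            r ^= a << i
    return r

def schoolbook(a1, a0, b1, b0):               # Equation 5: four multiplications
    c, d = clmul64(a0, b0), clmul64(a1, b1)
    e, f = clmul64(a0, b1), clmul64(a1, b0)
    return (d << 128) ^ ((e ^ f) << 64) ^ c

def karatsuba(a1, a0, b1, b0):                # Equation 6: three multiplications
    c, d = clmul64(a1, b1), clmul64(a0, b0)
    e = clmul64(a0 ^ a1, b0 ^ b1)
    return (c << 128) ^ ((e ^ c ^ d) << 64) ^ d

ops = [random.getrandbits(64) for _ in range(4)]
assert schoolbook(*ops) == karatsuba(*ops)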
How to reconcile Gödel's theorem with the completeness of the Predicate Calculus?
In Mendelson's Introduction to Mathematical Logic, the proof of Gödel's Theorem for S (his axiomatic arithmetic) goes via proving that a sentence that can be interpreted as "This statement has no
proof in S" cannot be proved either false or true in S, if S is consistent.
According to the completeness of the Predicate Calculus, any logically valid wf of a theory K is a theorem of K. The statement I interpret as "This statement has no proof in S" cannot be proved
either false or true, so I presume that there are models of arithmetic (or rather of axiom system S) in which it is false, and models in which it is true. Is this correct?
A model of arithmetic in which it is true seems sane enough. Are there models of S in which a proof of what is interpreted as "This statement has no proof in S" turns up as some sort of non-standard
number, or have I got completely confused? I have a vision in my mind of a non-standard number encoding "1 is not a proof of S. 2 is not a proof of S. 3 is not a proof of S...." or in some other way
satisfying the equation that asserts that X is a proof of Y, if not the mathematician posing the equation :-)
A model of arithmetic in which the Gödel sentence "I am unprovable" is false is necessarily a non-standard model. It contains an infinite element which satisfies, in the model, the formula expressing the property of being a proof of the Gödel sentence --- a formula that is not satisfied by any standard natural number (not even in a non-standard model).
You can type o with an umlaut as ö but not in a comment, apparently :( – Nate Eldredge Sep 27 '10 at 21:22
(ö)/ There's no reason to be diacritical. – Eric Tressler Sep 27 '10 at 23:28
Vishwanath Research
Research Interests
Theoretical Physics – Quantum Condensed Matter. I am interested in systems of many quantum particles, where strong interactions lead to new states of matter. These new states can potentially be
realized in experimental systems ranging from strongly correlated materials to dilute atomic gases confined to optical lattices.
Current Projects
Fractionalization: Conventional theories of electronic matter assume that the electron is a well defined excitation. However, materials like the high temperature cuprate superconductors, frustrated
magnets and heavy fermion systems where interactions between electrons are particularly important, show many phenomena that are strikingly unconventional. This led to the radical proposal that the
electron breaks apart, or fractionalizes, in such systems. This idea could potentially explain many of these anomalies, although unambiguous experimental evidence for this is still lacking.
Fractionalization is found to go hand in hand with emergent gauge fields. One of the attractive features of deconfined states is that they can naturally lead to dimensional confinement – excitations
can be confined to planes or chains of a three-dimensional system - which could explain phenomena seen in different materials.
Unconventional Quantum Phase Transitions: Recently, it has become clear that quantum phase transitions can also exhibit fractionalization, although the phases on either side of the transition are
perfectly conventional. Remarkably, this allows for a (generically) continuous transition between states of very different symmetry, e.g. a superfluid, and a crystal. Such transitions are forbidden
according to Landau’s theory of classical phase transitions, and appear here due to the presence of quantum interference effects. I will be pursuing this exciting new development, which may be the
key to understanding certain puzzling quantum phase transitions seen in heavy fermion systems.
Fluctuating Superconductivity: When superconductivity is destroyed by thermal or quantum fluctuations – its presence may still be felt in different ways, for example in anomalies in the electrical or
thermal conduction properties of the system, or in more subtle signatures such as the Nernst effect. We have worked on establishing the universal signatures in thermal resistivity and thermopower, as
well as current noise near quantum phase transitions out of a superconductor. Thermally fluctuating superconductors have also been studied, in particular a theory based on fluctuating vortex degrees
of freedom is found to agree well with Nernst experiments on the cuprates.
Quantum Magnetism: Frustrated quantum magnets offer perhaps the simplest setting where novel many body effects could occur. We have worked on a number of problems in this area, from characterizing
the different spin liquid states possible on frustrated lattices such as the triangular, Kagome and the newly discovered hyper-Kagome lattice, to studying models to explain experimental data in
specific materials. Another interesting class of problems occurs in metallic helimagnets such as MnSi, where fluctuating magnetic spirals give rise to unusual metallic behavior that poses a major
theoretical challenge.
Cold Atomic Gases: I am currently interested in spinor condensates: Bose-Einstein condensates of particles with spin. We have found novel magnetic phases, stabilized (paradoxically) by quantum or
thermal fluctuations; and new kinds of phase transitions. An important set of issues here is how one may probe these exciting new systems – I have worked on noise measurements as a probe of Luttinger
liquid physics in cold atom systems, and dynamics across the BCS-BEC crossover, to probe Cooper pair formation.
Selected Publications
A. Turner, R. Barnett, E. Demler and Ashvin Vishwanath. “Nematic Order by Disorder in Spin-2 BECs”. Phys. Rev. Lett. 98, 190404 (2007).
B. Binz, A. Vishwanath, and V. Aji. “Theory of the helical spin crytal: a candidate for the partially ordered state of MnSi ” Phys. Rev. Lett. 96, 207202 (2006).
Fa Wang and A. Vishwanath “Spin Liquid States on the Triangular and Kagome Lattices: A PSG Analysis of Schwinger Boson States” Phys. Rev. B 74, 174423 (2006)
Daniel Podolsky, Srinivas Raghu, Ashvin Vishwanath. “Nernst effect and diamagnetism in phase fluctuating superconductors” cond-mat/0612096. Submitted to Phys. Rev. Lett.
T. Senthil, Ashvin Vishwanath, Leon Balents, Subir Sachdev, M. P. A. Fisher “Deconfined Quantum Critical Points” Science 303, 1490 (2004).
O. I. Motrunich and Ashvin Vishwanath, “Emergent Photons and New Transitions in the O(3) Sigma Model with Hedgehog Suppression” Phys. Rev. B 70, 075104 (2004).
Ashvin Vishwanath, “Quantized Thermal Hall Effect in the Mixed State of d-Wave Superconductors”. Phys. Rev. Lett. 87, 217004 (2001). | {"url":"http://www.physics.berkeley.edu/research/faculty/vishwanath.html","timestamp":"2014-04-19T04:41:09Z","content_type":null,"content_length":"9073","record_id":"<urn:uuid:92b8e51c-949e-4377-9406-300fab174ceb>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |